luo-etal-2023-prototype
Prototype-Based Interpretability for Legal Citation Prediction
https://aclanthology.org/2023.findings-acl.301
Deep learning has made significant progress in the past decade, and demonstrates potential to solve problems with extensive social impact. In high-stakes decision making areas such as law, experts often require interpretability for automatic systems to be utilized in practical settings. In this work, we attempt to address these requirements applied to the important problem of legal citation prediction (LCP). We design the task with parallels to the thought-process of lawyers, i.e., with reference to both precedents and legislative provisions. After initial experimental results, we refine the target citation predictions with the feedback of legal experts. Additionally, we introduce a prototype architecture to add interpretability, achieving strong performance while adhering to decision parameters used by lawyers. Our study builds on and leverages the state-of-the-art language processing models for law, while addressing vital considerations for high-stakes tasks with practical societal impact.
# Prototype-Based Interpretability For Legal Citation Prediction

Chu Fei Luo∗1,2, Rohan Bhambhoria∗1,2, Samuel Dahan2,3, and Xiaodan Zhu1,2

1Department of Electrical and Computer Engineering & Ingenuity Labs Research Institute, Queen's University 2Conflict Analytics Lab, Queen's University 3Cornell Law School

{14cfl,r.bhambhoria,samuel.dahan,xiaodan.zhu}@queensu.ca

∗ Equal contribution.

## Abstract

Deep learning has made significant progress in the past decade, and demonstrates potential to solve problems with extensive social impact. In high-stakes decision making areas such as law, experts often require interpretability for automatic systems to be utilized in practical settings. In this work, we attempt to address these requirements applied to the important problem of legal citation prediction (LCP). We design the task with parallels to the thought-process of lawyers, i.e., with reference to both precedents and legislative provisions. After initial experimental results, we refine the target citation predictions with the feedback of legal experts. Additionally, we introduce a prototype architecture to add interpretability, achieving strong performance while adhering to decision parameters used by lawyers. Our study builds on and leverages the state-of-the-art language processing models for law, while addressing vital considerations for high-stakes tasks with practical societal impact.

## 1 Introduction

Deep learning has made significant progress in the past decade. Researchers have begun applying state-of-the-art methods to problems with extensive social impact, which in turn brings about many critical challenges for these deep learning models. In high-stakes problems, it is essential to understand whether models follow the same reasoning as domain experts or practitioners, and if not, why these models arrive at their final outcomes (Arrieta et al., 2020). Decisions, in other words, often require human comprehensibility for experts to validate and trust their correctness (Belle and Papantonis, 2021). The field of law is one such high-stakes domain where decision making processes often have a significant societal impact. In this paper, we study a key problem, citation prediction, which aims to predict the most fitting legal citation for a text passage. Legal citation (Paul et al., 2022; Xu et al., 2020) is a fundamental component of a lawyer's argument construction process. Unlike in other domains, citations in law are not just used to reference previous works. They serve to indicate "the nature of the authority upon which a statement is based" (Axel-Lute, 1982). Court rules go so far as to authorize judges to reject arguments that are not supported by cited authority, and lawyers who appeal on the basis of arguments for which they have cited no authority can be sanctioned (Martin, 2020). For this reason, any system built to address this high-stakes task must also present evidence to properly support the quality of legal argumentation. Existing works on legal citation-based tasks (Paul et al., 2022) do not adhere to the thought-process employed by lawyers, leading to discrepancies or overlap with existing tasks such as legal judgment prediction. Specifically, existing works primarily predict legal citations solely from the facts of a case. While facts are important to understand a situation, lawyers often develop a legal argument by translating facts into an abstract legal problem first. 
In other words, citations draw from legal reference to *strengthen the lawyer's position* in court. Many citations are not made to ascertain the nature of the violation. They are instead decided by an understanding of the legal issue the lawyer wants to address, as well as how it relates to prior literature and provisions available in the law. In this research, we propose a new definition for this task called **Legal Citation Prediction (LCP)**. This new approach for citation prediction, illustrated in Figure 1 and described in Section 3.1, aims to mimic a lawyer's reasoning by providing prior literature, hereby referred to as **precedents**, and provisions of legislature, **provisions**, as input. To address the critical issue of interpretability, we extend a prototype-based architecture to better serve the requirements of this task (Zhang et al., 2021). Utilizing prototypes allows the model to formulate representative examples of each citation and draw similarities to legal references. To the best of our knowledge, this work represents the first attempt to use prototype-based interpretability for language model tuning towards a high-stakes legal task. The main contributions of this work include: - Defining a new task of Legal Citation Prediction (LCP), enhanced with feedback from legal experts. We compare performance before and after factoring in legal significance, illustrating the importance and benefits of multidisciplinary collaboration. - Implementing a prototype architecture that approaches this task with the goal of making the model interpretable. It is the first time that this method is being used in a legal application. We introduce a modified loss component which is capable of strengthening parallels to a lawyer's thought-process. - Conducting a thorough analysis on the practicality of this task, providing empirical evidence comparing utility to performance. Additionally, we conduct input perturbations to analyze the learned latent space. As the work is a joint effort of NLP researchers and law practitioners, we also hope it reflects and contributes to learning how machine/deep learning may be more safely deployed in high-stakes fields. ## 2 Related Work Citation Prediction The task of citation prediction has been extensively studied in academic literature. For scholarly works, citations serve as a method of information search; this is valuable in identifying trends in a given area of research (Yu et al., 2012). The citation frequency of documents can serve as a proxy of that work's influence, which allows for statistical analysis of the trends in the broader community (Hou et al., 2019). Citation networks are also used in industry to measure adoption of academic methods (Kim et al., 2016), and in the public sector for allocation of funds by governmental organizations (Leydesdorff et al., 2019). Most works formulate citation prediction as link prediction on the citation network (Yu et al., 2012; Liu et al., 2019a), leveraging semantic information from the document texts as well as metadata such as authors and venue (Shibata et al., 2012). For legal citation tasks, prior research uses academic citation prediction or information retrieval techniques, which we refer to as Legal Citation Recommendation (Huang et al., 2021; Dadgostari et al., 2021). Huang et al. (2021), in particular, explored limiting the context given as input to improve the performance of their works. 
Although we derive insights from Legal Citation Recommendation research, these studies do not utilize either prior literature or provisions of legislation. Another approach is followed by Sadeghian et al. (2018), where the authors construct a heterogeneous citation network from a legal corpus and attempt to predict links as the purpose of a citation. Other works (Xu et al., 2020; Yang et al., 2019; Wang et al., 2019) also consider legal citation prediction as a type of Legal Judgment Prediction task, where they identify statutes violation from facts as a proxy to the final judgment of the case. We refer to this formulation of the task as Legal Statute Identification (LSI) based on previous work (Paul et al., 2022). Formally, it is either a multi-label document classification task or an inductive link prediction task that predicts statutes on the basis of the facts of a situation. However, while the facts are crucial to formulate a legal argument, citations serve primarily to support a lawyer's arguments. As such, they should not be treated as indicative of the final judgment. We adopt a multi-label classification setting in this work, but we do not restrict the input to the facts of a case. Interpretability One significant challenge that has been insufficiently addressed in legal citation work is interpretability. Interpretability is a key requirement of AI systems applied to law, as argued in previous studies (Górski and Ramakrishna, 2021). Fulfilling interpretability requirements increases trust in machine learning systems. This, in turn, fosters broader adoption and stimulates further development of applied AI research (Rudin, 2019). In the legal field, there are many problems and tasks that can benefit from NLP models (Zhong et al., 2020b). These include legal decision making (Bhambhoria et al., 2022), judgment prediction (Zhong et al., 2020a), and similar charge disambiguation (Liu et al., 2021). In general, there are many techniques to explain or interpret a model *post-hoc*, i.e. after training (Ribeiro et al., 2016; Covert et al., 2020). However, these methods can be unfaithful to the original model and insufficient for specialized, high-stakes tasks (Luo et al., 2022; Jin et al., 2022). Consequently, it is important to integrate interpretability into the model's architecture during the training process with *ante-hoc* methods. Bhambhoria et al. (2022), for example, explored inherent interpretability in a legal decision making task, which enabled lawyers to contribute to model design from an early stage. Very few works have applied stateof-the-art models to legal citation prediction (Paul et al., 2022; Xu et al., 2020), and none have proposed solutions for inherent interpretability. Prototype-based models are successful in various settings, such as few-shot learning, but many works have adapted them to be used for interpretability in both NLP and computer vision (Zhang et al., 2021; Chen et al., 2019). Among existing antehoc techniques is ProtGNN (Zhang et al., 2021), a graph-based architecture that makes predictions based on similarity to "prototypes", i.e. representative training samples for each target label. In this work, we adapt the prototype architecture to provide interpretability for legal citation prediction. ## 3 Prototype-Based Legal Citation Prediction We propose a prototype-based architecture to address the task of Legal Citation Prediction. The full task is described in Section 3.1, and our architecture is shown in Figure 2. 
Prototype-based architectures add interpretability by basing their predictions on similarity to "prototypes", which are representative training samples for each target label. Our proposed architecture enhances interpretability and grounds the model decision in a lawyer's thought process via a customized loss function that considers **precedents** and **provisions** as comparison points. For our LCP task, we use a combination of automatically discovered prototypes for precedents and manually chosen prototypes for provisions. We use vanilla fine-tuning as a baseline, where we append a linear classification head to a pre-trained language model and fine-tune all parameters with multi-class cross-entropy loss, only using the case text as input. At this stage, we also explore performance trade-offs of task configurations such as limiting text spans and the number of target labels. We also incorporate expert feedback from initial results to adjust the experiment parameters. Then, we fine-tune from the base pre-trained model while adding our prototype-based custom loss objective, encouraging an interpretable latent space organized around precedents and provisions. We explore performance trade-offs and the learned embedding space through perturbations.

## 3.1 Defining Legal Citation Prediction

We define Legal Citation Prediction (LCP) as a multi-label classification based on the input text, as well as the provisions and precedents for our target citations. For a passage of text x ∈ X, and a set of possible target citation labels L, where |L| = n, the goal is to predict the subset of appropriate target citations y ∈ L, represented as an n-dimensional vector. A critical motivation for LCP is the thought process lawyers undertake when finding citations for their work. Similar to scholarly citations, lawyers attempt to find the most relevant precedent to strengthen their own arguments (Savelka and Ashley, 2021). When lawyers present a legal argument and judges write opinions, they package their interpretation of what the law is and how it should be applied to a given situation. During this process, lawyers make reference to statutes, regulations, court rules, as well as prior appellate decisions they believe to be pertinent and supporting. We define **provisions** as pieces of written law, such as statutes, regulations, and court rules, and **precedents** to be prior appellate decisions made in court. Various legal systems apply provisions and precedents in varying scales of importance. For example, common law is distinguished by its use of prominent appellate decisions, also known as caselaw. These two components have distinct characteristics, in the same way the definition of a word and its appearance in a sentence can inform the word's usage in different ways. Existing works have strictly used prior appellate decisions, or strictly used the source text, but we theorize making both available can provide more context to citation prediction. In addition to the input text x, we also make available the target provisions to be cited, *Provis*, and relevant precedents, *Preced*, taken to be other text passages with the same target citation in the training set. We automatically sample representative precedents in this work, described in Section 3.2.
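To make the task definition concrete, the following is a minimal sketch of the vanilla fine-tuning baseline described above: a pre-trained encoder with a linear classification head over the n target citations, trained with a binary cross-entropy objective on the multi-hot label vectors. This is an illustrative sketch, not the authors' released code; the LegalBERT checkpoint name, tokenisation settings, and training loop details are assumptions.

```python
# Illustrative sketch (not the authors' code): vanilla fine-tuning for multi-label LCP.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class VanillaLCP(nn.Module):
    def __init__(self, model_name="nlpaueb/legal-bert-base-uncased", num_labels=20):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]          # [CLS] embedding, i.e. f(x)
        return self.head(cls)                      # one logit per target citation

tokenizer = AutoTokenizer.from_pretrained("nlpaueb/legal-bert-base-uncased")
model = VanillaLCP(num_labels=20)
criterion = nn.BCEWithLogitsLoss()                 # multi-label objective over the n targets

def training_step(texts, labels):
    """labels: float tensor of shape (batch, num_labels) with 0/1 entries."""
    batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
    logits = model(batch["input_ids"], batch["attention_mask"])
    return criterion(logits, labels)
```

The prototype-based architecture introduced next keeps this encoder but replaces the plain linear head with a similarity-based classification over discovered prototypes.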
## 3.2 Training Prototypes

The prototype construction process is described in Algorithm 1. First, we encode the entire training set into the latent space. Then, for each target provision in the legislation, we find the subset of training samples that contain a citation to that text, denoted by $X_l$. We then cluster $X_l$ with k-means, taking the centroids as prototype candidates $C_l$. For each candidate, we try to locate the closest training sample $x_{l,j}$ by cosine similarity. If the similarity exceeds a chosen threshold $s_{min}$, we update the candidate to match the embedding of $x_{l,j}$. Otherwise, we take the candidate as the final prototype to be used in training.

**Algorithm 1** Prototype formulation algorithm for prototypes $p_j \in P$, and target citation labels $l_j \in L$. $E_{LM}$ denotes a language model encoder, $\cos$ is cosine similarity, and Cluster is any clustering algorithm that can produce centroids $C$.

**Require:** Training dataset $\{x_{i},y_{i}\}_{i=1}^{n}\in X$, possible cited provisions $l\in\mathrm{L}$
1: Encode all samples $f(x_{i})=E_{LM}(x_{i})[0]$
2: **for** $l\in\mathrm{L}$ **do**
3: $X_{l}=x_{i}\in X\mid y_{i}(l)=1$
4: $C_{l}=\mathrm{Cluster}(f(X_{l}))$
5: **for** $c_{l,j}\in C_{l}$ **do**
6: $x_{l,j}=\mathrm{argmin}_{x\in X_{l}}\cos(f(x),c_{l,j})$
7: **if** $\cos(f(x_{l,j}),c_{l,j})>s_{min}$ **then**
8: $p_{j}=f(x_{l,j})$
9: **else**
10: $p_{j}=c_{l,j}$
11: **end if**
12: **end for**
13: **end for**

For a batch of $n$ samples, with input embeddings $f(x)$ for input $x$, and a feed-forward classification head $c$, the loss is denoted by $\mathcal{L}$ in Equation 1. The input embedding $f(x)$ is taken as the CLS token, and similarity scores $\mathcal{S}$ are calculated as described in Equation 2. We add L2 normalization to the distance metric from previous work (Zhang et al., 2021), where $p_k$ is the prototype, $h$ is the embedding output of $f(x_i)$, and $\epsilon$ is a regularizing weight. Also, we use the standard binary cross-entropy loss for multi-label classification.

$$\mathcal{L}=\frac{1}{n}\sum_{i=1}^{n}\text{BCELoss}\left(c\circ\mathcal{S}\circ f\left(x_{i}\right),y_{i}\right)+\lambda\,\mathcal{D}_{preced}+\delta\,\mathcal{D}_{provis}\tag{1}$$

$$\mathcal{S}=\operatorname{sim}\left(p_{k},h\right)=\log\left(\frac{\left\|p_{k}-h\right\|_{2}^{2}+1}{\left\|p_{k}-h\right\|_{2}^{2}+\epsilon}\right)\tag{2}$$

All precedents are represented in the loss as $\mathcal{D}_{preced}$, with a similar formulation to the previous work (Zhang et al., 2021). In this work, we extend the loss by adding a new term, $\mathcal{D}_{provis}$, to represent legislation provisions. We also establish the existing loss terms with coefficients of $\lambda$ to represent precedents, as shown in Equation 3.

$$\begin{aligned}\mathcal{D}_{preced}=\;&\lambda_{1}\frac{1}{n}\sum_{i=1}^{n}\min_{j:p_{j}\in P_{y_{i}}}\|f\left(x_{i}\right)-p_{j}\|_{2}^{2}\\&+\lambda_{2}\Big(-\frac{1}{n}\sum_{i=1}^{n}\min_{j:p_{j}\notin P_{y_{i}}}\|f\left(x_{i}\right)-p_{j}\|_{2}^{2}\Big)\\&+\lambda_{3}\sum_{k=1}^{C}\sum_{\substack{i\neq j\\p_{i},p_{j}\in P_{k}}}\max\left(0,\cos\left(p_{i},p_{j}\right)-s_{\max}\right)\end{aligned}\tag{3}$$

$$\mathcal{D}_{provis}=\frac{1}{n}\sum_{i=1}^{n}\min_{j:d_{j}\in D_{y_{i}}}\|f\left(x_{i}\right)-d_{j}\|_{2}^{2}\tag{4}$$

The $\lambda_1$ term is used to encourage embeddings to move closer to a prototype cluster of their class, where $f(x_i)$ is the embedding representation of the input $x_i$, $p_j \in P_{y_i}$ is the set of prototypes belonging to all provisions of legislation cited in $y_i$, and $n$ is the batch size. In contrast, the $\lambda_2$ term encourages embeddings to move away from prototypes of different classes, scored similarly to $\lambda_1$ but using $p_j \notin P_{y_i}$. The $\lambda_3$ term moves prototypes of the same class further away from each other, penalizing prototypes that have a cosine similarity above a threshold $s_{max}$. $\mathcal{D}_{provis}$ is defined similarly to the $\lambda_1$ term, as shown in Equation 4, and serves a similar purpose of encouraging input embeddings to be closer to the provision source text embedding.
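To make the objective concrete, the following is a minimal, illustrative sketch of the prototype discovery step (Algorithm 1) and of the loss terms in Equations 2-4. It is not the authors' released code: the clustering library (scikit-learn in place of the PyKeops k-means mentioned in Appendix A.1), the tensor shapes, and the `s_max` and `eps` values are assumptions.

```python
# Illustrative sketch (not the authors' code) of Algorithm 1 and of Equations 2-4.
# Shapes: embeddings (N, d), labels (N, n_labels) multi-hot matrix.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans   # stand-in for the PyKeops k-means used in the paper


def discover_prototypes(embeddings, labels, k=5, s_min=-1.0):
    """For every target provision, cluster its citing cases and snap each centroid to the
    closest real training example when the cosine similarity exceeds s_min."""
    prototypes, proto_labels = [], []
    for l in range(labels.shape[1]):
        X_l = embeddings[labels[:, l] == 1]             # samples citing provision l
        if len(X_l) < k:
            continue
        centroids = torch.as_tensor(
            KMeans(n_clusters=k, n_init=10).fit(X_l.numpy()).cluster_centers_,
            dtype=embeddings.dtype,
        )
        for c in centroids:
            sims = F.cosine_similarity(X_l, c.unsqueeze(0))
            j = sims.argmax()                           # closest training sample by cosine similarity
            prototypes.append(X_l[j] if sims[j] > s_min else c)
            proto_labels.append(l)
    return torch.stack(prototypes), torch.tensor(proto_labels)


def proto_similarity(h, protos, eps=1e-4):
    """Eq. 2: log((||p - h||^2 + 1) / (||p - h||^2 + eps)) for every (input, prototype) pair."""
    d2 = torch.cdist(h, protos) ** 2                    # squared L2 distances, (batch, n_prototypes)
    return torch.log((d2 + 1.0) / (d2 + eps))


def prototype_loss(h, protos, proto_labels, y, provis_emb,
                   lambdas=(0.10, 0.0005, 0.001), s_max=0.3):
    """D_preced (Eq. 3) and D_provis (Eq. 4). y: (batch, n_labels) multi-hot targets;
    provis_emb: (n_labels, d) encoded provision source texts. Assumes every sample
    cites at least one label that has prototypes."""
    l1, l2, l3 = lambdas
    inf = torch.tensor(float("inf"))
    d2 = torch.cdist(h, protos) ** 2                    # (batch, n_prototypes)
    same = y[:, proto_labels].bool()                    # does prototype j belong to a cited provision?
    clst = torch.where(same, d2, inf).min(dim=1).values.mean()   # pull toward own prototypes
    sep = torch.where(~same, d2, inf).min(dim=1).values.mean()   # push away from other prototypes
    div = 0.0                                           # penalise same-class prototypes that collapse
    for l in proto_labels.unique():
        P_l = protos[proto_labels == l]
        cos = F.cosine_similarity(P_l.unsqueeze(1), P_l.unsqueeze(0), dim=-1)
        cos = cos - torch.eye(len(P_l))                 # ignore self-similarity on the diagonal
        div = div + torch.clamp(cos - s_max, min=0).sum()
    d_preced = l1 * clst - l2 * sep + l3 * div
    dp2 = torch.cdist(h, provis_emb) ** 2               # distances to provision source-text embeddings
    d_provis = torch.where(y.bool(), dp2, inf).min(dim=1).values.mean()
    return d_preced, d_provis
```

The total objective of Equation 1 is then the binary cross-entropy of the classification head applied to `proto_similarity(h, protos)`, plus `λ·d_preced + δ·d_provis`; Appendix A.1 reports λ1 = 0.10, λ2 = 0.0005, λ3 = 0.001 and δ = 0.10.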
## 4 Experimental Settings

## 4.1 Encoder Model Selection

Preliminary experiments are summarized in Table 2. Please refer to Appendix A.2 for model details. We use *vanilla fine-tuning* for all preliminary experiments. With just the case text as input, we append a linear classification head to a pre-trained language model (depicted as $E_{LM}$ in Figure 2), and fine-tune all parameters with multi-class cross-entropy loss. At this stage, we also explore performance trade-offs of task configurations such as limiting text spans and the number of target labels. We choose LegalBERT for our architecture as it outperforms RoBERTa and has equivalent performance to Longformer for lower computational cost.

Table 2: Preliminary results (Macro-/Micro-F1) of vanilla fine-tuning with different encoders and numbers of target labels.

| Model | 5 labels Macro-F1 | 5 labels Micro-F1 | 20 labels Macro-F1 | 20 labels Micro-F1 | 100 labels Macro-F1 | 100 labels Micro-F1 |
|---|---|---|---|---|---|---|
| RoBERTa | 51.6 | 49.5 | 55.0 | 59.6 | 34.4 | 45.2 |
| LegalBERT | 54.4 | 49.4 | 55.1 | 60.7 | 32.9 | 44.8 |
| Longformer | 53.6 | 51.3 | 56.0 | 58.8 | 33.7 | 44.9 |

## 4.2 Dataset

We built a dataset of court opinions, hereby referred to as the PACER dataset, constructed from United States federal court documents curated by the Free Law Project1. This data has been used in previous works (Dadgostari et al., 2021). However, we download and preprocess the data from scratch. These documents are derived from 1276 jurisdictions in the United States of America, ranging from the federal Supreme Court to local district or municipal courts. We downloaded the files via the Free Law Project's CourtListener bulk API on February 10, 2022. For this work, we focus on predicting provisions of the U.S. Code. The provision source text associated with each target citation is retrieved from the Legal Information Institute (LII) maintained by Cornell Law School2. The gold labels are automatically extracted from the text via regex. We remove documents that do not contain any U.S. Code citations, and filter for documents that contain at least one of the top 100 most frequently cited subsections, for a total of 175,741 documents. Each opinion cites an average of 3.02 U.S. Code provisions, and each provision has an average of 5308.51 citing opinions. This is divided into an 80:5:15 train, validation, and test split ratio. The dataset label distribution is long-tailed, with the most frequently cited U.S. code appearing more than twice as often as the next. The label imbalance can be observed in Figure 3.

1 https://free.law/
2 https://www.law.cornell.edu/
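As noted above, gold citation labels are extracted from the opinions with regular expressions and normalised to the "[Title] § [Section] [(Subsection)]" format used throughout the paper. The sketch below is a hypothetical illustration of such an extraction step; the authors do not publish their exact pattern, so this regex is an assumption and covers only the most common citation forms.

```python
# Hypothetical sketch of regex-based U.S. Code citation extraction (not the authors' exact pattern).
import re

US_CODE_PATTERN = re.compile(
    r"(\d+)\s+U\.?\s?S\.?\s?C\.?(?:\s*(?:§|Sec(?:tion)?\.?))?\s*(\d+[A-Za-z]?)\s*(\([a-zA-Z0-9]\))?"
)

def extract_citations(text):
    """Return a set of normalised citation labels found in a court opinion."""
    labels = set()
    for title, section, subsection in US_CODE_PATTERN.findall(text):
        label = f"{title} § {section}"
        if subsection:
            label += f" {subsection}"
        labels.add(label)
    return labels

print(extract_citations("Plaintiff seeks fees under 42 U.S.C. § 1988(b) and 28 U.S.C. § 1331."))
# e.g. {'42 § 1988 (b)', '28 § 1331'}
```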
**Expert Feedback** We sought to obtain feedback on the PACER dataset and prototype discovery from legal experts. After conducting preliminary experiments, further described in Section 4.1, we performed one iteration of our prototype discovery process on the best-performing checkpoints. We encoded the train set with RoBERTa (Liu et al., 2019b), clustered the embeddings, then found example cases in the training set closest to the cluster centroids by cosine similarity. We then showed four examples to three legal experts, including the source text of the provision and the full court opinion, and asked them to rate the relevance of the provision citation to the original case. The legal experts were asked to rate each example on a scale of 1 to 3, where 3 is highly relevant and 1 is completely irrelevant, and the results are summarized in Table 1. Overall, the average rating of the four examples was 1.5, i.e., the relevance of the prototypes is relatively low. Responses for one example are shown in Table 7 of Appendix C. The legal experts stated in a follow-up interview that this was due to our choice of target citations L. We chose frequency of citation as an indication of importance, but citations serve different purposes, as mentioned in previous work (Sadeghian et al., 2018). Other legal citation tasks manually choose prediction targets based on significance (Paul et al., 2022), or use citations that serve a specific purpose, like caselaw (Dadgostari et al., 2021). Many of the citations we automatically targeted were procedural; they define proper procedures in court proceedings, such as appropriate legal fees, but are not relevant to the legal argument being made. The lawyers gave low ratings because the citations were not valuable prediction targets. With the assistance of a legal expert, we manually removed procedural citations from the top 100 automatically chosen targets. A second expert was consulted to validate the filtering and sort ambiguous categories. We kept definitions as relevant, since they do not form a legal basis but are important to building an argument. We considered government regulations, such as allowance administration expense, to be adjacent to procedural citations and removed them as well. Of the top 100 citations, 55 (55%) of them are procedural; of the top 20, 15 (75%) are procedural.

## 4.3 Surrounding Text Span Context

The definition of the Legal Citation Prediction task implicitly suggests we are only concerned with the portions of a document that are relevant to the citation. This differs from focusing on the course of events that led to a potential violation of a law. Also, the 512 token limit in some of our pre-trained language models like RoBERTa is a significant limitation when parsing longer legal documents. We theorize that the surrounding context within a document is more important for a citation, which may contain opinions and arguments alongside facts of the situation, as discussed in previous works (Yu et al., 2012). To address this issue in our work, we filter document sentences by the *surrounding context* of our target citations, i.e. taking n sentences before and after a citation sentence. For example, ±2 implies that for a sentence of the input that contains a citation, $s_c \in x_i$, we retain the 2 sentences before and after in the sequence $\{..., s_1, s_2, s_c, s_3, s_4, ...\}$, resulting in 5 sentences total. Table 6 in Appendix B illustrates how this preprocessing significantly reduces document length. With ±2 context, the mean document length is below the token limit. We do not always classify all 100 target citations. Some experiments remove labels to handle dataset imbalance, and others to reflect the feedback of legal experts, as described in Section 4.2. In the case where a document does not contain any of the target citations, we randomly sample sentences until there is a minimum of 15 selected.
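A minimal sketch of this surrounding-context filter (±n sentences around each citation sentence, with the random-sampling fallback just mentioned) is given below. The ±n window and the minimum of 15 sentences come from the text; the sentence splitter and the citation test are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code) of the ±n surrounding-context filter from Section 4.3.
import random
import re

def citation_context(sentences, is_citation, n=2, min_sentences=15):
    """Keep the n sentences before/after every citation sentence; otherwise fall back to
    random sampling until at least `min_sentences` sentences are selected."""
    keep = set()
    for i, flag in enumerate(is_citation):
        if flag:
            keep.update(range(max(0, i - n), min(len(sentences), i + n + 1)))
    if not keep:                      # document cites none of the target provisions
        keep = set(random.sample(range(len(sentences)), min(min_sentences, len(sentences))))
    return [sentences[i] for i in sorted(keep)]

# toy usage with a naive sentence splitter and a naive citation test
doc = "Intro. The court held X. See 42 U.S.C. § 1983. Therefore Y. Unrelated. More text."
sents = re.split(r"(?<=[.!?])\s+", doc)
flags = ["U.S.C." in s for s in sents]
print(citation_context(sents, flags, n=2))
```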
Table 3: Baseline (vanilla fine-tuning) results (Macro-/Micro-F1) with different amounts of surrounding context and numbers of target labels.

| Context | 20 labels Macro-F1 | 20 labels Micro-F1 | 100 labels Macro-F1 | 100 labels Micro-F1 | 45 labels Macro-F1 | 45 labels Micro-F1 |
|---|---|---|---|---|---|---|
| N/A | 55.1 | 60.7 | 32.9 | 44.8 | 43.4 | 50.9 |
| ±4 | 66.7 | 68.9 | 50.3 | 58.8 | 68.7 | 73.7 |
| ±2 | 68.9 | 71.1 | 55.7 | 62.0 | 69.5 | 74.9 |

## 5 Results And Analyses

**Baseline** Results of our baseline experiments with vanilla fine-tuning are shown in Table 3. The purpose of these experiments is to determine the optimal number of surrounding sentences to provide as input context during training, and also to examine the effects of target labels on performance. Removing rarer classes helps alleviate the challenge of the long-tail imbalance in the dataset, as discovered in previous works (Ma et al., 2020). However, one of the challenges of law is the vast citation network; therefore, it is important to investigate model performance over as many citations as possible. From the preliminary experiments in Table 2, we observe that all models demonstrate slightly higher prediction accuracy with 20 labels compared to 5, but the performance decreases significantly with 100 labels. To this end, we continue further experiments with **20 labels** and **100 labels**. We add a **45-label** setting with only non-procedural citations based on expert feedback in Section 4.2. Under the 20-label setting, we observe that inputs without context filtering, naively taking the beginning of a document, result in the worst performance. Providing context of 4 sentences preceding and following the required citation, denoted by ±4, leads to better results. Finally, by including ±2 sentences for context, we obtain the best performance. As expected, we observe lower F1 scores for 100 labels due to the increase in rare classes. For subsequent experiments, we maintain 20 labels as the baseline for the following reasons: 1) We observe strong empirical evidence for this setting, and 2) The number of labels corresponds to a reasonable number of outcomes which can correspond to prior filtering by legal professionals, and our system would help with disambiguation. The performance of 45 labels without context follows a similar trend, and performance falls between 20 labels and 100 labels. However, once we provide surrounding context to the model, we see a significant increase in performance, with ±4 and ±2 both improving over the baseline by 23-25 points in both Macro- and Micro-F1. Compared to the other settings that exhibit 15-22 points in F1 improvement, it is clear that context is more important with legally relevant citations, or procedural citations are more likely to be mentioned in the first 512 tokens.
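All scores in these tables are Macro- and Micro-F1 over the multi-label targets. For reference, they can be computed as in the following sketch; the use of scikit-learn and the 0.5 decision threshold are assumptions, not details stated in the paper.

```python
# Sketch: Macro-/Micro-F1 for multi-label citation prediction (library and threshold are assumptions).
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([[1, 0, 1, 0],      # each row: gold citation vector over the target labels
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
y_prob = np.array([[0.9, 0.2, 0.6, 0.1],
                   [0.3, 0.8, 0.4, 0.2],
                   [0.7, 0.4, 0.1, 0.9]])
y_pred = (y_prob >= 0.5).astype(int)  # threshold the sigmoid outputs of the classification head

macro = f1_score(y_true, y_pred, average="macro", zero_division=0)  # unweighted mean over labels
micro = f1_score(y_true, y_pred, average="micro", zero_division=0)  # pooled over all label decisions
print(f"Macro-F1: {macro:.3f}  Micro-F1: {micro:.3f}")
```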
**Prototype-based Model** Next, we test the prototype-based model based on the best performing baseline configuration of ±2 context, summarized in Table 4, reporting results on 20 and 45 labels to compare performance with and without consideration of legal significance. We perform a simple model ablation: first, we train from the base model with only the $\mathcal{D}_{preced}$ loss term. Next, we add the $\mathcal{D}_{provis}$ term that incorporates provision source text. Over these experiments, the prototype-based loss results in comparable performance to vanilla fine-tuning while only tuning on the $\mathcal{D}_{preced}$ term. However, the 20-label task (75% procedural citations) sees dramatic improvements from the $\mathcal{D}_{provis}$ term, outperforming vanilla fine-tuning by 4 points in Macro-F1. Conversely, the $\mathcal{D}_{provis}$ term with the same hyperparameters seems to hinder the performance of the 45-label task, decreasing by 4 points in Macro-F1.

Table 4: Results (Macro-/Micro-F1) of the prototype-based model and the perturbation experiments, with differences shown in parentheses.

| Experiment Setting | 20 labels Macro-F1 | 20 labels Micro-F1 | 45 labels Macro-F1 | 45 labels Micro-F1 |
|---|---|---|---|---|
| Preced | 69.0 (+0.1) | 72.9 (+1.8) | 69.3 (-0.2) | 74.4 (-0.5) |
| Preced + Provis | 73.2 (+4.3) | 73.4 (+2.3) | 65.9 (-3.6) | 71.4 (-3.5) |
| Keyword Masking | 42.5 (-30.7) | 52.4 (-21.0) | 57.7 (-8.2) | 63.6 (-7.8) |
| Random Masking | 73.7 (+0.5) | 74.8 (+1.4) | 68.7 (+2.8) | 72.2 (+0.8) |
| Freezing Encoder | 66.9 (-6.3) | 69.2 (-4.2) | 64.0 (-1.9) | 72.4 (-1.0) |

**Perturbations** To further validate the feature importance of the provisions, we perform several perturbations from our Preced + Provis model. We attempt two settings: 1) **Keyword Masking**, i.e. replacing the input keywords with the [MASK] token, and 2) **Random Masking**, where we randomly mask 15% of the input tokens. For keyword masking, we use a statistical keyword extractor, YAKE (Campos et al., 2020), to extract 20 n-grams, up to n = 2, from the provision text. All results are summarized in Table 4. **Keyword Masking** reduces the model performance significantly compared to random masking for both the 20-label and 45-label setting, but the effects are stronger with 20 labels. This implies that citations with less relevance to the legal argument have more similarity to their source provision, and our distance-based classification sees increased performance. It also explains why legal experts found little value in predicting these citations; if it is easy to predict a citation from keywords, the task would be trivial to a lawyer. Conversely, citations with more significance to a legal argument seem to have more abstract relationships to the surrounding context and to other citing documents, and constraining them to the provision text has an adverse effect on the performance. This also explains why random masking offers better performance, as this step reduces spurious noise (Wang et al., 2022).

**Vanilla Fine-tuning vs. Prototypes** Additionally, we compare vanilla fine-tuning to prototype-based training by freezing the latent space, as denoted by the Freezing Encoder experiment in Table 4. We perform prototype discovery, encode the provision text, then train our classification head c without updating the prototype embeddings or language model parameters. It is interesting to note that the performance decrease ranges from 1.0 to 6.3 points. In other words, holding everything else constant, re-organizing the latent space with our prototype-based loss results in a 1.0-6.3 point increase in F1 score. We visually inspect the learned information by projecting the latent space of the 20-label models. We reduce the dimensionality of the embeddings with UMAP (McInnes et al., 2018) to produce 2D projections shown in Figure 4. Comparing the baseline to our prototype-based loss, the latter is visually better organized; there are fewer outliers at the edges of the latent space and clear clusters of cases corresponding to different citations.
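The perturbation and visualisation analyses above can be reproduced with standard tooling. The sketch below is illustrative only: it assumes a Hugging Face tokenizer's `[MASK]` token and a NumPy array of case embeddings from the earlier sketches, and any YAKE or UMAP parameters beyond those stated in the paper (20 n-grams with n ≤ 2, 2D projections) are guesses.

```python
# Sketch of the two analyses: keyword/random masking perturbations and UMAP latent-space projection.
import random
import yake                      # pip install yake
import umap                      # pip install umap-learn

def provision_keywords(provision_text, top_k=20):
    """Extract up to 20 n-grams (n <= 2) from the provision source text, as described in the paper."""
    extractor = yake.KeywordExtractor(n=2, top=top_k)
    return [kw for kw, _score in extractor.extract_keywords(provision_text)]

def keyword_mask(text, keywords, mask_token="[MASK]"):
    """Keyword Masking: crude string replacement; a real setup would mask at the token level."""
    for kw in keywords:
        text = text.replace(kw, mask_token)
    return text

def random_mask(tokens, mask_token="[MASK]", p=0.15):
    """Random Masking: mask 15% of the input tokens."""
    return [mask_token if random.random() < p else t for t in tokens]

def project_2d(embeddings):
    """2D UMAP projection of a NumPy array of case embeddings, for visual inspection (cf. Figure 4)."""
    return umap.UMAP(n_components=2).fit_transform(embeddings)
```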
After further investigation into the architecture, we observed there was *only one prototype* discovered for each target citation after vanilla fine-tuning. The remaining cluster centroids did not meet the cosine similarity threshold. This likely increases redundancies in the activations, which makes it easier to find patterns in embedding distances. However, the prototypes are arbitrarily located in the latent space, so the model might be learning spurious prototype activation patterns. ## 6 Conclusion We study Legal Citation Prediction (LCP), a problem that serves as a foundational function for decision making in modern legal systems with societal impact. In our work, we built an inherently interpretable prototype architecture that is compatible with any language model encoder, and explains its decisions based on similarity to precedents and provision text of the target citation. We automatically discover prototypes from the training set for each target citation, which reduces the need for expert input, and make predictions based on an input's similarity to these prototypes. Through empirical study, we show strong evidence of our architecture towards LCP, offering more interpretability than vanilla fine-tuning for equivalent performance. We also demonstrate that leveraging a combination of precedents and the target provisions' texts in the during model training results in comparable or greater performance to the baseline language model, although the effectiveness of our full architecture depends on the legal nature of the target citations. Compared to vanilla fine-tuning, our model's latent space visibly separates different citation embeddings into distinct groups. It is possible to apply our distance-based classification head to an encoder trained with vanilla fine-tuning, but the classification is likely based on spurious features. In practice, this system could be extended to any citation target that has prior examples and source text, such as prior cases (i.e. caselaw). Interpretability is a crucial requirement for deploying transparent AI systems, and we encourage more work in this direction for applications with significant social impact. ## Limitations We observe two main legal limitations for this project. First, it has limited practical use for non-lawyers seeking legal help, also called selfrepresented litigants. In fact, any system providing legal citations, both precedent or statutory provision, to an untrained lawyer will be of very little use, and even harmful. It is hard to imagine in what context this might be used by non-lawyers considering that they might not be able to translate facts into a legal problem. That being said, many direct-to-public legal applications have emerged recently, and many of these applications do provide insightful legal information along with the legal sources (Morrison, 2019; Dahan and Liang, 2020). While these applications have raised concerns as to their legality, notably with the issue of unauthorized practice of law, many regulators including in Canada, the United States and Europe have cautiously supported the development of AI-power technology for the general public. Second, several lawyers (especially appellate) have surprisingly expressed concerns regarding "the Googlization of legal databases" (Vaidhyanathan, 2011). While they recognize the advantages of intuitive AI non-Boolean research, they claim that these algorithms are not superior when it comes to locating a more obscure appellate case law, to help win a case. 
It has even been argued that Boolean logic remains faster and more efficient because it does not lead to missed case. According to this view, while the "Googlized" legal database may quickly locate important caselaw especially if decided by a higher court, it can miss less obvious cases (Mart et al., 2019). In our work, this challenge translates to the long-tail problem for legal citations, and our use of embedding distance encourages matching based on semantic similarity. In other words, we only look for the most obvious citations, which correlates to higher performance on easier (procedural) citations and lower performance on harder (non-procedural) citations. Our work does not sufficiently address this problem, so we encourage more proficient information retrieval or prototype discovery methods in the future. From a deep learning perspective, the main limitation is due to the use of k-means clustering in our implementation of the system. There were several points of instability noted during the training process, which we theorize is largely due to the initializations of the k-means clustering algorithm. When the prototypes are initialized, the corresponding terms in the loss function have a strong influence on the cross-entropy loss, which leads to model collapse. Even when the prototypes are initialized properly, the loss function overfits to the prototypes after several updates but does not provide improvement in the classification performance, which is why we choose the best model by validation macro F1 instead of validation loss. ## Ethics Statement Intended Use We see at least two applications for legal practice. First, this system could serve as predictive text drafting application for legal memo and judicial opinions. This application would recommend a list of citations - both precedents, statutes and even secondary literature - that is the most relevant to the legal problems and concept discussed in the memo or opinion. Second, such application may also be integrated into legal databases, such as Westlaw or LexisNexis. While these databases have been working on new algorithms based on non-Boolean keyword searches with more intuitive AI features, more work is needed when it comes to finding the most relevant citation. Failure Mode Although the task of citation prediction is high-stakes and key in a lawyer's decision making process, risks associated with system failure are mitigated due to the system's enhanced interpretability. Since this system is intended for lawyers, and also classifies based on similarity to previous works, a user would be able to leverage their expertise to validate the decision. If the chosen provision text does not match the legal argument the user had in mind, they can easily examine the similarity to other available citations, or discard the system's decision entirely. Misuse Potential As mentioned in the paper, there is a high potential for people to confuse legal citations with legal judgment, and people can leverage the discovered citations to directly decide a ruling. This system could be misused in that way similar to previous models used in the industry, such as COMPAS (Kirkpatrick, 2017). ## Acknowledgements The research is in part supported by the NSERC Discovery Grants, New Frontiers in Research Fund, and the Research Opportunity Seed Fund (ROSF) of Ingenuity Labs Research Institute at Queen's University. 
We also acknowledge feedback from David Liang and Solinne Jung from the Conflict Analytics Lab, as well as the ACL review committee, that helped us improve the work. ## References Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, et al. 2020. Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. *Information Fusion*, 58:82–115. Paul Axel-Lute. 1982. Legal citation form: Theory and practice. *Law Libr. J.*, 75:148. Vaishak Belle and Ioannis Papantonis. 2021. Principles and practice of explainable machine learning. Frontiers in Big Data, 4. Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150. Rohan Bhambhoria, Hui Liu, Samuel Dahan, and Xiaodan Zhu. 2022. Interpretable low-resource legal decision making. *arXiv preprint arXiv:2201.01164*. Ricardo Campos, Vítor Mangaravite, Arian Pasquali, Alípio Jorge, Célia Nunes, and Adam Jatowt. 2020. Yake! keyword extraction from single documents using multiple local features. *Information Sciences*, 509:257–289. Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. LEGAL-BERT: The muppets straight out of law school. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 2898– 2904, Online. Association for Computational Linguistics. Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan K Su. 2019. This looks like that: deep learning for interpretable image recognition. Advances in neural information processing systems, 32. Ian Covert, Scott Lundberg, and Su-In Lee. 2020. Feature removal is a unifying principle for model explanation methods. *arXiv preprint arXiv:2011.03623*. Faraz Dadgostari, Mauricio Guim, Peter A Beling, Michael A Livermore, and Daniel N Rockmore. 2021. Modeling law search as prediction. *Artificial Intelligence and Law*, 29(1):3–34. Samuel Dahan and David Liang. 2020. The case for ai-powered legal aid. *Queen's LJ*, 46:415. Łukasz Górski and Shashishekar Ramakrishna. 2021. Explainable Artificial Intelligence, Lawyer's Perspective, page 60–68. Association for Computing Machinery, New York, NY, USA. Jie Hou, Hanxiao Pan, Teng Guo, Ivan Lee, Xiangjie Kong, and Feng Xia. 2019. Prediction methods and applications in the science of science: A survey. Computer Science Review, 34:100197. Zihan Huang, Charles Low, Mengqiu Teng, Hongyi Zhang, Daniel E Ho, Mark S Krass, and Matthias Grabmair. 2021. Context-aware legal citation recommendation using deep learning. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law, pages 79–88. Weina Jin, Xiaoxiao Li, and Ghassan Hamarneh. 2022. Evaluating explainable ai on a multi-modal medical imaging task: Can existing algorithms fulfill clinical requirements? In *Association for the Advancement* of Artificial Intelligence Conference (AAAI), volume 000, pages 000–000. Dong Ha Kim, Bo Kyeong Lee, and So Young Sohn. 2016. Quantifying technology–industry spillover effects based on patent citation network analysis of unmanned aerial vehicle (uav). *Technological Forecasting and Social Change*, 105:140–157. Keith Kirkpatrick. 2017. It's not the algorithm, it's the data. *Communications of the ACM*, 60:21 - 23. Loet Leydesdorff, Lutz Bornmann, and Caroline S Wagner. 2019. 
The relative influences of government funding and international collaboration on citation impact. Journal of the Association for Information Science and Technology, 70(2):198–201. Hanwen Liu, Huaizhen Kou, Chao Yan, and Lianyong Qi. 2019a. Link prediction in paper citation network to construct paper correlation graph. *EURASIP Journal on Wireless Communications and Networking*, 2019(1):1–12. Xiao Liu, Da Yin, Yansong Feng, Yuting Wu, and Dongyan Zhao. 2021. Everything has a cause: Leveraging causal inference in legal text analysis. *CoRR*, abs/2104.09420. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Chu Fei Luo, Rohan Bhambhoria, Samuel Dahan, and Xiaodan Zhu. 2022. Evaluating Explanation Correctness in Legal Decision Making. Proceedings of the Canadian Conference on Artificial Intelligence. Https://caiac.pubpub.org/pub/67i6fcki. Yuhao Ma, Meina Kan, Shiguang Shan, and Xilin Chen. 2020. Learning deep face representation with longtail data: An aggregate-and-disperse approach. *Pattern Recognition Letters*, 133:48–54. Susan Nevelow Mart, Breda Joe, Ed Walters, Tito Sierra, and Khalid Al-Kofahi. 2019. Inside the black box of search algorithms. *AALL Spectrum*. Peter W. Martin. 2020. § 1-000. basic legal citation: What and why? Leland McInnes, John Healy, and James Melville. 2018. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426. Will Morrison. 2019. Technology task force: Update report. Shounak Paul, Pawan Goyal, and Saptarshi Ghosh. 2022. LeSICiN: A Heterogeneous Graph-based Approach for Automatic Legal Statute Identification from Indian Legal Documents. In Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI). Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "why should i trust you?" explaining the predictions of any classifier. In *Proceedings of* the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135– 1144. Cynthia Rudin. 2019. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. *Nature Machine* Intelligence, 1(5):206–215. Ali Sadeghian, Laksshman Sundaram, Daisy Zhe Wang, William F Hamilton, Karl Branting, and Craig Pfeifer. 2018. Automatic semantic edge labeling over legal citation graphs. *Artificial Intelligence and Law*, 26(2):127–144. Jaromir Savelka and Kevin D Ashley. 2021. Discovering explanatory sentences in legal case decisions using pre-trained language models. arXiv preprint arXiv:2112.07165. Naoki Shibata, Yuya Kajikawa, and Ichiro Sakata. 2012. Link prediction in citation networks. *Journal of the* American society for information science and technology, 63(1):78–85. S. Vaidhyanathan. 2011. *The Googlization of Everything: (And Why We Should Worry)*. University of California Press. Pengfei Wang, Yu Fan, Shuzi Niu, Ze Yang, Yongfeng Zhang, and Jiafeng Guo. 2019. Hierarchical matching network for crime classification. In *proceedings* of the 42nd international ACM SIGIR conference on research and development in information retrieval, pages 325–334. Tianlu Wang, Rohit Sridhar, Diyi Yang, and Xuezhi Wang. 2022. Identifying and mitigating spurious correlations for improving robustness in NLP models. 
In *Findings of the Association for Computational* Linguistics: NAACL 2022, pages 1719–1729, Seattle, United States. Association for Computational Linguistics. Nuo Xu, Pinghui Wang, Long Chen, Li Pan, Xiaoyan Wang, and Junzhou Zhao. 2020. Distinguish confusing law articles for legal judgment prediction. *arXiv* preprint arXiv:2004.02557. Wenmian Yang, Weijia Jia, Xiaojie Zhou, and Yutao Luo. 2019. Legal judgment prediction via multiperspective bi-feedback network. *arXiv preprint* arXiv:1905.03969. Xiao Yu, Quanquan Gu, Mianwei Zhou, and Jiawei Han. 2012. Citation prediction in heterogeneous bibliographic networks. *Proceedings of the 12th SIAM International Conference on Data Mining, SDM 2012*, pages 1119–1130. Zaixi Zhang, Qi Liu, Hao Wang, Chengqiang Lu, and Cheekong Lee. 2021. Protgnn: Towards selfexplaining graph neural networks. arXiv preprint arXiv:2112.00911. Haoxi Zhong, Yuzhong Wang, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020a. Iteratively questioning and answering for interpretable legal judgment prediction. *Proceedings of the AAAI* Conference on Artificial Intelligence, 34(01):1250– 1257. Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020b. How does nlp benefit legal system: A summary of legal artificial intelligence. In *Proceedings of ACL*. ## A Additional Implementation Details A.1 Training Parameters All experiments were run on 11GB Nvidia 2080 GPUs. We used an initial learning rate of 2e-5, weight decay of 0.01, batch size of 8, and trained the system for 20 epochs. The training pipeline and models were implemented using Huggingface and Pytorch python libraries, and all pre-trained language model checkpoints were also downloaded from Huggingface's online repository. The total runtime is approximately 30 hours for 20 epochs using the BERT-base variants, but we observe convergence in the loss by 10 epochs. Additionally, we select the best model by validation macro-F1 score with our prototype model instead of validation loss. All experiments are reported as a single run. Clustering for prototype discovery was implemented with the PyKeops (Python Kernel Operations) library 3. We used k-means clustering on the embedding space with k=5, clustered on cosine distance, and we re-cluster the prototypes every 5 epochs. We tested k=3 and k=5 for clustering, and chose k=5 as the best performing. This 3https://www.kernel-operations.io/ U.S. Code # citations # documents | 42 § 1983 | 64524 | 31246 | |--------------|---------|---------| | 11 § 523 (a) | 22377 | 7676 | | 28 § 1331 | 18419 | 13967 | | 28 § 157 (b) | 16517 | 12319 | | 42 § 1981 | 13190 | 6178 | Table 5: Statistics on frequency of citations for the top 5 most frequently cited U.S. codes. U.S. Code citations are formatted [Title] §[Section] [(optional) Subsection]. Each citation can appear in a document multiple times, but even counting documents alone, there is a significant imbalance between the different labels. Table 6: RoBERTa token count statistics with varying context spans. | Context | Min. | Max. | Mean | |-----------|--------|--------|---------| | N/A | 5 | 120593 | 1973.83 | | ±4 | 3 | 39848 | 825.96 | | ±2 | 3 | 21005 | 466.06 | result aligns with the findings of previous work (Zhang et al., 2021). We tested other configurations, such as euclidean distance instead of cosine, but these settings gave the best performance. For the D*P reced* weights, we use the same λ values as described in (Zhang et al., 2021), where λ1 = 0.10, λ2 = 0.0005, and λ3 = 0.001. 
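For convenience, the settings reported in this appendix (together with the δ and s_min values given in the next sentence) can be collected into a single configuration object. The following is only an illustrative summary of the stated values, not released code.

```python
# Convenience summary of the training settings reported in Appendix A.1 (not the authors' code).
TRAIN_CONFIG = {
    "learning_rate": 2e-5,
    "weight_decay": 0.01,
    "batch_size": 8,
    "epochs": 20,                     # loss reportedly converges by ~10 epochs
    "model_selection": "best validation Macro-F1",
    "kmeans_k": 5,                    # clustered on cosine distance (PyKeops)
    "recluster_every_n_epochs": 5,
    "lambda1": 0.10,                  # cluster (pull) term of D_preced
    "lambda2": 0.0005,                # separation (push) term of D_preced
    "lambda3": 0.001,                 # prototype-diversity term of D_preced
    "delta": 0.10,                    # weight of D_provis (stated just below)
    "s_min": -1.0,                    # minimum cosine similarity for snapping prototypes
}
```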
Then, we chose δ = 0.10 for D*P rovis*, and a minimum cosine similarity smin of -1. ## A.2 Preliminary Language Modelling Results We experimented with three models: - **RoBERTa-base** (Liu et al., 2019b) (110M parameters) - An optimized version of BERT, which is an encoder-only pre-trained language model. - **LegalBERT** (Chalkidis et al., 2020) (110M parameters) - A variant of BERT domain adapted to law by pre-training on court cases. - **Longformer** (Beltagy et al., 2020) (149M parameters) - An optimized transformer architecture with sparse attention for a longer context window (4096 tokens vs. 512 in BERT). ## A.3 Preprocessing During the training process, we use regex expressions to remove all HTML web formatting, links, U.S. Code citations, and Supreme Court citations. Non-ASCII characters are also removed, but everything else is preserved as they did not cause a noticeable decrease in performance. Additionally, when predicting fewer than 100 citations in later experiments, we again remove any documents that do not contain the target citations. For our train set of 141237 documents, this step results in 50567 documents for experiments on the top 5 citations, 90243 for top 20 citations. We also retrieve surrounding text spans in different configurations as described in Section 4.3. ## B Additional Dataset Analysis The long-tail nature of the dataset stays consistent across the different target label settings we reported, as shown in Figure 5. ## C Expert Feedback We recruited three volunteer legal experts through our institution for our brief feedback study. They have approximately 20 years of combined experience and were able to converse freely to share opinions. While we did not provide compensation for this specific study, they are active collaborators and receive salary or course credits for this interview. ![13_image_0.png](13_image_0.png) | Citation | 42 U.S. Code § 1988 | |----------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Provision | Proceedings in vindication of civil rights (a)Applicability of statutory and common law... (b)Attorney's fees... the court, in its discretion, may allow the prevailing party, other than the United States, a reasonable attorney's fee as part of the costs, except that in any action brought against a judicial officer for an act or omission taken in such officer's judicial capacity such officer shall not be held liable for any costs, including attorney's fees, unless such action was clearly in excess of such officer's jurisdiction. (c)Expert fees... | | Precedent | <s>MEMORANDUM-DECISION AND ORDER KAHN, District Judge. Presently pending is a motion by Plaintiff Association of International Automobile Manufacturers, Inc. ("AIAM") for attorneys fees pursuant to Fed.R.Civ.P. 54(d) (A) and <mask>. 
Plaintiff asserts that it is the prevailing party in an action brought under and is thus presumptively entitled to such fees. Ass'n v. Cahill ("AAMA II"), <mask>, reprinted at 1997 U.S.C.C.A.N. 1077, 1388. The remaining two prongs are also clearly established. First, the right created by 209(a)'s prohibition is demonstrably not so vague and amorphous that its enforcement will strain judicial competence... | | Expert 1 | 2: It seems somewhat relevant but hard to say as I have no expertise in this area. Also, I wonder whether the model is a Legal-Bert and saw the whole case. | | Expert 2 | 3: The relevant part is the part about the attorney fees | | Expert 3 | 1: The provision deals with compensating a party for attorney fees in order to enforce an action. The sample, however, is dealing with a party/state's right to enforce emission standards. | | Table 7: An example of a legal citation, the contents of the provision, a court opinion discovered by clustering the | | Table 7: An example of a legal citation, the contents of the provision, a court opinion discovered by clustering the latent space, and comments from the 3 legal experts. 2 of the 3 legal experts mention the few sentences regarding attorney fees as relevant, but the third brought up the issue of these citations being ultimately separate to the case's contents. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations section ✓ A2. Did you discuss any potential risks of your work? Ethical Statement section ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✓ A4. Have you used AI writing assistants when working on this paper? I used GPT4 to proofread the camera ready version of the paper. I inputted the paper in chunks to provide feedback on grammar/style issues, then I manually proofread the paper again. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4.2 ✓ B1. Did you cite the creators of artifacts you used? 4.2 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 4.2 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 4.2 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 4.2 ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 4.2 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4.2 ## C ✓ **Did You Run Computational Experiments?** 5 ✓ C1. 
Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix A.1, A.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix A.1 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix A.1 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** 4.1 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? 4.1 ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Appendix C ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Appendix C ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Work was done by members of the research lab as part of their role. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Appendix C
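To make the text clean-up described in Appendix A.3 concrete, the following is a minimal sketch of that preprocessing pass. The paper does not publish its exact regular expressions, so the patterns and the function name below are illustrative stand-ins rather than the authors' implementation.

```python
import re

# Illustrative patterns only; the paper does not list its exact expressions
# for HTML formatting, links, U.S. Code citations, or Supreme Court citations.
HTML_TAG    = re.compile(r"<[^>]+>")
URL         = re.compile(r"https?://\S+")
USC_CITE    = re.compile(r"\d+\s+U\.?\s*S\.?\s*C(?:ode)?\.?\s*§+\s*[\w.\-()]+")
SCOTUS_CITE = re.compile(r"\d+\s+U\.?\s*S\.?\s+\d+(?:\s*\(\d{4}\))?")
NON_ASCII   = re.compile(r"[^\x00-\x7F]+")

def clean_opinion(text: str) -> str:
    """Remove web formatting, links, target citations, and non-ASCII noise."""
    for pattern in (HTML_TAG, URL, USC_CITE, SCOTUS_CITE, NON_ASCII):
        text = pattern.sub(" ", text)
    # Collapse the whitespace left behind by the removals.
    return re.sub(r"\s+", " ", text).strip()
```

As noted in Appendix A.3, documents that no longer contain any of the target citations after this pass are then dropped for the top-5 and top-20 experiments.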
wicke-2023-lms
{LM}s stand their Ground: Investigating the Effect of Embodiment in Figurative Language Interpretation by Language Models
https://aclanthology.org/2023.findings-acl.302
Figurative language is a challenge for language models since its interpretation is based on the use of words in a way that deviates from their conventional order and meaning. Yet, humans can easily understand and interpret metaphors, similes or idioms as they can be derived from embodied metaphors. Language is a proxy for embodiment and if a metaphor is conventional and lexicalised, it becomes easier for a system without a body to make sense of embodied concepts. Yet, the intricate relation between embodiment and features such as concreteness or age of acquisition has not been studied in the context of figurative language interpretation concerning language models. Hence, the presented study shows how larger language models perform better at interpreting metaphoric sentences when the action of the metaphorical sentence is more embodied. The analysis rules out multicollinearity with other features (e.g. word length or concreteness) and provides initial evidence that larger language models conceptualise embodied concepts to a degree that facilitates figurative language understanding.
# Lms Stand Their Ground: Investigating The Effect Of Embodiment In Figurative Language Interpretation By Language Models Philipp Wicke Ludwig-Maximilians-University (LMU) Institute for Information and Language Processing (CIS) Munich Center for Machine Learning (MCML) [email protected] ## Abstract Figurative language is a challenge for language models since its interpretation is based on the use of words in a way that deviates from their conventional order and meaning. Yet, humans can easily understand and interpret metaphors, similes or idioms as they can be derived from embodied metaphors. Language is a proxy for embodiment and if a metaphor is conventional and lexicalised, it becomes easier for a system without a body to make sense of embodied concepts. Yet, the intricate relation between embodiment and features such as concreteness or age of acquisition has not been studied in the context of figurative language interpretation concerning language models. Hence, the presented study shows how larger language models perform better at interpreting metaphoric sentences when the action of the metaphorical sentence is more embodied. The analysis rules out multicollinearity with other features (e.g. word length or concreteness) and provides initial evidence that larger language models conceptualise embodied concepts to a degree that facilitates figurative language understanding. ## 1 Introduction Infants acquire their first conceptual building blocks by observation and manipulation in the physical world. These primary building blocks enable them to make sense of their perceptions (Mandler and Cánovas, 2014). In return, their embodiment defines the capabilities with which they can explore and understand the world. The early conceptual system is built from spatial schemas, which enables early word understanding (Mandler, 1992). These so-called *Image Schemas* are recurring cognitive structures shaped by physical interaction with the environment. They emerge from bodily experience and motivate subsequent conceptual metaphor mappings (Johnson, 2013). The metaphorical mapping is visible in our everyday language whenever we use figurative language. For example, if we say that she dances *like a turtle*, that is to say, that she dances poorly. The metaphor in this phrase is readily interpreted by humans, who would favour the interpretation of *dances poorly* over *dances well*. The *turtle dance* example employs a conceptual mapping in which the *turtle* provides the source domain for the attributes *slow* and *rigid*, which turn the *dance* target domain into *poorly dancing*. This mapping draws from the human, bodily experience of dancing and therefore enables interpretation. For a language model (LM), the understanding of figurative language is a great challenge (Liu et al., 2022). By nature of their digital implementation as computer algorithms, LMs are nonembodied and do not ground their conceptualisation by physical interaction with the environment. Instead, LMs learn statistical features of language by deep learning vast amounts of data (Vaswani et al., 2017). Whether these learned statistical features allow LMs to mirror or copy natural language understanding (NLU) is subject to discussion (Zhang et al., 2022a). Moreover, Tamari et al. (2020) suggest an *embodied* language understanding paradigm for LMs can benefit NLU systems through grounding by metaphoric inference. One can argue that most embodied metaphors are heavily conventional (e.g. 
UP IS GOOD, DOWN IS BAD, KNOWLEDGE IS LIGHT, IGNORANCE IS DARKNESS) and as such, they are lexicalised in a language without an inherent need to understand their bodily basis. This lexicalisation should allow LMs to conceptualise and interpret them correctly and more robustly than less conventional metaphors. Conventionality relates to word frequency and age of acquisition (AoA), i.e. more frequent words and words that are acquired early in life are more conventional. We argue that embodiment has a measurable effect on the interpretation of metaphors that differs from the effect of other linguistic features. Moreover, we investigate whether an interpretation of figurative language with more 4899 embodied concepts is easier for LMs. Analogously, we investigate conflating factors such as the AoA, word frequency, concreteness and word length. The relation between embodiment and LMs' ability to interpret figurative language has not yet been investigated and is the key contribution of this research. The following Section (2) starts with a review of language model abilities, more specifically figurative language interpretation abilities. The review identifies a suitable data set for our experiment and describes its formation in Section 3. We use a subset of the *Fig-QA* data set (Liu et al., 2022), a Winograd-style figurative language understanding task, and correlate the performance of various LMs concerning the degree of embodiment of the metaphorical actions that the LMs are tasked to interpret. In Section 4, we identify that models, that can reach a certain performance on our *Fig-QA* subset, shows a significant and positive correlation between the rating of the embodiment of the action involved in the metaphorical phrase and the model's ability to interpret the metaphor correctly. An in-depth analysis of additional features, such as the AoA, word length or frequency, does not indicate multicollinearity among those features. In Section 5 we conclude that the degree of embodiment of the action within the metaphoric phrase is a predictor of the LMs' ability to correctly interpret the figurative language. Lastly, we discuss the limitations and broader implications of the work. ## 2 Related Works 2.1 Language Model Abilities The presented work investigates the zero-shot capabilities of LMs of different types and sizes. Arguably, LMs' capabilities to solve language-based tasks, which they have not been trained on, are an emerging property of their complexity and largescale statistical representation of language. It is a property that makes them unsupervised multitask learners (Radford et al., 2019; Brown et al., 2020). Despite task-agnostic pre-training and a task-agnostic architecture, LMs can perform various NLP tasks without seeing a single example of the task, albeit with mixed results (Srivastava et al., 2022). This raises the question of whether language models mirror the human conceptual understanding encoded in language or whether they "only" learn statistical features from the underlying training distribution, allowing them to generalise and convincingly solve previously unseen tasks. Several works have tried to assess to what extent LMs are capable to perform more complex NLP tasks (e.g. logical reasoning or metaphoric inference). For example, Zhang et al. (2022a) investigate the logical reasoning capabilities of BERT (Devlin et al., 2019). 
For this, the authors define a simplistic problem space for logical reasoning and show that BERT learns statistical features from its training distribution, but fails to generalise when presented with other distributions and drops in performance. According to the authors, this implies that BERT does not emulate a correct reasoning function in the same way that humans would conceptualise the problem. Similarly, Sanyal et al. (2022) evaluate whether the RoBERTa model (Liu et al., 2019) or the T5 model (Raffel et al., 2020) can perform logical reasoning by understanding implicit logical semantics. The authors test the models on various logical reasoning data sets whilst introducing minimal logical edits to their rule base. Consequently, Sanyal et al. (2022) show that LMs, even when fine-tuned on logical reasoning, do not sufficiently learn the semantics of some logical operators. Han et al. (2022) present a diverse data set for reasoning in natural language. An evaluation of the GPT-3 model (Brown et al., 2020) on their data set shows a performance that is only slightly better than random. This indicates that there is a fundamental gap between human reasoning and LM reasoning and their conceptualisation capabilities. Yet, language models have demonstrated emergent abilities (Wei et al., 2022), encompassing enhanced skills and capabilities that are absent in smaller language models. Such abilities cannot be accurately predicted by extrapolating the performance of smaller models. Consequently, investigating the influence of model size on different tasks becomes imperative in comprehending the potentials and constraints of smaller and large language models. The related works show that, although LMs seem to mirror an aspect of reasoning, e.g. logical reasoning, a closer look at the underlying conceptualisation of these abilities can reveal they are not robust and fail to mirror deeper semantics. Both logical reasoning and figurative language interpretation require an understanding of relationships between words and concepts and the ability to make inferences based on that understanding. This overlap in cognitive processes allows for the development of models that can perform both tasks effectively. ## 2.2 Figurative Language Interpretation Liu et al. (2022) are among the first to quantitatively assess the ability of LMs to interpret figurative language. Their *Fig-QA* data set is publicly available1and we discuss the construction of our subcorpus in more detail in Section 3.1. In short, the authors present crowdsourced creative metaphor phrases with two possible interpretations of various LMs and check for which interpretation the model returns the higher probability distribution. The main contribution of Liu et al. (2022) is the *Fig-QA* task, which consists of 10,256 examples of human-written creative metaphors that are paired in a Winograd schema. The authors also contribute an assessment of various LMs in zeroshot, few-shot and fine-tuned settings on *Fig-QA*. Moreover, their results indicate that overall, LMs fall short of human performance. On a phrase and word level, the authors find that longer phrases are harder to interpret and that metaphors relying on commonsense knowledge concerning objects' volume, height, mass, brightness or colour are easier to interpret. This indicates that bodily modalities seem to facilitate interpretation success. They also show that larger models (i.e. number of parameters) perform better on the task. 
All of these findings have been reproduced by our experiments. Chakrabarty et al. (2022) present FLUTE, a data set of 8,000 figurative NLI instances. Their data set includes the different figurative language categories of metaphor, simile, and sarcasm. In contrast to Fig-QA, the authors do not create metaphors in a Winograd scheme as a forced-choice task but create natural language explanations (NLE) using GPT-3 (Brown et al., 2020) and human validation. Their experiments with state-of-the-art NLE benchmark models show poor performance in comparison to human performance. The authors do not differentiate the metaphors, similes and sarcastic phrases concerning linguistic features. Moreover, they include a language model in the creation process, which, as far as our study is concerned, introduces a bias to the data set. Hence, we decide to use the Fig-QA data set instead of FLUTE. ## 2.3 Modelling Embodied Language It is generally understood that language is grounded in experience based on interaction with the world (Bender and Koller, 2020; Bisk et al., 2020). Hence, 1https://huggingface.co/datasets/ nightingal3/fig-qa there is an interest to leverage LMs' capabilities in interactions with the environment. For example, Suglia et al. (2021) present EmBERT, which attempts language-guided visual task completion. Their model uses a pre-trained BERT stack fused with an embedding for detecting objects from visual input. The model achieves competitive performance on ALFRED, a benchmark task for interpreting instructions (Shridhar et al., 2020). Huang et al. (2022) investigate if LMs know enough embodied knowledge about the world to ground high-level tasks in the procedural planning of instructions for household tasks. For example, the authors pass a prompt, e.g. "Step 1: Squeeze out a glob of lotion" to a pre-trained LM (e.g. GPT-3) and extract actionable knowledge from its response. Their results indicate that large language models (<10B parameters) can produce plausible action plans for embodied agents. Embodiment In this study, the term *embodiment* relates to cognitive sciences: Humans process a linguistic statement such as "*to grab an apple*" using embodied simulations in the brain. Perceptual experiences activate cortical regions that are dedicated to sensory actions and those regions partially reactivate premotor areas to implement, what Barsalou (1999) calls, *perceptual symbols*. Reading of actions words such as kick or *lick* is associated with premotor cortex activation responsible for controlling movements for these actions (Hauk et al., 2004). This effect is diminished by figurative language (Schuil et al., 2013). Therefore, a statement such as "*to grasp the idea*" does not necessarily rely on premotor cortex simulation. The semantic processing of the linguistic statement is therefore linked to its context and degree of embodiment in the sense that the action can be simulated by a brain in a body (Zwaan, 2014). This understanding of the term *embodiment* guides the evaluation of how language models, which do not have a brain in a body, can interpret figurative language phrases with a varying degree of embodied actions. ## 3 Statistical Evaluation The review of related works shows that there are abilities of LMs that go beyond mere language generation, e.g. logical reasoning, and action planning. It is unclear how LMs conceptualise actions that humans conceptualise using interaction with the environment. 
Figurative language acts as a test bed to assess metaphorical conceptualisations since they are grounded in embodied experience and interaction with the environment. We take the findings of Liu et al. (2022) as a starting point to focus on the effect of embodiment in figurative language interpretation by language models of various sizes. ## 3.1 Experimental Framework Embodiment Rating and Data Set To assess the effect of embodiment on the task, we discuss the effects of embodiment in semantic processing and introduce the simplification underlying our study through an example. *Fig-QA* provides the following item:

(A) The pants were as faded *as ...*

(A.1) *... the memory of pogs*

(A.2) *... the sun in June*

with the possible interpretations:

(A.I) *they were very faded*

(A.II) *they were bright*

The LM is prompted with each combination of sentence completion and interpretation (i.e. A.1+A.I, A.1+A.II, A.2+A.I, A.2+A.II). Notably, Liu et al. (2022) have shown that the addition of "*that is to say*" as a concatenation between the metaphorical phrase and the interpretation phrase elicits better model performance, hence we also include this prompt in our studies. Subsequently, the prediction scores of the language modelling head (scores for each vocabulary token) are retrieved and the highest probability becomes the LM's choice of interpretation (for more details, see Liu et al., 2022). We compare this example with a different *Fig-QA* item:

(B) *She dances like a ...*

(B.1) *... fairy*

(B.2) *... turtle*

with the possible interpretations:

(B.I) *she dances well*

(B.II) *she dances poorly*

Given our hypothesis that embodiment affects the LMs' ability to interpret these phrases, we score (A) and (B) with respect to embodiment. As a simplification, we limit the rating of embodiment to the actions within the phrase. Every phrase evaluated has at least one word with a score related to an action. Most of the time, these related actions are verbs. Thus, we rate *faded* for (A) and *dances* for (B) with respect to their relative embodiment. For this scoring, we consult data by Sidhu et al. (2014). In their empirical study, Sidhu et al. (2014) characterise a dimension of relative embodiment for verbs. In the construction of the data set, "participants were asked to judge the degree to which the meaning of each verb involved the human body, on a 1–7 scale" (Sidhu et al., 2014). Their resulting data set consists of ratings for 687 English verbs. Our hypothesis is that embodiment is a semantic component which affects the interpretation ability of LMs concerning figurative language. With their data set, the authors provide evidence that the meaning of a verb has a semantic component linked to the human body in the lexical processing of that verb. They assume that more robust semantic activation is generated by more embodied verbs (Sidhu et al., 2014). This provides us with a data set we can apply to our experiment on figurative language. Moreover, their experiment provides additional control variables such as the AoA and word length, which have a known effect on lexical processing (Colombo and Burani, 2002) and are included in our results (Sec. 4). At the time of conducting our experiment, *Fig-QA* only provided the *training* and *development* data, which we will refer to as train & dev.
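The scoring loop described above, which compares the model's probability for each metaphor-plus-interpretation pairing joined by "that is to say", can be sketched with the HuggingFace transformers API. This is an illustrative sketch, not necessarily identical to the authors' implementation; the prompt punctuation and the choice of summing (rather than averaging) token log-probabilities are our assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def sequence_logprob(model, tokenizer, text):
    """Sum of token log-probabilities of `text` under a causal LM."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    # `out.loss` is the mean negative log-likelihood over the predicted tokens,
    # so multiplying by their count recovers the total log-probability.
    num_predicted = enc["input_ids"].shape[1] - 1
    return -out.loss.item() * num_predicted

def choose_interpretation(model, tokenizer, metaphor, interpretations):
    """Return the index of the interpretation with the higher log-probability."""
    scores = [
        # The exact punctuation around the suffix is our choice; the paper
        # only specifies the phrase "that is to say" itself.
        sequence_logprob(model, tokenizer, f"{metaphor}, that is to say, {interp}")
        for interp in interpretations
    ]
    return max(range(len(scores)), key=scores.__getitem__)

# Example with GPT-2; any causal checkpoint from Table 2 could be substituted.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
choice = choose_interpretation(model, tokenizer,
                               "She dances like a turtle",
                               ["she dances well", "she dances poorly"])
```

Whether one sums or averages per-token log-probabilities can matter when the two candidate interpretations differ in length; the original evaluation keeps its metric consistent with Liu et al. (2022).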
Hence, we identify all phrases from the train & dev data set that contain at least one word with an embodiment rating from Sidhu et al. (2014). The process of creating the subcorpus with embodiment ratings (CEmb) begins by identifying verbs using *spaCy* (Honnibal et al., 2020). The lemmatized versions of the verbs for the metaphorical phrases are then matched with embodiment scores, resulting in a subcorpus (CEmb) with 1,438 entries. If more than one verb is present in the metaphorical sentence, the average is assigned. We note that, future work will assess whether a different heuristic for treating multiple actions influences our results. Analogously, we construct a subcorpus of the same size with metaphorical phrases that do not contain an embodied verb (C*N oE*). For both subcorpora, we only keep phrases in which the verb is contained in the Winograd pair. The resulting subcorpora statistics are listed in Table 1 and further examples from the subcorpus are presented in the Appendix in Section A. The previous examples (A) and (B) are thus augmented as follows: (A) The pants were as faded *as ...* Embodiment Rating: 2.36 (B) She dances *like a ...* Embodiment Score: 6.50 With the annotated *Fig-QA* subcorpus CEmb we now turn to the models we select to assess whether there is a correlation between embodiment score and LM task performance. Hypotheses The main hypothesis for the statistical evaluation can be summarized as follows: 1. There is a correlation between the LMs' interpretation capabilities of metaphors and the amount of embodiment of the verbs within those metaphorical phrases. Intuitively, more embodied actions such as kick, move or eat are much more concrete, shorter and basic, when compared to resonate, compartmentalise or *misrepresent*. Therefore, the analysis of embodied actions must take into account factors such as concreteness, AoA, word length and word frequency. Moreover, common metaphors are conventional and more lexicalised. Consequently, they might simply be more embodied and the effect of embodied verbs might stem from the fact that these verbs are more concrete in the context that they are presented. Hence, the first hypothesis should not stand alone, but will be evaluated along with two additional null hypotheses: 1.I There is no correlation between the LMs' interpretation capabilities of metaphors and the amount of concreteness of the verbs within those metaphorical phrases irrespective of their embodiment rating. In our evaluation, the concreteness of a word in its context will be scored using an open-source predictor2 based on distributional models and behavioural norms explained in (Rotaru, 2020). Details of the concreteness scoring with the predictor have been summarized in Sec. 3.2. Concreteness ratings are often subjective ratings (Brysbaert et al., 2014) or determined by other low-level features, such as AoA, word frequency and word length (Rotaru, 2020). To isolate the effect of embodiment, we add the second null hypothesis: 1.II There is no correlation between the LMs' interpretation capabilities of metaphors and other linguistic features, such as AoA, word frequency and word length. 2https://github.com/armandrotaru/ TeamAndi-CONcreTEXT For AoA we obtain scores for each of the actions from (Kuperman et al., 2012) and for word frequency from (Van Heuven et al., 2014). Together with word length and embodiment score we test for variance inflation to respond to 1.II. 
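Returning to the construction of CEmb described above, the verb-identification and rating step can be illustrated as follows. The snippet assumes the Sidhu et al. (2014) norms are available as a lemma-to-rating dictionary; the function and variable names are ours, and the spaCy pipeline is one possible choice of tagger.

```python
import spacy

# Requires `python -m spacy download en_core_web_sm` (or any English pipeline
# with a POS tagger and lemmatizer).
nlp = spacy.load("en_core_web_sm")

def embodiment_score(phrase, ratings):
    """Average the 1-7 embodiment norms over the rated verbs in `phrase`.

    Returns None when no verb in the phrase has a rating, i.e. the phrase
    would fall into C_NoE rather than C_Emb. If several rated verbs occur,
    their ratings are averaged, as described in the text.
    """
    lemmas = [tok.lemma_.lower() for tok in nlp(phrase) if tok.pos_ == "VERB"]
    scores = [ratings[lemma] for lemma in lemmas if lemma in ratings]
    return sum(scores) / len(scores) if scores else None

# e.g. embodiment_score("She dances like a turtle", ratings) -> 6.50,
# matching the paper's example (B).
# Note: participles such as "faded" may be tagged ADJ rather than VERB;
# the paper does not specify how such borderline cases were matched.
```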
Model Selection The selection of our models is based on three criteria: First, we want to reproduce the results by (Liu et al., 2022) having a comparable measure. Second, we want to check whether the effect generalises to other large LMs. Third, we want a variation of different model sizes to account for varying performance on the task as a result of model size. For the latter two criteria, we start with the smallest available models of each type and check intermediate model sizes. We do not consider it necessary to check whether or not scaled, largest versions of each model perform better on the task since this is a general property of LMs (Brown et al., 2020; Srivastava et al., 2022). In the original *Fig-QA* study, the authors examine three transformer-based LMs with different parameter sizes: GPT-2 (Radford et al., 2019), GPT-3 (Brown et al., 2020) and GPT-Neo (Black et al., 2021). To reproduce the results by Liu et al. (2022), we include GPT-2, GPT-3, GPT-Neo LMs and add OPT LMs (Zhang et al., 2022b). An overview of the models and their specifications is shown in Table 2. Notably, we want to correlate whether the type and number of parameters play a role when it comes to performance concerning the embodiment. Hence, we include pairs of models from each type that are small (<1 billion) and medium to large (>1 billion) in their number of parameters. ## 3.2 Methodology We apply the same methodology of evaluation as Liu et al. (2022). In our zero-shot setting, each pretrained LM is prompted with the metaphor sentences combined with one of the interpretation sentences, concatenated with *that is to say*. For *OpenAI* models, the API provides the log probabilities per token as logprob return value. We access all other models using huggingface.co and its transformer library. To create the same evaluation metric as for results by (Liu et al., 2022), we follow (Tunstall et al., 2022) and implement a function that returns the logprob based on the prediction scores of the language modelling head. All code and data is publicly available3. 3https://osf.io/puhxb/?view_only= 15933a2da0a14f07834ba1d479ce9c43 | Label | Source | Description | Number of entries | |---------|------------------|--------------------------------------------------------------|---------------------| | CLiu | Fig-QA test | All phrases from the Fig-QA test set | 1,146 | | CEmb | Fig-QA train/dev | Phrases that have at least one action with embodiment rating | 1,438 | | CN oE | Fig-QA train/dev | Phrases that do not have an action with embodiment rating | 1,438 | | Label | #Parameters in Millions | Provider | |-------------------|---------------------------|------------| | GPT-3 (small) | ∼350 | OpenAI | | GPT-3 (large) | ∼175,000 | OpenAI | | OPT (small) | 350 | Facebook | | OPT | | | | (medium) | 13,000 | Facebook | | GPT-Neo (small) | 125 | EleutherAI | | GPT-NeoX (medium) | 20,000 | EleutherAI | | GPT-2 (small) | 355 | OpenAI | | GPT-2 XL (medium) | 1,500 | OpenAI | Table 1: CLiu is 100% of the *Fig-QA* test set and 11% of the entire *Fig-QA* data set (Liu et al., 2022). Our selected subsets CEmb and CNoE are mutually exclusive and each composes 14% of the entire *Fig-QA* data set. Table 2: Different model types and parameter numbers have been selected for the evaluation. For each type, we have selected a pair of smaller and medium to large model version. Reproduction and Suffix Prompting In an initial experiment, we reproduce the same experiment by Liu et al. 
(2022), but instead of performing the zero-shot classification on the test set, we evaluate the performance on CEmb and C*N oE* with and without suffix prompting (*that is to say*). This allows us to compare our data set against the baseline. The result indicates that GPT-3 models perform slightly worse on our subcorpora, but all conditions benefit from the suffix prompting (see Appendix C). Since both CEmb and C*N oE* are more difficult for GPT-3, we can rule out that this effect stems solely from the embodiment component present in CEmb, which is not present in C*N oE*. Moreover, we adopt the suffix prompting for all further experiments. Concreteness Scoring To determine the concreteness of a verb in context, (Rotaru, 2020) built a predictor based on a combination of distributional models, together with behavioural norms. We adopt the same settings and model choice as presented by the author, but exclude the word frequency behavioural norm, as we investigate it as a separate feature. We evaluate our predictor on the same English test set of the *CONcreTEXT* task at EVALITA2020 (Gregori et al., 2020) and receive a mean Spearman correlation of 0.87, which is in line with (Rotaru, 2020). The context-dependent models used in the predictor include ALBERT (Lan et al., 2020), BERT and GPT-2. Statistical Tests For each model, we obtain its performance on the data set with a binary scoring of each figurative phrase as being correctly or incorrectly identified. We correlate this series of binary values with the continuous variable of embodiment ratings by calculating the point biserial correlation coefficient and the associated p-value. Moreover, we assess various other language features to isolate any effect of embodiment. As described in previous sections and based on the work by Liu et al. (2022); Sidhu et al. (2014); Colombo and Burani (2002), we test for the effects of word concreteness, AoA, word frequency and word length. This analysis includes an assessment of the amount of multicollinearity within the regression variables by determination of the variance inflation factor (VIF). Moreover, we conduct linear regressions for all models (and all sizes) with respect to their task performance and the features: embodiment score, AoA, word frequency and word length. For these linear regressions, we include those with and without the embodiment score feature in order to assess whether this feature contributes to a higher coefficient of determination (R2). ## 4 Results Embodiment Correlation The results of all models are listed in Table 3 and visualized in Figure 1. Overall, for each pair of small and larger models, the larger models always perform better on the interpretation task than the smaller version of the model. Moreover, all larger versions of the models show a significant correlation (p < 0.05) between the embodiment rating and task performance. In two instances, GPT-NeoX (20B) and GPT-2 (1.5B), ![6_image_0.png](6_image_0.png) the p value is < 0.01. In the case of GPT-3, both model variants show a significant correlation. In all correlations, the coefficient is positive, albeit small (<0.1), which indicates that embodiment has a positive effect on task performance. All smaller models (except for GPT-3 with 350M parameters) do not show a significant correlation between embodiment score and task performance. Concreteness Using the concreteness-in-context predictor, we provide a concreteness value for each verb in CEmb and correlate those predictions with all models' performance. 
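The statistical tests described above reduce to a point-biserial correlation between a model's binary per-item correctness and a continuous feature (embodiment, concreteness, AoA, frequency, or length), plus a variance-inflation check over the predictors. A minimal sketch, assuming SciPy and statsmodels, which the paper does not explicitly name as its tooling:

```python
import numpy as np
from scipy.stats import pointbiserialr
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def feature_correlation(correct, feature):
    """Point-biserial correlation between 0/1 correctness and a continuous feature."""
    r, p = pointbiserialr(np.asarray(correct), np.asarray(feature))
    return r, p

def vif_scores(features):
    """One VIF per predictor column (e.g. embodiment, AoA, frequency, length)."""
    X = sm.add_constant(np.asarray(features, dtype=float))
    # Index 0 is the added constant, so report VIFs from column 1 onwards.
    return [variance_inflation_factor(X, i) for i in range(1, X.shape[1])]
```

The same per-item correctness vector also serves as the dependent variable in the linear regressions reported in Appendix D.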
As a result, there is no significant correlation between the concreteness of the action word in its context and the performance of the LM on the interpretation (results in Appendix B). We do not reject hypothesis 1.I. Regression Analysis The linear regressions for all models and model sizes, both with and without the feature of embodiment score, revealed that the coefficient of determination (R2) was consistently higher for regressions that included the embodiment score feature. Furthermore, for cases without the embodiment feature, none of the other variables, such as Age of Acquisition (AoA), word frequency, or word length, showed a significant correlation with task performance. Related results and figures are available in Section D of the Appendix.

| Model | Accuracy on CEmb | p-Value | Correlation coefficient | Significance |
|-------------------|------------------|---------|-------------------------|--------------|
| GPT-3 (small) | 0.594 | 0.018 | 0.062 | * |
| GPT-3 (large) | 0.667 | 0.034 | 0.056 | * |
| OPT (small) | 0.561 | 0.206 | 0.033 | |
| OPT (medium) | 0.627 | 0.034 | 0.056 | * |
| GPT-Neo (small) | 0.535 | 0.399 | 0.022 | |
| GPT-NeoX (medium) | 0.648 | 0.005 | 0.073 | ** |
| GPT-2 (small) | 0.597 | 0.158 | 0.037 | |
| GPT-2 XL (medium) | 0.606 | 0.009 | 0.069 | ** |

Table 3: Accuracy on CEmb and point-biserial correlation between embodiment rating and task performance for each model (* p < .05, ** p < .01).

| AoA | Word Frequency | Embodiment Rating | Word Length |
|-------|----------------|-------------------|-------------|
| 1.345 | 1.326 | 1.017 | 86.387 |

Table 4: Variance inflation factors (VIF) of the predictor variables.

Variance Inflation Pairwise correlations between AoA, word frequency, embodiment score and word length are visualized in Figure 2. Intuitively, frequency and AoA are expected to be correlated with each other, because words that are acquired much later in life are often less frequently used words, as they tend to be more complex or specific words. The multicollinearity test through VIF is presented in Table 4. All factors are close to 1.0, which indicates that there is no multicollinearity among the predictor variables (if the VIF is between 5 and 10, multicollinearity is likely to be present) (James et al., 2013). Given that there is no multicollinearity between the embodiment score and the other linguistic features, we do not reject hypothesis 1.II. ## 4.1 Interpretation The results of the correlation analysis indicate that embodiment affects the LMs' ability to interpret figurative language when the LM achieves a certain level of performance, which depends on the size of the model. The correlation coefficient is positive in all significant cases, and those significant correlations occur in all larger (>1B parameter) model versions. Since task performance increases with model size, the effect of embodiment becomes more apparent through more successful interpretations in better-performing models. The fact that concreteness, AoA, word length and word frequency do not inflate this effect shows that the embodiment rating is not an arbitrary construct that implicitly models another linguistic feature. There are slight differences in the model types when it comes to the performance of the models. For example, GPT-3 shows a significant correlation for the effect of embodiment for both the small (350M) and the large (175B) model sizes. Yet, this effect does not occur in the small GPT-2 (355M) model, but in the large GPT-2 (1.5B) version. Notably, *OpenAI* does not explicitly list the Ada model with 350M parameters, but its performance ranks close to that of 350M versions on various tasks (Brown et al., 2020).
Hence, this difference has only limited relevance. Nonetheless, we assume that the effect is correlating with model size and that a reliable effect can be seen in larger models with a parameter number of over 1 billion. ## 5 Conclusion 5.1 Contribution To The Field We successfully reproduce results that are in line with (Liu et al., 2022). Moreover, we provide a subcorpus with ratings of an embodiment for the Fig-QA task. We identify the contribution of embodied verbs to LMs' ability to interpret figurative language. To the best of our knowledge, this study is the first to provide evidence that the psycholinguistic norm of the perceived embodiment has been investigated in an NLP task for LMs. ## 5.2 Discussion Benchmarks, such as BIG-bench (Srivastava et al., 2022), show that different types and sizes of LMs can be evaluated on many different tasks to identify potential shortcomings or limitations. This paper takes an entirely different approach by zeroing in on a particular task, which has been augmented with a specific semantic evaluation (embodiment ratings of actions), to highlight how difficult tasks, such as figurative language interpretation, benefit not only from model size but from specific embodied semantics. Figurative language is difficult for LMs because its interpretation is often not conveyed directly by the conventional meaning of its words. Human NLU is embodied and grounded by physical interaction with the environment (Di Paolo et al., 2018). Consequently, it could be expected that LMs struggle when the interpretation of figurative language depends on a more embodied action. Yet, the opposite has been shown as more embodied concepts are more lexicalised and larger LMs can interpret them better in figurative language. Hence, our study provides valuable insight that raises the question of whether this effect is limited to figurative language or translates to other NLU tasks for LMs. ## 5.3 Limitations And Future Works The current results are limited to one specific figurative language task (*Fig-QA)*. In future work, we aim to test whether our hypothesis holds for other figurative language interpretation tasks, such as those by (Chakrabarty et al., 2022; Stowe et al., 2022). Moreover, we want to assess BIG-bench (Srivastava et al., 2022) performances on various other tasks concerning embodiment scoring and see whether the bias can be detected in tasks other than figurative language interpretation. The statistical evaluation has attempted to measure many different linguistic dimensions, e.g. AoA, word length, word frequency and concreteness in context. Empirically, this indicates that the effect of embodiment is not simply explainable by other factors. Theoretically, we argue that this correlation can be causally explained through the lexicalisation of conventional metaphors. We simplify conventionality by assuming that word frequency and age of acquisition (AoA) are indicators of conventionality, i.e. more frequent words and words that are acquired early in life are more conventional. Nonetheless, a thorough explanation of the effect of embodiment on LMs' capabilities for language tasks requires many more studies. Ethical Consideration It should be noted that a key component of the experiment is built from (Sidhu et al., 2014) with their ratings of relative embodiment. 
For their study, the authors have sampled data exclusively from (N=67, 57 female) "graduate students at the University of Calgary who participated in exchange for bonus credit in a psychology course, had a normal or corrected-to-normal vision, and reported English proficiency" (Sidhu et al., 2014). Even though *embodiment* is supposed to be a general, human experience, the pool of participants is relatively homogeneous (mostly female, educated and presumably able-bodied). A broader and more diverse set of ratings, specifically concerning differently-abled participants and cultural backgrounds should be targeted. Computing Cost All model inferences (except OpenAI) have been conducted on University servers with 8x *NVIDIA RTX* A6000 (300 W). Each experiment for each model lasted at most 10min with full power consumption. A conservative estimate of 2,400 W (8 GPUs x 300 W) for 20 experiments results in a power consumption of at most 8 kWh, which equals emission of at most ∼3.5 kg CO2 for all experiments with model inference. Data and Artefact Usage Existing artefacts used in this research are attributed to their creators and their consent has been acquired before the studies. This concerns the *embodiment ratings* by Sidhu et al. (2014), the *Fig-QA* corpus by Liu et al. (2022) and the *concreteness predictor* by Rotaru (2020). ## Acknowledgements We would like to thank the Pexman Language Processing Lab (University of Calgary) for sharing the embodiment ratings. We would also like to thank Team Andi for publishing and providing their CONcreTEXT submission. Without model access and hosting by *Huggingface* and *OpenAI* the study would not have been possible. We would also like to thank Marianna Bolognesi and Stefan Riegl for their valuable feedback. Moreover, we thank the three ARR reviewers for their valuable feedback. ## References Lawrence W Barsalou. 1999. Perceptual symbol systems. *Behavioral and brain sciences*, 22(4):577–660. Emily M Bender and Alexander Koller. 2020. Climbing towards nlu: On meaning, form, and understanding in the age of data. In Proceedings of the 58th annual meeting of the association for computational linguistics, pages 5185–5198. Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020. Experience grounds language. In Proceedings of the 2020 Conference on EMNLP, pages 8718–8735. ACL. Sid Black, Gao Leo, Phil Wang, Connor Leahy, and Stella Biderman. 2021. GPT-Neo: Large Scale Autoregressive Language Modeling with MeshTensorflow. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901. Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. 2014. Concreteness ratings for 40 thousand generally known english word lemmas. *Behavior* research methods, 46(3):904–911. Tuhin Chakrabarty, Arkadiy Saakyan, Debanjan Ghosh, and Smaranda Muresan. 2022. Flute: Figurative language understanding and textual explanations. In Proceedings of the 2022 Conference on EMNLP. Lucia Colombo and Cristina Burani. 2002. The influence of age of acquisition, root frequency, and context availability in processing nouns and verbs. *Brain* and Language, 81(1-3):398–411. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. 
Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the NAACL: Human Language Technologies, page 4171–4186. Ezequiel A Di Paolo, Elena Clare Cuffari, and Hanne De Jaegher. 2018. *Linguistic bodies: The continuity* between life and language. MIT press. Lorenzo Gregori, Maria Montefinese, Daniele P Radicioni, Andrea Amelio Ravelli, and Rossella Varvara. 2020. Concretext@ evalita2020: The concreteness in context task. In *EVALITA proceedings*. CEUR.org. Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Luke Benson, Lucy Sun, Ekaterina Zubova, Yujie Qiao, Matthew Burtell, et al. 2022. Folio: Natural language reasoning with firstorder logic. *arXiv preprint arXiv:2209.00840*. Olaf Hauk, Ingrid Johnsrude, and Friedemann Pulvermüller. 2004. Somatotopic representation of action words in human motor and premotor cortex. *Neuron*, 41(2):301–307. Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spacy: Industrialstrength natural language processing in python. Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. 2022. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. *arXiv preprint arXiv:2201.07207*. Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. 2013. *An introduction to statistical learning*, volume 112. Springer. Mark Johnson. 2013. The body in the mind: The bodily basis of meaning, imagination, and reason. University of Chicago press. Victor Kuperman, Hans Stadthagen-Gonzalez, and Marc Brysbaert. 2012. Age-of-acquisition ratings for 30,000 english words. *Behavior research methods*, 44(4):978–990. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In Proceedings of ICRL 2020. Emmy Liu, Chen Cui, Kenneth Zheng, and Graham Neubig. 2022. Testing the ability of language models to interpret figurative language. In Proceedings of the 2022 Conference of the NAACL. ACL. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Jean M Mandler. 1992. How to build a baby: Ii. conceptual primitives. *Psychological review*, 99(4):587. Jean M Mandler and Cristóbal Pagán Cánovas. 2014. On defining image schemas. *Language and cognition*, 6(4):510–532. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Armand Stefan Rotaru. 2020. Andi@ concretext: Predicting concreteness in context for english and italian using distributional models and behavioural norms (short paper). In *EVALITA proceedings*. CEUR.org. Soumya Sanyal, Zeyi Liao, and Xiang Ren. 2022. Robustlr: Evaluating robustness to logical perturbation in deductive reasoning. In Proceedings of the 2022 Conference on EMNLP. Karen DI Schuil, Marion Smits, and Rolf A Zwaan. 2013. Sentential context modulates the involvement of the motor cortex in action language processing: An fmri study. 
*Frontiers in human neuroscience*, 7:100. Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. 2020. Alfred: A benchmark for interpreting grounded instructions for everyday tasks. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pages 10740–10749. David M Sidhu, Rachel Kwan, Penny M Pexman, and Paul D Siakaluk. 2014. Effects of relative embodiment in lexical and semantic processing of verbs. Acta psychologica, 149:32–39. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. *arXiv preprint* arXiv:2206.04615. Kevin Stowe, Prasetya Utama, and Iryna Gurevych. 2022. IMPLI: Investigating NLI models' performance on figurative language. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5375–5388, Dublin, Ireland. ACL. Alessandro Suglia, Qiaozi Gao, Jesse Thomason, Govind Thattai, and Gaurav Sukhatme. 2021. Embodied bert: A transformer model for embodied, language-guided visual task completion. In *Novel* Ideas in Learning-to-Learn through Interaction at EMNLP2021. Ronen Tamari, Chen Shani, Tom Hope, Miriam RL Petruck, Omri Abend, and Dafna Shahaf. 2020. Language (re) modelling: Towards embodied language understanding. pages 6268–6281. Lewis Tunstall, Leandro von Werra, and Thomas Wolf. 2022. *Natural language processing with transformers*. O'Reilly Media, Inc. Walter JB Van Heuven, Pawel Mandera, Emmanuel Keuleers, and Marc Brysbaert. 2014. Subtlex-uk: A new and improved word frequency database for british english. Quarterly journal of experimental psychology, 67(6):1176–1190. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022. Emergent abilities of large language models. *Transactions* on Machine Learning Research. Survey Certification. Honghua Zhang, Liunian Harold Li, Tao Meng, KaiWei Chang, and Guy Van den Broeck. 2022a. On the paradox of learning to reason from data. *arXiv* preprint arXiv:2205.11502. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022b. Opt: Open pre-trained transformer language models. *arXiv preprint arXiv:2205.01068*. Rolf A Zwaan. 2014. Embodiment and language comprehension: Reframing the discussion. Trends in cognitive sciences, 18(5):229–234. ## A **Appendix: Further Samples Form** Cemb Further samples from CEmb are provided in Table 5. The table depicts a selection of linguistic examples of more embodied and less embodied metaphors. Originally, these samples are form the *Fig-QA* data set by Liu et al. (2022), providing the entries for the columns *Figurative Phrase* and *Target Interpretation*. The *Embod. Score* is the embodiment rating of the action identified in the respective column and taken from Sidhu et al. (2014). 
A lower embodiment score indicates that the action has been rated as "less embodied". In Table 5, none of the language models shows a statistically significant correlation between the variables (α < 0.05). ## B Appendix: Concreteness Correlation Table 6 shows the results of the the point biserial correlation between the concreteness of action in context and performance of LM on the interpretation of figurative language phrases (CEmb). pointbiserial of the *scipy stats* package for Python has been used to determine the correlation. This function uses a t-test with n-1 degrees of freedom. The value of the point-biserial correlation has been calculated from: $$r_{\mathrm{pb}}={\frac{({\bar{Y}}_{1}-{\bar{Y}}_{0})}{s_{y}}}{\sqrt{\frac{N_{1}N_{2}}{N(N-1)}}}$$ $$(1)$$ N(N − 1) (1) With Y0 and Y1 as the means of the metric observations; N1 and N2 as the number of observations; N as the total number of observations and sy as the standard deviation of all the metric observations. ## C Appendix: Reproduction And Suffix Prompting Table 7 presents the results of the zero-shot performance of the GPT-3 models Ada (∼350M parameters) and *Davinci* (∼175B parameters) on the different corpora (Table 1) with respect to the suffix that is to say. On all data sets (CLiu, CEmb, C*N oE* and CEmb + C*N oE*) performance of both models is better if the suffix is provided. ## D Appendix: Linear Regression Data In addition to the VIF, we performed two linear regression analyses for each of the models (and sizes). These results are visualized in Figure 3. The first includes all features (embodiment score, Age of Acquisition, word frequency and word length). The second excludes the embodiment score and shows a lower coefficient of determination (R2) for all regressions. Details of these linear regressions are exemplified in the results for *GPT3_350m* in Table 8 (with embodiment score feature) and in Table 9 (without embodiment score feature). | Figurative Phrase | Target Interpretation | Embod. Score | Action | |--------------------------------------------------|-----------------------------------------|----------------|------------| | The chihuahua believes it is a wolf | The small dog thinks it is undefeatable | 2.83 | believe | | The chihuahua believes it is a lap blanket | The small dog always stays on your lap | 2.83 | believe | | He knew her like a sister | He knew her very well. | 3.00 | know | | He knew her like a stranger | He didn't know her well. | 3.00 | know | | He guided her like a lighthouse. | He was a good guide. | 3.61 | guide | | He guided her like a broken GPS. | He was a terrible guide. | 3.61 | guide | | The argument appears as a crystal clear spring | The argument makes sense | 3.90 | appear | | The argument appears as a muddy rut | The argument is senseless | 3.90 | appear | | the movie raised your spirits to heaven | The movie was uplifting. | 4.29 | raise | | the movie raised your spirits to the ocean floor | The movie was depressing. 
| 4.29 | raise | | It was buried as deep as an oil well | It was buried deep | 4.87 | bury | | It was buried as deep as a bathtub | It was not buried deep | 4.87 | bury | | The reporter wrote like a monkey on crack | The reporter wrote badly | 5.19 | write | | The reporter wrote like Hemingway | The reporter wrote well | 5.19 | write | | He should be cooking for Gordon Ramsay | His cooking is good | 5.47 | cook | | He should be cooking for McDonalds | His cooking is bad | 5.47 | cook | | She sings like a nightingale | She sings beautifully | 6.03 | sing | | She sings like an angry crow | She sings horribly | 6.03 | sing | | The food tasted like eating a mother's love | The food tasted amazing | 6.26 | eat, taste | | The food tasted like eating the bottom of a shoe | The food tasted disgusting | 6.26 | eat, taste | | He could sprint like the wind | He was fast. | 6.46 | sprint | | He could sprint like a tortoise | He was slow. | 6.46 | sprint | Table 5: Linguistic examples of more embodied and less embodied metaphors, sampled from CEmb (derived from Liu et al. (2022)). Every pair of sentences is presented with both target sentences to each model. Embodiment scores retrieved from Sidhu et al. (2014). | Model | GPT-3 | GPT-3 | OPT | OPT | GPT-Neo | GPT-NeoX | GPT-2 | GPT-2 XL | |-------------|---------|---------|--------|--------|-----------|------------|---------|------------| | Parameters | 350M | 175B | 350M | 13B | 125M | 20B | 355M | 1.5B | | Correlation | -0.016 | -0.039 | -0.037 | -0.032 | -0.020 | -0.026 | -0.036 | -0.003 | | p-value | 0.554 | 0.139 | 0.163 | 0.227 | 0.448 | 0.318 | 0.176 | 0.921 | Table 6: Results of the point biserial correlation between the concreteness of action in context and performance of LM on the interpretation of figurative language phrases (CEmb). None of the LMs shows a statistically significant correlation between the variables (α < 0.05). GPT-3 (small) GPT-3 (large) Corpus w/ suffix w/o suffix w/ suffix w/o suffix CLiu 0.601 0.591 - 0.684 CEmb 0.594 0.577 0.667 0.661 C*N oE* 0.583 0.572 0.661 0.659 CEmb + C*N oE* 0.591 0.574 0.665 0.660 Table 7: Comparing the zero-shot performance of the GPT-3 models Ada (∼350M parameters) and *Davinci* (∼175B parameters) on the different corpora (Tab. 2). The comparison includes the variable *that is to say* suffix prompting. Table 8: Linear Regression results with all features including embodiment score for GPT-3 (ada). *coef* : regression coefficients, se: standard errors, T: T-values, *p-val*: p-values, r2: coefficient of determination (R2), adjr2: adjusted R2, *CI[2.5%]*: lower confidence intervals, *CI[97.5%]*: upper confidence intervals. 
| feature | coef | se | T | p-val | R2 | adj_r2 | CI[2.5%] | CI[97.5%] |
|-------------------|----------|---------|----------|---------|---------|----------|----------|-----------|
| Intercept | 0.45075 | 0.12647 | 3.56405 | 0.00038 | 0.00393 | 0.00115 | 0.20266 | 0.69884 |
| embodiment rating | 0.02906 | 0.01449 | 2.00613 | 0.04503 | | | 0.00064 | 0.05748 |
| freq-rating | -0.00000 | 0.00000 | -0.07397 | 0.94104 | | | -0.00000 | 0.00000 |
| aoa-rating | 0.00163 | 0.01278 | 0.12763 | 0.89846 | | | -0.02344 | 0.02671 |
| len-rating | 0.00073 | 0.01385 | 0.05249 | 0.95814 | | | -0.02645 | 0.02790 |

| Feature | coef | se | T | p-val | R2 | adj_r2 | CI[2.5%] | CI[97.5%] |
|-------------|----------|---------|---------|---------|---------|----------|----------|-----------|
| Intercept | 0.6659 | 0.0671 | 9.9174 | 0.00000 | 0.00114 | -0.00095 | 0.5342 | 0.7976 |
| freq-rating | -0.00000 | 0.00000 | -1.0400 | 0.29852 | | | -0.00000 | 0.00000 |
| aoa-rating | -0.00994 | 0.01142 | -0.8700 | 0.38434 | | | -0.03234 | 0.01246 |
| len-rating | -0.00227 | 0.01379 | -0.1648 | 0.86916 | | | -0.02931 | 0.02477 |

Table 9: Linear regression results without the embodiment score feature for GPT-3 (ada); columns as in Table 8.

## Acl 2023 Responsible Nlp Checklist A For Every Submission: A1. Did you describe the limitations of your work? Left blank. A2. Did you discuss any potential risks of your work? Left blank. A3. Do the abstract and introduction summarize the paper's main claims? Left blank. A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Left blank. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Left blank. C3.
Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
yu-etal-2023-making
Making Better Use of Training Corpus: Retrieval-based Aspect Sentiment Triplet Extraction via Label Interpolation
https://aclanthology.org/2023.findings-acl.303
In this paper, we aim to adapt the idea of retrieval-based neural approaches to the Aspect Sentiment Triplet Extraction (ASTE) task. Different from previous studies that retrieve semantically similar neighbors, the ASTE task poses specialized challenges for such adaptation, i.e., its purpose includes predicting the sentiment polarity, which is usually aspect-dependent. Semantically similar neighbors with different polarities can be uninformative or even counterproductive. To tackle this issue, we propose a retrieval-based neural ASTE approach, named RLI (Retrieval-based Aspect Sentiment Triplet Extraction via Label Interpolation), which exploits the label information of neighbors. Given an aspect-opinion term pair, we retrieve semantically similar triplets from the training corpus and interpolate their label information into the augmented representation of the target pair. The retriever is jointly trained with the whole ASTE framework, so neighbors with both similar semantics and similar sentiments can be recalled with the aid of this distant supervision. In addition, we design a simple yet effective pre-training method for the retriever that implicitly encodes label similarities. Extensive experiments and analysis on two widely used benchmarks show that the proposed model establishes a new state of the art on ASTE.
# Making Better Use Of Training Corpus: Retrieval-Based Aspect Sentiment Triplet Extraction Via Label Interpolation

Guoxin Yu ∗, Lemao Liu †, Haiyun Jiang, Shuming Shi, **Xiang Ao** † Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China. University of Chinese Academy of Sciences, Beijing 100049, China. Tencent AI Lab, China. Institute of Intelligent Computing Technology, Suzhou, CAS. {yuguoxin20g,aoxiang}@ict.ac.cn {redmondliu, haiyunjiang, shumingshi}@tencent.com

*Work done while this author was an intern at Tencent. †Corresponding authors.

## Abstract

In this paper, we aim to adapt the idea of retrieval-based neural approaches to the Aspect Sentiment Triplet Extraction (ASTE) task. Different from previous studies that retrieve semantically similar neighbors, the ASTE task poses specialized challenges for such adaptation, i.e., its purpose includes predicting the sentiment polarity, which is usually aspect-dependent. Semantically similar neighbors with different polarities can be uninformative or even counterproductive. To tackle this issue, we propose a retrieval-based neural ASTE approach, named RLI (Retrieval-based Aspect Sentiment Triplet Extraction via Label Interpolation), which exploits the label information of neighbors. Given an aspect-opinion term pair, we retrieve semantically similar triplets from the training corpus and interpolate their label information into the augmented representation of the target pair. The retriever is jointly trained with the whole ASTE framework, so neighbors with both similar semantics and similar sentiments can be recalled with the aid of this distant supervision. In addition, we design a simple yet effective pre-training method for the retriever that implicitly encodes label similarities. Extensive experiments and analysis on two widely used benchmarks show that the proposed model establishes a new state of the art on ASTE.

## 1 Introduction

As an emerging sub-task of Aspect-based Sentiment Analysis (ABSA), Aspect Sentiment Triplet Extraction (ASTE) extracts all sentimental triplets of a given sentence. Every triplet contains three elements, namely aspect terms, opinion terms, and their corresponding sentiment polarities. For example, in the sentence "*Great food but the service is horrible.*", ASTE attempts to identify (food, *great*, positive) and (service, *horrible*, *negative*).

![0_image_0.png](0_image_0.png)

Figure 1: An example sentence (in gray block). RLI retrieves triplets considering both label similarity and semantic similarity (in green block). Existing retrieval-based methods fetch similar sentences only according to semantic similarities (in blue block).

Since the sentiment polarity of a triplet is aspect-dependent and determined by the corresponding opinion terms, establishing reciprocity among elements within the triplets could yield easier sentiment predictions. Following this idea, existing work devised advanced methods to explore the correlation between the aspect and the opinion terms. To name some, Xu et al. (2020b); Wu et al. (2020) proposed new tagging schemes to build connections among three elements within a sentence. Li et al. (2019); Zhang et al. (2020); Xu et al. (2021); Zhao et al. (2022) designed various end-to-end frameworks to explore relations among elements by sub-task interaction mechanisms. Chen et al. (2021a); Liu et al. 
(2022) matched the elements by machine reading comprehension. Despite their effectiveness, existing methods may fail to clarify the intricate relationships among elements in some challenging cases, e.g., sentences with uncommon aspect/opinion terms, or the aspect and opinion terms are distant from each other. For example, as shown in the first sentence in Fig. 1, "*scallop roll*" may be difficult to be extracted because "*scallop*" is an uncommon aspect word that rarely appeared in the training set. And it is intractable to connect "*scallop roll*" with "*spicy*" due to the long distance between them. These make it challenging to extract the triplet (scallop roll, spicy, *positive*). To tackle the above issues, we attempt to apply retrieval-based models to ASTE, which have shown strength in several NLP tasks (Cai et al., 2022; Shang et al., 2021; Xu et al., 2020a) such as language model, machine translation, etc. Their basic idea is to retrieve semantic similar neighbors from training corpus or external data to improve the model's robustness towards infrequent data points (Meng et al., 2021; Li et al., 2022). However, the ASTE task has its specialized challenges when adapting, i.e., its purpose includes predicting the sentiment polarities and it is usually aspect-dependent. For example, the two triplets (battery, long, *positive*) and (boot-time, long, *negative*) have the same opinion word but opposite aspect-level sentiment. Hence, it may derive a drawback of the conventional retrieval-based model: the semantic similar neighbors with different sentiments may be infeasible even counterproductive. To remedy the challenges, we propose a retrieval-based neural ASTE approach, named RLI (Retrieval-based Aspect Sentiment Triplet Extraction via Label Interpolation), which can exploit the label information of neighbors. We first collect all triplets from the training set to construct a knowledge store and detect all candidate aspectopinion pairs. For each pair, we retrieve semantic similar triplets from the constructed store. Then we interpolate their label information into the augmented representation of the candidate pair to predict the final sentiment. Unlike existing retrievalbased methods which retrieve neighbors only according to semantic similarities, we jointly train the retriever and the triplets extraction model such that the neighbors with both similar semantics and sentiment could be fetched. In addition, we propose a simple yet effective method to pre-train the proposed retriever, which could encode label information implicitly by using pseudo-labeled data before the joint training. Exhibiting our idea by an example in Fig. 1, RLI could retrieve a relevant triplet (*tuna roll*, spicy, *positive*) for the candidate pair (*scallop roll*, spicy) (cf. the green block). By high relevance between "*tuna roll*" and "*scallop roll*", we can infer that (scallop roll, *spicy*) could be a valid pair and deduce its *positive* polarity. While, as shown in the blue block, the conventional retrieval-based methods may likely fetch a triplet (cocktail, spicy, negative), which has an opposite sentiment and may give false guidance. Extensive experimental results and analysis on two standard datasets for ASTE show that the proposed model establishes a new state-of-the-art on the ASTE task and performs well on challenging examples. ## 2 Related Work 2.1 Aspect Sentiment Triplet Extraction Recall that the key of resolving ASTE is to establish reciprocity among three elements within the triplets. 
Early studies (Peng et al., 2020; Huang et al., 2021) designed pipeline models to extract these elements successively and group them into triplets, which suffered from error propagation and aggregation. To avoid such obstacles, Xu et al. (2020b); Wu et al. (2020); Chen et al. (2020) proposed novel tagging schemes to connect the elements and train the models in an end-to-end fashion. Zhang et al. (2020); Zhao et al. (2022); Huan et al. (2022) devised multi-task frameworks to exploit the interactions among various sub-tasks. Chen et al. (2021b, 2022) constructed different graphs from the given text and fully utilized the relations between words. Besides, some studies gradually put forward new paradigms for ASTE. Yu et al. (2021) regarded the aspect and opinion terms as arguments of the expressed sentiment in a reinforcement learning framework. Chen et al. (2021a); Liu et al. (2022) converted ASTE to a machine reading comprehension problem. Xu et al. (2021) considered ASTE as a span prediction problem. Recently, a series of generative methods (Zhang et al., 2021a; Yan et al., 2021; Zhang et al., 2021b; Gao et al., 2022) came to the fore, which regard ASTE as a text generation problem and achieve superior performance. Nevertheless, all the above methods may become fragile on sentences with multiple triplets, where aspect or opinion terms are uncommon, correlations are complicated, or sentiments are unclear.

## 2.2 Retrieval Augmented Methods

Prior studies have proved that retrieval-based methods could improve performance across a variety of NLP tasks. They retrieve similar neighbors from external knowledge to improve the model's robustness towards infrequent data points, and have been applied in question answering (Li et al., 2020; Karpukhin et al., 2020), neural machine translation (Tu et al., 2018; Xu et al., 2020a; Shang et al., 2021; Cai et al., 2022), language modeling (Guu et al., 2017; Khandelwal et al., 2019), dialog generation (Fan et al., 2020; Thulke et al., 2021), etc. Due to the considerable computational cost of retrieving from large-scale corpora, Wang et al. (2022) proposed to fetch the data most similar to the input only from the training data. They simply concatenated the retrieved data with the input to achieve significantly better performance on many natural language processing tasks. Despite their effectiveness, these methods only consider semantic information but ignore label similarity, which may retrieve triplets with similar semantics yet opposite sentiments. Hence they might be ineffective for solving ASTE.

## 3 Methodology Overview

## 3.1 Task Definition

Suppose X = {x1, x2, · · · , xn} is a sentence with n words, and each span is represented by S = (S1, S2), where S1 and S2 denote the start and end positions of the span. Typically, ASTE is treated as a span extraction task: given a sentence X, ASTE aims to extract a triplet set T = {⟨A, O, y⟩}, where A = (A1, A2) and O = (O1, O2) respectively denote the spans of an aspect term and an opinion term, and y ∈ {positive, neutral, negative} is the sentiment polarity of the triplet. It is worth noting that each triplet ⟨A, O, y⟩ is dependent on a sentence X, but we only mention ⟨A, O, y⟩ and skip its corresponding sentence X for brevity.

## 3.2 Model Overview

The proposed approach consists of four distinct modules, namely *triplet store construction*, *candidate aspect and opinion detection*, *triplet-based retrieval*, and *triplet extraction*, which are shown in Figure 2. 
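To make the task definition above concrete, here is a minimal, illustrative sketch (not the authors' released code) of the data structures in §3.1: a span is a pair of start/end token indices, and a triplet ⟨A, O, y⟩ pairs an aspect span with an opinion span and a polarity. The class and field names are our own.

```python
from dataclasses import dataclass
from typing import List, Tuple

# A span is a pair of (start, end) token indices into the sentence, inclusive.
Span = Tuple[int, int]

@dataclass
class Triplet:
    """One ASTE triplet <A, O, y> attached to its source sentence."""
    aspect: Span          # A = (A1, A2)
    opinion: Span         # O = (O1, O2)
    sentiment: str        # y in {"positive", "neutral", "negative"}
    sentence: List[str]   # the tokenized sentence X the triplet comes from

# Example for "Great food but the service is horrible ."
tokens = ["Great", "food", "but", "the", "service", "is", "horrible", "."]
gold = [
    Triplet(aspect=(1, 1), opinion=(0, 0), sentiment="positive", sentence=tokens),
    Triplet(aspect=(4, 4), opinion=(6, 6), sentiment="negative", sentence=tokens),
]
```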
The first module constructs a triplet store for triplet-level retrieval (§4.1). The second one extracts the candidate aspect-opinion span pairs based on a span-level sequence labeling method (§4.2). The third phase retrieves neighbors for each candidate aspect-opinion pair from the constructed store (§4.3). Moreover, we interpolate the representations and label information of the retrieved triplets with the candidate pairs and further predict their final sentiment polarities (§4.4). Finally, we present how to pre-train the retriever to implicitly encode label information and how to jointly train the retrieval model and the ASTE model (§5).

![3_image_0.png](3_image_0.png)

## 4 RLI Model

## 4.1 Triplet Store Construction

ASTE pays more attention to the aspect terms and opinion terms than to other words in the given sentence X. To this end, we construct a knowledge store M containing all the triplets in the training set, instead of accommodating all the sentences. To represent each triplet ⟨A, O, y⟩ in M, we employ BERT to define its key and value as follows. We first use BERT to get the representations H = {h1, h2, · · · , hn} for each word in the sentence X. Then we define the representations of the aspect A and opinion term O by E_A and E_O:

$$\begin{array}{l}E_{A}=h_{A_{1}}\oplus h_{A_{2}}\oplus f_{\mathrm{span}}(A_{2}-A_{1}+1),\\ E_{O}=h_{O_{1}}\oplus h_{O_{2}}\oplus f_{\mathrm{span}}(O_{2}-O_{1}+1),\end{array}\tag{1}$$

where ⊕ is the concatenation of two vectors and f_span works as a trainable feature extractor related to the span width, following Xu et al. (2021). Afterward, we concatenate the above spans and a trainable sentiment embedding together to represent each triplet ⟨A, O, y⟩ as a key-value pair ⟨K, V⟩:

$$\begin{array}{l}K=E_{A}\oplus E_{O},\\ V=f_{\mathrm{sentiment}}(y),\end{array}\tag{2}$$

where f_sentiment is a learnable conversion function of a sentiment polarity y. Note that K and V encode the representation information and the label information of ⟨A, O, y⟩, respectively. Finally, the triplet store M = {⟨A^i, O^i, y^i⟩ | i ∈ [1, |M|]} can actually be represented by a set of key-value pairs M = {⟨K^i, V^i⟩ | i ∈ [1, |M|]}.

## 4.2 Candidate Aspects & Opinions Detection

Similar to Xu et al. (2021), during the inference stage, given a sentence X, we first extract all possible candidate spans, which may be either aspect or opinion spans, and then we employ a classifier to predict whether a candidate span S is an aspect, an opinion, or an invalid span. Specifically, we first use BERT to obtain the representation E_S for each candidate span S as for E_A and E_O in Eq. (1). Then a detection model P_det is used to detect the type of the candidate span S: aspect, opinion, or invalid span.

$$\begin{array}{l}E_{S}=h_{S_{1}}\oplus h_{S_{2}}\oplus f_{\mathrm{span}}(S_{2}-S_{1}+1),\\ P_{\mathrm{det}}(c|S,\mathbf{X})=\mathrm{softmax}(g(E_{S}))[c],\quad c\in\{\text{aspect, opinion, invalid}\},\end{array}\tag{3}$$

where g is a feed-forward neural network, and [c] denotes taking the probability for the dimension corresponding to the type c. Theoretically, there are $\frac{n(n+1)}{2}$ spans in the sentence X, but it is too slow to make predictions for all possible spans. In practice, we limit the maximum length of spans, thus discarding some excessively long spans. According to Eq. (3), we select the top K candidate aspect spans and opinion spans. 
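The span representations and the detection head described so far (Eqs. 1-3) can be sketched in PyTorch as below. This is an illustrative sketch under our own assumptions about sizes and module names (e.g., `hidden`, `width_dim`, `max_width`); it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpanEncoder(nn.Module):
    """Sketch of Eqs. (1)-(3): span representations, triplet keys/values,
    and span-type detection. Dimensions are illustrative."""

    def __init__(self, hidden=768, width_dim=128, max_width=8, sent_dim=128):
        super().__init__()
        self.f_span = nn.Embedding(max_width + 1, width_dim)   # span-width feature f_span
        self.f_sentiment = nn.Embedding(4, sent_dim)            # pos / neu / neg / none
        span_dim = 2 * hidden + width_dim
        self.g = nn.Sequential(nn.Linear(span_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 3))             # aspect / opinion / invalid

    def span_repr(self, H, span):                                # Eq. (1): E_S / E_A / E_O
        s, e = span                                              # spans longer than max_width are discarded
        width = torch.tensor([e - s + 1], device=H.device)
        return torch.cat([H[s], H[e], self.f_span(width).squeeze(0)], dim=-1)

    def triplet_key_value(self, H, aspect, opinion, sentiment_id):   # Eq. (2)
        K = torch.cat([self.span_repr(H, aspect), self.span_repr(H, opinion)], dim=-1)
        V = self.f_sentiment(torch.tensor(sentiment_id, device=H.device))
        return K, V

    def detect(self, H, span):                                   # Eq. (3): P_det(c | S, X)
        return torch.softmax(self.g(self.span_repr(H, span)), dim=-1)
```

Here `H` is the (n, hidden) matrix of BERT token representations for the sentence; building the store amounts to calling `triplet_key_value` for every gold triplet in the training set.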
Subsequently, we pair candidate aspect spans and opinion spans to create $K^2$ candidate aspect-opinion pairs. Suppose ⟨A, O⟩ denotes each candidate aspect-opinion pair; it can be represented by K = E_A ⊕ E_O as defined in Eq. (2).

## 4.3 Triplet-Based Retrieval

For each candidate aspect-opinion pair ⟨A, O⟩, we retrieve the L most relevant triplets from the constructed store by a relevance function between the pair ⟨A, O⟩ and each triplet ⟨A^i, O^i, y^i⟩ from the triplet store M. Formally, the relevance function d between ⟨A, O⟩ and ⟨A^i, O^i⟩ is defined as:

$$d(A,O;A^{i},O^{i})=K^{\top}\mathbf{W}K^{i},\tag{4}$$

where W is a trainable parameter matrix, K is the representation of the candidate pair ⟨A, O⟩, and K^i is the representation of each aspect-opinion pair ⟨A^i, O^i⟩ in M. According to the relevance function d, we select the top-L triplets, denoted by M(A, O) = {⟨A^l, O^l, y^l⟩ | l ∈ [1, L]}, with the highest relevance scores in M, which will be further used as memory to extract the triplet in the next subsection.

## 4.4 Triplets Extraction

So far, we have acquired the representations of the $K^2$ candidate aspect-opinion pairs and their similar triplets by retrieval. For each aspect-opinion pair ⟨A, O⟩, recall that M(A, O) = {⟨A^l, O^l, y^l⟩ | l ∈ [1, L]} denotes the retrieved triplets. We interpolate both the representation and the label information of the retrieved triplets to predict the polarity of ⟨A, O⟩. Specifically, we aggregate the dense representations of each candidate pair and its retrieved triplets by using an attention model defined by d:

$$h(A,O)=\Big(K+\sum_{l=1}^{L}\alpha_{l}K^{l}\Big)\oplus\sum_{l=1}^{L}\alpha_{l}V^{l},\qquad\alpha_{l}=\frac{\exp d(A,O;A^{l},O^{l})}{\sum_{j=1}^{L}\exp d(A,O;A^{j},O^{j})},\tag{5}$$

where K and K^l are the representations of ⟨A, O⟩ and ⟨A^l, O^l⟩ as defined in Eq. (2), respectively, and V^l is the sentiment label embedding of the retrieved triplet ⟨A^l, O^l, y^l⟩. Next, we predict the final sentiment polarity of the $K^2$ pairs by a neural model. For each candidate aspect-opinion pair ⟨A, O⟩,

$$P_{\mathrm{ext}}(y|A,O,\mathbf{X})=\mathrm{softmax}(F(h(A,O)))[y],\quad y\in\{\text{positive, negative, neutral, none}\},\tag{6}$$

where F is a feed-forward neural network, and "none" denotes that the aspect-opinion pair is not a meaningful pair with a definite sentiment polarity. In this way, we can not only achieve aspect-opinion pair extraction by judging whether the label is "none", but also extract triplets by identifying valid pairs with definite sentiments.

## 5 Training

## 5.1 Pre-Training For Retrieval

To make the retriever memorize the sentiment similarity information in advance, we propose a simple yet effective method to pre-train the retriever by using external unlabeled data, which prompts the retrieved triplets to have similar sentiments. Specifically, over the external unlabeled data, we first use the *candidate aspect & opinion detection* module to extract aspect-opinion pairs. Then a feed-forward neural network is used to predict whether they are valid and further determine their sentiment polarities. In this way, we obtain a set of triplets {⟨A, O, y⟩} from the external data, where y is the pseudo polarity predicted by the neural network. We call them pseudo-labeled data. Furthermore, for each triplet ⟨A, O, y⟩, we randomly select two triplets ⟨A′, O′, y⟩ and ⟨A′′, O′′, y′⟩ which satisfy the following constraints: the former has the same polarity and the latter has a different sentiment polarity, i.e., y′ ≠ y. 
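Before turning to the training objectives, the retrieval and label-interpolation steps just described (§4.3-§4.4, Eqs. 4-6) can be sketched as follows. This is a hedged, minimal PyTorch sketch: the bilinear scoring, top-L selection, and four-way classifier follow the equations, while the concrete dimensions and module names are our own assumptions.

```python
import torch
import torch.nn as nn

class TripletRetriever(nn.Module):
    """Sketch of Eqs. (4)-(6): bilinear relevance over the triplet store,
    attention-based interpolation of retrieved keys/values, and the final
    4-way pair classifier (pos / neg / neu / none)."""

    def __init__(self, key_dim, value_dim, hidden=768, top_l=5):
        super().__init__()
        self.W = nn.Parameter(torch.empty(key_dim, key_dim))     # W in Eq. (4)
        nn.init.xavier_uniform_(self.W)
        self.top_l = top_l
        self.F = nn.Sequential(nn.Linear(key_dim + value_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 4))

    def relevance(self, K, store_keys):                          # Eq. (4): d(A,O; A^i,O^i)
        return (K @ self.W) @ store_keys.T                       # shape (|M|,)

    def forward(self, K, store_keys, store_values):
        scores = self.relevance(K, store_keys)
        top_scores, idx = scores.topk(min(self.top_l, scores.size(0)))
        alpha = torch.softmax(top_scores, dim=-1)                # attention weights in Eq. (5)
        K_mix = K + (alpha.unsqueeze(-1) * store_keys[idx]).sum(0)
        V_mix = (alpha.unsqueeze(-1) * store_values[idx]).sum(0)
        h = torch.cat([K_mix, V_mix], dim=-1)                    # h(A, O) in Eq. (5)
        return torch.softmax(self.F(h), dim=-1)                  # P_ext(y | A, O, X), Eq. (6)
```

Here `K` is the candidate pair's key from Eq. (2), and `store_keys` / `store_values` stack the ⟨K^i, V^i⟩ pairs of the triplet store M.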
Inspired by contrastive learning, we optimize a ranking loss L_pre to maximize the relevance score between triplets with the same sentiment while minimizing that between triplets with opposite sentiments:

$$\begin{array}{c}{\mathcal L}_{\mathrm{pre}}=d(A,O;A^{\prime},O^{\prime})^{2}-\left(1-d(A,O;A^{\prime\prime},O^{\prime\prime})\right)^{2},\end{array}$$

where d(·) is the relevance function defined in Eq. (4), which measures the similarity between two triplets. After pre-training and initializing the relevance scores, we jointly train all the proposed modules. Due to the pre-training, the retriever encodes sentiment similarities, and RLI can retrieve helpful triplets to assist the sentiment prediction of the candidate aspect-opinion pair from a warm start.

## 5.2 Joint Training

For a sentence X, suppose S(X) denotes a span pool including K candidate spans for X. The standard practice to train ASTE models relies on manually annotated data. That is, for each span S ∈ S(X), there is a golden label c ∈ {aspect, opinion, invalid}; and for each aspect-opinion pair ⟨A, O⟩ ∈ S × S, there is a golden label y ∈ {positive, negative, neutral, none}. We employ the standard cross entropy to train the candidate aspect and opinion term detection model P_det, the triplet extraction model P_ext, as well as the retrieval similarity in a joint manner as follows:

$$\begin{array}{l}{{\mathcal L}_{\mathrm{det}}=-\sum_{\mathbf{X}}\sum_{S\in{\mathcal S}(\mathbf{X})}\log P_{\mathrm{det}}(c|S,\mathbf{X}),}\\ {{\mathcal L}_{\mathrm{ext}}=-\sum_{\mathbf{X}}\sum_{A,O\in{\mathcal S}(\mathbf{X})}\log P_{\mathrm{ext}}(y|A,O,\mathbf{X}),}\end{array}\tag{7}$$

where X is over a training set with manually annotated golden triplets. The overall loss is calculated as a weighted sum of the above two loss functions:

$${\mathcal{L}}={\mathcal{L}}_{\mathrm{det}}+\alpha\cdot{\mathcal{L}}_{\mathrm{ext}},\tag{8}$$

where α > 0 is a hyperparameter to trade off both loss terms. In each iteration, we first perform the triplet retrieval based on the model from the previous iteration. Next, we extract the triplets with the help of the retrieved triplets and update the parameters in the current iteration. Note that the relevance scores are used to define the representation h(A, O) through the attention in Eq. (5), on which the classifier P_ext is based. Therefore, minimizing L actually optimizes the three models, i.e., the detection model in Eq. (3), the triplet extraction model in Eq. (6), and the retrieval similarity in Eq. (4). It is notable that we do not use the external pseudo-labeled data to train the joint model but only to pre-train the retriever in §5.1: the triplet store for retrieval consists of all those triplets created from the original training data for ASTE, and the loss function in Eq. (8) is minimized over the original training data as well.

## 6 Experiment

## 6.1 Settings

Datasets.¹ To evaluate our method as comprehensively as possible, we conduct experiments on Da (Peng et al., 2020) and Db (Xu et al., 2020b). Both of them contain 3 datasets in the restaurant domain and 1 dataset in the electronics domain. For pre-training, we use two external datasets from He et al. (2018): one is from the Yelp domain, and the other is from the Amazon electronics domain. ¹See Appendix A for more details of datasets. 
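Before turning to the results, here is a minimal sketch of the training objectives from §5. The joint loss follows Eqs. (7)-(8) with standard cross-entropy; the pre-training objective is transcribed exactly as the L_pre formula printed above. The default `alpha = 5` reflects the hyperparameter reported in Appendix B; everything else (function names, tensor shapes) is our own illustrative choice.

```python
import torch
import torch.nn.functional as F

def pretrain_loss(d_same, d_diff):
    """Retriever pre-training objective, as printed in the paper:
    L_pre = d(A,O;A',O')^2 - (1 - d(A,O;A'',O''))^2,
    where d_same scores a same-sentiment neighbour and d_diff a
    different-sentiment one."""
    return d_same.pow(2) - (1.0 - d_diff).pow(2)

def joint_loss(det_logits, det_labels, ext_logits, ext_labels, alpha=5.0):
    """Sketch of Eqs. (7)-(8): span-type cross-entropy over candidate spans
    (aspect / opinion / invalid) plus pair-sentiment cross-entropy over
    candidate pairs (pos / neg / neu / none), weighted by alpha.
    det_logits: (num_spans, 3), ext_logits: (num_pairs, 4)."""
    L_det = F.cross_entropy(det_logits, det_labels)   # detection term of Eq. (7)
    L_ext = F.cross_entropy(ext_logits, ext_labels)   # extraction term of Eq. (7)
    return L_det + alpha * L_ext                      # Eq. (8)
```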
| Model | Res14 | Lap14 | Res15 | Res16 | | | | | |-------------|---------|---------|---------|---------|---------|-------|---------|-------| | Pair | Triplet | Pair | Triplet | Pair | Triplet | Pair | Triplet | | | WhatHowWhy♦ | 56.10 | 51.89 | 53.85 | 43.50 | 56.23 | 46.79 | 60.04 | 53.62 | | CMLA+♦ | 48.95 | 43.12 | 44.10 | 32.90 | 44.60 | 35.90 | 50.00 | 41.60 | | RINANTE+♦ | 46.29 | 34.03 | 29.70 | 20.00 | 35.40 | 28.00 | 30.70 | 23.30 | | Unified+♦ | 55.34 | 51.68 | 52.56 | 42.47 | 56.85 | 46.69 | 53.75 | 44.51 | | Dual-MRC♦ | 74.93 | 70.32 | 63.37 | 55.58 | 64.97 | 57.21 | 75.71 | 67.40 | | Generative∇ | 77.68 | 72.46 | 66.11 | 57.59 | 67.98 | 60.11 | 77.38 | 69.98 | | GAS] | - | 70.20 | - | 54.50 | - | 59.10 | - | 65.00 | | LEGO] | - | 72.60 | - | 59.50 | - | 63.20 | - | 71.50 | | JETt M=6 ∇ | - | 60.41 | - | 46.65 | - | 53.68 | - | 63.41 | | JET o M=6 ∇ | - | 63.92 | - | 50.00 | - | 54.67 | - | 62.98 | | SPAN* | 78.62 | 73.96 | 69.48 | 60.59 | 71.56 | 64.50 | 78.85 | 70.48 | | RLI(Ours) | 79.92 | 74.98 | 70.27 | 61.97 | 72.66 | 65.71 | 81.29 | 73.33 | Baselines. We compared our method 2 with various baselines, which are evaluated on Da and Db. - **Pipeline models**: WhatHowWhy (Peng et al., 2020), CMLA+ (Wang et al., 2017), RINANTE+ (Dai and Song, 2019), Unified+ (Li et al., 2019), and TOP (Huang et al., 2021). - **MRC based methods**: Dual-MRC (Mao et al., 2021), BMRC (Chen et al., 2021a). - **Reinforce learning based methods**: RL (Yu et al., 2021). - **Generative models**: Generative (Yan et al., 2021), GAS (Zhang et al., 2021b), LEGO (Gao et al., 2022). - **End-to-end models**: JET (Xu et al., 2020b), OTE-MTL (Zhang et al., 2020), GTS (Wu et al., 2020), SPAN (Xu et al., 2021), and EMCGCN (Chen et al., 2022). Evaluation Metrics. We implement five metrics to evaluate our proposed model: F1 score for pair extraction, Precision, Recall, *F1 score for triplet extraction*, and *Retrieval Accuracy*. Particularly, *Retrieval Accuracy* is the proportion of correct triplets retrieved, of which sentiment polarities are consistent with the gold label of the candidate aspectopinion pair. We select the best model based on the F1 score for triplet extraction on the development set. The reported scores are the average of 5 runs with distinct random seeds. ## 6.2 Main Results We compare our method with various baselines on Da and Db comprehensively. The results are reported in Table 1 and Table 2, respectively. Firstly, in Table 1, our model outperforms all the compared models on the *F1 score for pair extraction*. We speculate that our model could judge if a candidate aspect-opinion pair is valid or not by observing the relevance scores between a pair and its retrieved triplets. Secondly, as the two tables show, our model considerably improves precision, *recall*, and F1 score for triplet extraction compared to pipeline and end-to-end models over most datasets. This indicates that relevant triplets conduce to exploit the interactions between aspect terms and opinion terms. Thirdly, we observe that our model even achieves more competitive results than emerging generative methods, of which backbones may be stronger (T5 (Raffel et al., 2020) or BART (Lewis et al., 2019)). Such results suggest the superiority of retrieval-based methods. ## 6.3 Ablation Test In Table 3, we perform an ablation study and report the results on the development and test set of Db to investigate the effects of key modules. On the one hand, we compute the relevance scores according to semantic similarity. 
Then we execute the model to retrieve triplets based on the fixed semantic similarity to get the results "w/o joint". It follows that the F1 scores for triplets ex- | Model | Res14 | Lap14 | Res15 | Res16 | | | | | | | | | |-------------|---------|---------|---------|---------|-------|-------|-------|-------|-------|-------|-------|-------| | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 | | | WhatHowWhy♦ | 43.24 | 63.66 | 51.46 | 37.38 | 50.38 | 42.87 | 48.07 | 57.51 | 52.32 | 46.96 | 64.24 | 54.21 | | TOP] | 63.59 | 73.44 | 68.16 | 57.84 | 59.33 | 58.58 | 54.53 | 63.30 | 58.59 | 63.57 | 71.98 | 67.52 | | BMRC] | 72.17 | 65.43 | 68.64 | 65.91 | 52.15 | 58.18 | 62.48 | 55.55 | 58.79 | 69.87 | 65.68 | 67.35 | | RL] | 70.60 | 68.65 | 69.61 | 64.80 | 54.99 | 59.50 | 65.45 | 60.29 | 62.72 | 67.21 | 69.69 | 68.41 | | GAS[ | - | - | 72.16 | - | - | 60.78 | - | - | 62.10 | - | - | 70.10 | | OTE-MTL] | 63.07 | 58.25 | 60.56 | 54.26 | 41.07 | 46.75 | 60.88 | 42.68 | 50.18 | 65.65 | 54.28 | 59.42 | | GTS♦ | 67.76 | 67.29 | 67.50 | 57.82 | 51.32 | 54.36 | 62.59 | 57.94 | 60.15 | 66.08 | 69.91 | 67.93 | | JETt M=6 ♦ | 63.44 | 54.12 | 59.41 | 53.53 | 43.28 | 47.86 | 68.20 | 42.89 | 52.66 | 65.28 | 51.95 | 63.83 | | JETo M=6 ♦ | 70.56 | 55.94 | 62.40 | 55.39 | 47.33 | 51.04 | 64.45 | 51.96 | 57.53 | 70.42 | 58.37 | 63.83 | | SPAN ♦ | 72.89 | 70.89 | 71.85 | 63.44 | 55.84 | 59.38 | 62.18 | 64.45 | 63.27 | 69.45 | 71.17 | 70.26 | | EMC-GCN ∇ | 71.21 | 72.39 | 71.78 | 61.70 | 56.26 | 58.81 | 61.54 | 62.47 | 61.93 | 65.62 | 71.30 | 68.33 | | RLI (Ours) | 77.46 | 71.97 | 74.34 | 63.32 | 57.43 | 60.96 | 60.08 | 70.66 | 65.41 | 70.50 | 74.28 | 72.34 | | Dataset | Model | Dev F1 | Test F1 | |------------|------------------|----------|-----------| | w/o joint | 66.85 | 72.07 | | | Res14 | w/o sentiment | 67.55 | 73.70 | | joint | w/o pre-training | 67.12 | 72.58 | | full model | 68.00 | 74.34 | | | w/o joint | 60.03 | 60.33 | | | Lap14 | w/o sentiment | 61.90 | 60.54 | | joint | w/o pre-training | 61.06 | 60.02 | | full model | 62.55 | 60.96 | | | w/o joint | 70.83 | 63.99 | | | Res15 | w/o sentiment | 71.54 | 65.04 | | joint | w/o pre-training | 71.24 | 64.48 | | full model | 72.21 | 65.41 | | | w/o joint | 70.47 | 70.30 | | | Res16 | w/o sentiment | 71.44 | 71.39 | | joint | w/o pre-training | 70.75 | 71.69 | | full model | 73.04 | 72.34 | | traction over most datasets decrease by 1% − 2%, which proves that retrieving triplets only based on the semantic similarities is infeasible even counterproductive. However, joint training of the retriever and ASTE modules could dynamically optimize the retrieved triplets for better ASTE. On the other hand, we evaluate two ablated models under joint training. First, we amputate the label information PL l=1 αlV lin Eq. (5) to obtain model "w/o sentiment". Its degraded performance confirms the importance of sentiment label information. Second, we remove the retriever pre-training and jointly train the full model to obtain results | Models | Base | Base+Aug | Ours | |----------|--------|--------------|--------------| | Avg. F1 | 66.19 | 66.99(+0.80) | 68.26(+2.07) | Table 4: Average *F1 score for triplets extraction* on Db. "w/o pre-training". By comparison, we find that the pre-training increased the F1 scores, which verifies that pre-training encodes label similarity and improves the quality of retrieval. It makes the label information of retrieved triplets more similar to the gold sentiment polarities and thus achieves better sentiment prediction performance. 
## 6.4 Auxiliary Experiment

In order to prove that the improvement of our model derives from the triplet retrieval instead of the external augmented data, we conduct an auxiliary experiment and display the average F1 scores for triplet extraction in Table 4. Specifically, we remove the retrieval module from our full model to obtain a Base model. Then we pre-train the Base model on the pseudo-labeled data and fine-tune it on the original Db; the results are denoted as **Base+Aug**. As Table 4 shows, even though Base+Aug gets a 0.8 gain, our model achieves a higher 2.07 improvement compared to Base. Since we did not use external data for ASTE's joint training in our method, the results reveal that the capabilities of our model do not come from the external data but mainly from the assistance of triplet retrieval.

![7_image_0.png](7_image_0.png)

Figure 3: Results on Res14 of Db.

![7_image_1.png](7_image_1.png)

## 7 Analysis

## 7.1 Inference Results Analysis

To prove the advantage of our method in dealing with challenging cases, we execute an in-depth study to analyze the results of triplet extraction from two perspectives: dis, the distance between aspect and opinion terms, and fre, the frequency of aspect/opinion terms appearing in the training set:

$$dis=\frac{|i_{a}-i_{o}|}{n},\qquad fre=\min(fre_{a},fre_{o}),\tag{9}$$

where i_a and i_o represent the start indexes of the aspect and opinion terms, n is the length of the sentence, and fre_a and fre_o are the numbers of times the aspect and opinion terms appear in the training set. Firstly, according to dis, we divided all the gold triplets into three groups and compared the proportion of triplets with different dis correctly extracted by Base (declared in §6.4) and our model. As Fig. 3(a) shows, our model successfully extracted more triplets with *dis* > 0.6. This means that the Base model may fail to connect the aspect term with a correct faraway opinion term. Nevertheless, our model could reduce the influence of long distance by referring to relevant triplets. Secondly, in Fig. 3(b) we categorized all the triplets into three groups by the frequency fre and found that our model could extract more triplets containing aspect/opinion terms that never appear in the training set (fre = 0). We conjecture that our model could find them by imitating similar triplets. As a result, we conclude that our model can solve such tricky cases better.

## 7.2 Sensitivity Analysis

We perform a sensitivity analysis to determine the effects of retrieval accuracy and the number of retrieved triplets. According to the triplets' retrieval accuracy, we put all the triplets into different buckets and compare the proportion of triplets correctly extracted by Base and our model over them. In Fig. 4(a), when the accuracy is in [0.8, 1], the improvements of our model are more significant. Unfortunately, when the accuracy falls into [0, 0.2], our model is even slightly weaker. This confirms that our model improves ASTE by retrieving triplets with the same sentiment polarities as the gold sentiments: the more triplets with the same sentiments retrieved, the greater their auxiliary function. Besides, we investigate the effects of the number L of retrieved triplets in Fig. 4(b). It is noted that the *F1 score for triplets extraction* increases with L. But if L is too large, the computational complexity increases rapidly while the performance improvement is weak. So we set L to 5 to obtain a trade-off between complexity and performance. 
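The two bucketing statistics of Eq. (9) used throughout this analysis are simple to compute; a small, self-contained helper (function and argument names are our own) is shown below.

```python
def dis_and_fre(aspect_start, opinion_start, sent_len, aspect_freq, opinion_freq):
    """Eq. (9): normalized aspect-opinion distance and the rarer term's
    training-set frequency, used to bucket gold triplets in the analysis."""
    dis = abs(aspect_start - opinion_start) / sent_len
    fre = min(aspect_freq, opinion_freq)
    return dis, fre

# e.g. aspect starting at token 2, opinion at token 9, sentence of 12 tokens,
# aspect never seen in training and opinion seen 7 times:
print(dis_and_fre(2, 9, 12, 0, 7))   # (0.5833..., 0)
```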
## 7.3 Case Study

To better understand the effectiveness of the retrieved triplets, we empirically perform a case study on Db in Fig. 5. BEFORE denotes the extracted results of the Base model (declared in §6.4), AFTER denotes the results of our full model, and RETRIEVED is our model's top-1 retrieved triplets and the sentences they come from. These cases demonstrate that the retrieved triplets help extract aspect-opinion pairs that are far apart and overcome the problem of uncommon aspect/opinion terms with low frequency in the training set.

## 8 Conclusion

In this paper, we proposed a retrieval-based ASTE approach named RLI, which could exploit the sentiment information of neighbors to solve challenging cases in ASTE. A retriever fetching both semantically and sentiment-similar triplets is devised, and we jointly train the retriever with the ASTE framework to remedy the specialized challenges of adapting retrieval-based methods to aspect-level sentiment analysis tasks. In addition, we proposed a simple yet effective pre-training method for the retriever to implicitly encode the label similarities. Extensive experiments and analyses have proven the superiority of the proposed method.

## Limitations

Our method has three major limitations. First, an auxiliary data corpus with label information might be rare. Recall that the corpus we used in this paper is the training set of different benchmarks. However, large-scale labeled data as the auxiliary data source might be infeasible in practice, hence it may limit the model deployment in real-world scenarios. Second, our method is trained and evaluated on English datasets. Additional data processing as well as annotation is necessary for other linguistic settings. 
Third, external unlabeled data with the same domain as the ASTE datasets are needed for the pre-training of the retriever. In our experiment, we choose two external datasets in the restaurant and electronics domains. If our method is applied to other fields, we need to find additional external data in the corresponding domain for pre-training. ## Acknowledgements The research work is supported by National Key RD Plan No. 2022YFC3303303, the National Natural Science Foundation of China under Grant (No.61976204), the Project of Youth Innovation Promotion Association CAS, Beijing Nova Program Z201100006820062. We would like to thank the anonymous reviewers for their insightful comments. ## References Deng Cai, Yan Wang, Lemao Liu, and Shuming Shi. 2022. Recent advances in retrieval-augmented text generation. In *Proceedings of the 45th International* ACM SIGIR Conference on Research and Development in Information Retrieval, pages 3417–3419. Hao Chen, Zepeng Zhai, Fangxiang Feng, Ruifan Li, and Xiaojie Wang. 2022. Enhanced multi-channel graph convolutional network for aspect sentiment triplet extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2974–2985, Dublin, Ireland. Association for Computational Linguistics. Peng Chen, Shaowei Chen, and Jie Liu. 2020. Hierarchical sequence labeling model for aspect sentiment triplet extraction. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 654–666. Springer. Shaowei Chen, Yu Wang, Jie Liu, and Yuelin Wang. 2021a. Bidirectional machine reading comprehension for aspect sentiment triplet extraction. In *Proceedings Of The AAAI Conference On Artificial Intelligence*, volume 35, pages 12666–12674. Zhexue Chen, Hong Huang, Bang Liu, Xuanhua Shi, and Hai Jin. 2021b. Semantic and syntactic enhanced aspect sentiment triplet extraction. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 1474–1483. Hongliang Dai and Yangqiu Song. 2019. Neural aspect and opinion term extraction with mined rules as weak supervision. In *Proceedings of the 57th Annual Meeting of the Association for Computational* Linguistics, pages 5268–5277. Angela Fan, Claire Gardent, Chloe Braud, and Antoine Bordes. 2020. Augmenting transformers with knn-based composite memory for dialogue. arXiv preprint arXiv:2004.12744. Tianhao Gao, Jun Fang, Hanyu Liu, Zhiyuan Liu, Chao Liu, Pengzhang Liu, Yongjun Bao, and Weipeng Yan. 2022. LEGO-ABSA: A prompt-based task assemblable unified generative framework for multitask aspect-based sentiment analysis. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 7002–7012, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Kelvin Guu, Panupong Pasupat, Evan Liu, and Percy Liang. 2017. From language to programs: Bridging reinforcement learning and maximum marginal likelihood. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 1051–1062, Vancouver, Canada. Association for Computational Linguistics. Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2018. Exploiting document knowledge for aspect-level sentiment classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 579–585, Melbourne, Australia. Association for Computational Linguistics. 
Hai Huan, Zichen He, Yaqin Xie, and Zelin Guo. 2022. A multi-task dual-encoder framework for aspect sentiment triplet extraction. *IEEE Access*. Lianzhe Huang, Peiyi Wang, Sujian Li, Tianyu Liu, Xiaodong Zhang, Zhicong Cheng, Dawei Yin, and Houfeng Wang. 2021. First target and opinion then polarity: Enhancing target-opinion correlation for aspect sentiment triplet extraction. *arXiv preprint* arXiv:2102.08549. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769– 6781. Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2019. Generalization through memorization: Nearest neighbor language models. *arXiv preprint arXiv:1911.00172*. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. *arXiv preprint arXiv:1910.13461*. Huayang Li, Yixuan Su, Deng Cai, Yan Wang, and Lemao Liu. 2022. A survey on retrieval-augmented text generation. *arXiv preprint arXiv:2202.01110*. Xiaoya Li, Yuxian Meng, Mingxin Zhou, Qinghong Han, Fei Wu, and Jiwei Li. 2020. Sac: Accelerating and structuring self-attention via sparse adaptive connection. *arXiv preprint arXiv:2003.09833*. Xin Li, Lidong Bing, Piji Li, and Wai Lam. 2019. A unified model for opinion target extraction and target sentiment prediction. In *Proceedings of the AAAI* conference on artificial intelligence, volume 33, pages 6714–6721. Shu Liu, Kaiwen Li, and Zuhe Li. 2022. A robustly optimized bmrc for aspect sentiment triplet extraction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 272–278. Yue Mao, Yi Shen, Chao Yu, and Longjun Cai. 2021. A joint training dual-mrc framework for aspect based sentiment analysis. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 13543–13551. Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. 2015. Image-based recommendations on styles and substitutes. In *Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval*, pages 43–52. Yuxian Meng, Shi Zong, Xiaoya Li, Xiaofei Sun, Tianwei Zhang, Fei Wu, and Jiwei Li. 2021. Gnn-lm: Language modeling based on global contexts via gnn. In *International Conference on Learning Representations*. Haiyun Peng, Lu Xu, Lidong Bing, Fei Huang, Wei Lu, and Luo Si. 2020. Knowing what, how and why: A near complete solution for aspect-based sentiment analysis. In *Proceedings of the AAAI Conference on* Artificial Intelligence, volume 34, pages 8600–8607. Maria Pontiki, Dimitrios Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad Al-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, et al. 2016. Semeval-2016 task 5: Aspect based sentiment analysis. In *International workshop on semantic evaluation*, pages 19–30. Maria Pontiki, Dimitrios Galanis, Harris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment analysis. In *Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015)*, pages 486–495. 
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In *Proceedings of the* 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27–35, Dublin, Ireland. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Lu Xu, Hao Li, Wei Lu, and Lidong Bing. 2020b. Position-aware tagging for aspect sentiment triplet extraction. *arXiv preprint arXiv:2010.02609*. Wei Shang, Chong Feng, Tianfu Zhang, and Da Xu. 2021. Guiding neural machine translation with retrieved translation template. In *2021 International* Joint Conference on Neural Networks (IJCNN), pages 1–7. IEEE. Duyu Tang, Bing Qin, and Ting Liu. 2015. Learning semantic representations of users and products for document level sentiment classification. In Proceedings of the 53rd annual meeting of the Association for Computational Linguistics and the 7th international joint conference on natural language processing (volume 1: long papers), pages 1014–1023. David Thulke, Nico Daheim, Christian Dugast, and Hermann Ney. 2021. Efficient retrieval augmented generation from unstructured knowledge for taskoriented dialog. *arXiv preprint arXiv:2102.04643*. Zhaopeng Tu, Yang Liu, Shuming Shi, and Tong Zhang. 2018. Learning to remember translation history with a continuous cache. Transactions of the Association for Computational Linguistics, 6:407–420. Shuohang Wang, Yichong Xu, Yuwei Fang, Yang Liu, Siqi Sun, Ruochen Xu, Chenguang Zhu, and Michael Zeng. 2022. Training data is more valuable than you think: A simple and effective method by retrieving from training data. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3170–3179. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2017. Coupled multi-layer attentions for co-extraction of aspect and opinion terms. In Proceedings of the AAAI conference on artificial intelligence, volume 31. Zhen Wu, Chengcan Ying, Fei Zhao, Zhifang Fan, Xinyu Dai, and Rui Xia. 2020. Grid tagging scheme for aspect-oriented fine-grained opinion extraction. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2576–2585, Online. Association for Computational Linguistics. Jitao Xu, Josep M Crego, and Jean Senellart. 2020a. Boosting neural machine translation with similar translations. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 1580–1590. Lu Xu, Yew Ken Chia, and Lidong Bing. 2021. Learning span-level interactions for aspect sentiment triplet extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4755–4766. ## A Statistics Of Dataset Hang Yan, Junqi Dai, Tuo Ji, Xipeng Qiu, and Zheng Zhang. 2021. A unified generative framework for aspect-based sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2416–2429. 
Samson Yu, Bai Jian, Tapas Nayak, Navonil Majumder, and Soujanya Poria. 2021. Aspect sentiment triplet extraction using reinforcement learning. In *Proceedings of the 30th ACM International Conference* on Information & Knowledge Management, pages 3603–3607. Chen Zhang, Qiuchi Li, Dawei Song, and Benyou Wang. 2020. A multi-task learning framework for opinion triplet extraction. In *EMNLP (Findings)*. Wenxuan Zhang, Yang Deng, Xin Li, Yifei Yuan, Lidong Bing, and Wai Lam. 2021a. Aspect sentiment quad prediction as paraphrase generation. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 9209– 9219. Wenxuan Zhang, Xin Li, Yang Deng, Lidong Bing, and Wai Lam. 2021b. Towards generative aspect-based sentiment analysis. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 504–510. Yichun Zhao, Kui Meng, Gongshen Liu, Jintao Du, and Huijia Zhu. 2022. A multi-task dual-tree network for aspect sentiment triplet extraction. In Proceedings of the 29th International Conference on Computational Linguistics, pages 7065–7074, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. In order to quantitatively compare our method to prior work, we conduct our experiments on two widely used ASTE datasets Da and Db, which are released by Peng et al. (2020) and Xu et al. (2020b) and originate from Semeval2014 (Pontiki et al., 2014), Semeval2015 (Pontiki et al., 2015), and Semeval2016 (Pontiki et al., 2016). The statistics are shown in Table 5. Each of them consists of three datasets: Restaurant14, Restaurant15, Restaurant16, and Laptop14. The first three datasets are from the restaurant domain, in which each sentence describes a customer's evaluation of restaurant service, environment, food, etc. The Laptop14 | Dataset | Res14 | Lap14 | Res15 | Res16 | | |---------------|-------------------|-----------------|----------------|-----------------|----------------| | s/pos/neu/neg | s/pos/neu/neg | s/pos/neu/neg | s/pos/neu/neg | | | | Train | 1300/1575/143/427 | 593/703/25/195 | 842/933/49/307 | 920/664/117/484 | | | Dev | 323/377/32/115 | 148/179/9/50 | 210 /225/10/81 | 228/207/16/114 | | | Da | Test | 496/675/45/142 | 318/291/25/139 | 320 /362/27/76 | 339/335/50/105 | | Train | 1266/1692/166/480 | 906/817/126/517 | 605/783/25/205 | 857/1015/50/329 | | | Dev | 310/404/54/119 | 219/169/36/141 | 148/185/11/53 | 210/252/11/76 | | | Db | Test | 492/773/66/155 | 328/364/63/116 | 322/317/25/143 | 326/407/29/78 | datasets in Da and Db contain customer evaluations related to electronic products. All the datasets contain initialized training set, development set, and test set. Since existing popular methods are implemented on either Da or Db, evaluating our model on two datasets can compare it with existing methods as comprehensively as possible and get more reliable experimental conclusions. During the pre-training for retrieval, we adopt two document-level datasets named Yelp (Tang et al., 2015) and Amazon (McAuley et al., 2015) as external data, which are processed and released by He et al. (2018). For each dataset, we sort all the data according to the length of the document and select the shortest 10, 000 pieces of data for pre-training of our retriever. Specifically, Yelp is from the restaurant domain, which is used to generate pseudo-labeled data for Restaurant14, Restaurant15, and Restaurant16. 
Amazon is from the electronics domain, which is used to generate external data for Laptop14. ## B Experimental Settings We adopt the BERT-base model from *huggingface* Transformer library 3for all experiments. We pretrain the relevance scores for 10 epochs with batch size 8 and learning rate 1e − 5. We jointly train the full model for 30 epochs with batch size 1, and a learning rate of 1e − 5. We also use an early stopping and a linear warmup for 10% of the training step during the joint learning. We adopt the Adam optimizer and accumulate gradients for each batch. We set the dropout rate, the maximum span width, the number of candidate aspect and opinion terms, the number L of retrieved triplet, the loss coefficients α to 0.5, 8, half of the sentence length, 5, and 5. In each iteration, we first extract candidate aspect terms and opinion terms and pair them into candidate aspect-opinion pairs. Then we retrieve relevant triplets for each pair and help them predict if the pair is valid and further determine their sentiment polarities. Finally, we update the parameters by gradient descent. The code is implemented with PyTorch 1.9.0 and transformers 4.1.1 and launched on an Ubuntu server with an NVidia Tesla V100 (32G). In addition, we will test our model with Mindspore, which is a new deeplearning framework4. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section named limitations at the end of the paper. ✓ A2. Did you discuss any potential risks of your work? Section named limitations at the end of the paper. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 6 ✓ B1. Did you cite the creators of artifacts you used? section 6, appendix A, appendix B ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? section 6.1 and appendix A ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? section 6.1 and appendix A B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? appendix A ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. appendix A ## C ✓ **Did You Run Computational Experiments?** Section 6, Appendix B C1. 
Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix B ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 6 and Appendix B ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 6 and Appendix B D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
yang-etal-2023-multi
Multi-Domain Dialogue State Tracking with Disentangled Domain-Slot Attention
https://aclanthology.org/2023.findings-acl.304
As the core of task-oriented dialogue systems, dialogue state tracking (DST) is designed to track the dialogue state through the conversation between users and systems. Multi-domain DST has been an important challenge in which the dialogue states across multiple domains need to be considered. In recent mainstream approaches, each domain and slot are aggregated and regarded as a single query that is fed into attention over the dialogue history to obtain domain-slot specific representations. In this work, we propose disentangled domain-slot attention for multi-domain dialogue state tracking. The proposed approach disentangles the domain-slot specific information extraction in a flexible and context-dependent manner by separating the query about domains and slots in the attention component. Through a series of experiments on the MultiWOZ 2.0 and MultiWOZ 2.4 datasets, we demonstrate that our proposed approach outperforms the standard multi-head attention with an aggregated domain-slot query.
# Multi-Domain Dialogue State Tracking With Disentangled Domain-Slot Attention

Longfei Yang1, Jiyi Li2, Sheng Li3, Takahiro Shinozaki1 — 1 Tokyo Institute of Technology, 2 University of Yamanashi, 3 National Institute of Information and Communications Technology. [email protected], [email protected], [email protected], [email protected]

## Abstract

As the core of task-oriented dialogue systems, dialogue state tracking (DST) is designed to track the dialogue state through the conversation between users and systems. Multi-domain DST has been an important challenge in which the dialogue states across multiple domains need to be considered. In recent mainstream approaches, each domain and slot are aggregated and regarded as a single query that is fed into attention over the dialogue history to obtain domain-slot specific representations. In this work, we propose disentangled domain-slot attention for multi-domain dialogue state tracking. The proposed approach disentangles the domain-slot specific information extraction in a flexible and context-dependent manner by separating the query about domains and slots in the attention component. Through a series of experiments on the MultiWOZ 2.0 and MultiWOZ 2.4 datasets, we demonstrate that our proposed approach outperforms the standard multi-head attention with an aggregated domain-slot query.

## 1 Introduction

Task-oriented dialogue systems are designed to assist users in accomplishing certain tasks. For example, by using dialogue-based automated customer service, users can query information and make reservations online. Multi-domain dialogue state tracking has been an important challenge introduced by Budzianowski et al. (2018), in which numerous mixed-domain conversations are involved. In this case, DST has to track the dialogue states at each turn through the conversation, which involves a huge space of combinations over the ontology of different domains, slots, and values. It is a challenging task since spoken language is informal, and ellipsis and cross-reference are barriers to handling the correlations among different domains and slots.

Several studies have explored a variety of approaches to handle the correlations among domains and slots. In recent mainstream approaches, each domain and slot are aggregated into a single vector regarded as a query. The query and the dialogue history are fed into attention to generate domain-slot specific representations (Wu et al., 2019). Then information interchange across different domains and slots is performed with these representations to model the correlations among different domains and slots (Hu et al., 2020; Wang and Lemon, 2013; Ye et al., 2021). However, these approaches either introduce too much human prior knowledge and only consider the correlations among domain and slot names, or overestimate these correlations (Yang et al., 2022). To tackle this problem, we propose a disentangled domain-slot attention (DDSA), which disentangles information extraction about domains and slots in a flexible and context-dependent manner. In detail, we disentangle the query about domains and slots in the domain-slot attention component. First, domain specific representations are obtained using the domain query and the dialogue history. Then the model utilizes these representations and the slot query to retrieve slot specific information (in this context, slot means the slot only) and generate domain-slot specific representations. Finally, state prediction is performed with these domain-slot specific representations.
We conduct experiments to verify our approach on the MultiWOZ 2.0 and MultiWOZ 2.4 datasets. The experimental results show that the proposed approach can effectively improve the performance of multi-domain dialogue state tracking. The contributions of this work can be summarized as follows. (1) We propose a disentangled domain-slot attention mechanism to handle the correlations among domains and slots, in which the process of domain-slot specific information extraction is disentangled in a flexible and context-dependent manner. (2) We demonstrate that the performance of DST benefits from our proposed approach, and we make a detailed empirical study which shows that our model performs better than the baseline models based on standard attention with an aggregated domain-slot query.

## 2 Related Works

Dialogue state tracking (DST) is the core of task-oriented dialogue systems. In the early years, DST relied heavily on hand-crafted semantic features to predict the dialogue states (Williams and Young, 2007; Thomson and Young, 2010; Wang and Lemon, 2013), which makes it hard to handle lexical and morphological variations in spoken language (Lee et al., 2019). Benefiting from the rapid development of deep learning methods, neural network-based DST models have been explored. Mrkšić et al. (2017) proposes a novel neural belief tracking (NBT) framework that learns n-gram representations of utterances. Inspired by it, many neural network models have been investigated (Nouri and Hosseini-Asl, 2018; Ren et al., 2018; Zhong et al., 2018; Hu et al., 2020; Ouyang et al., 2020; Wu et al., 2019) and achieve further improvements.

Pre-trained models have brought natural language processing to a new era in recent years. Many substantial works have shown that pre-trained models can learn universal language representations, which are beneficial for downstream tasks (Mikolov et al., 2013; Pennington et al., 2014; McCann et al., 2017; Sarzynska-Wawer et al., 2021; Devlin et al., 2019; Mittal et al., 2021). More recently, very deep pre-trained language models, such as Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) and Generative Pre-Training (GPT) (Radford et al., 2018), trained with an increasing number of self-supervised tasks, have been proposed to capture more knowledge from large-scale corpora and have shown their ability to produce promising results. In view of this, many studies on DST have explored building models on the basis of these pre-trained language models (Hosseini-Asl et al., 2020; Kim et al., 2020; Lee et al., 2019; Zhang et al., 2020; Chen et al., 2020; Chao and Lane, 2019; Ye et al., 2021; Heck et al., 2020; Lin et al., 2020).

Related to handling the correlations among domains and slots in multi-domain DST, several approaches have been investigated. In recent mainstream approaches, domain-slot specific representations are first obtained using an attention mechanism with an aggregated domain-slot query, and then the correlations are modeled with them. Balaraman and Magnini (2021) utilizes domain and slot information to extract both domain and slot specific representations and then combines such representations to predict the values. Chen et al. (2020) manually constructs a schema graph modeling the dependencies of different slots and introduces a graph attention matching network to mix the information from utterances and graphs to control the state updating. Hu et al.
(2020) introduces a matrix representing the similarity among different slots and then performs slot information sharing among similar slots. The above two approaches are name-based since they only consider the semantic dependencies of slot names to measure the correlation among different slots, which may result in overlooking the dependencies of some slots. More recently, Ye et al. (2021) proposes a data-driven approach to handle these correlations, in which slot self-attention is introduced. However, this approach may inevitably result in overestimating some correlations (Yang et al., 2022).

## 3 Dialogue State Tracking With Disentangled Domain-Slot Attention

Figure 1(a) presents the overview of the proposed model. It consists of a dialogue encoder, a domain, slot and value encoder, disentangled domain-slot attention (DDSA), and slot value matching. The context representations of the dialogue history, domains, slots and values are first obtained by feeding the dialogue history, domains, slots and values into the respective encoders. These representations are then passed to our proposed disentangled domain-slot attention, shown in detail in Figure 1(b), to obtain domain-slot specific representations. Finally, the corresponding values are chosen to predict the state values with these representations and slot value matching.

![2_image_0.png](2_image_0.png)

## 3.1 Encoding

We employ BERT as the encoder to generate semantic representations. $\mathrm{BERT}_{context}$, whose parameters are fine-tuned during training, is used for encoding the dialogue context. We define the dialogue context history $C_T = \{R_1, U_1, \ldots, R_T, U_T\}$ as the set of system responses $R$ and user utterances $U$ in $T$ turns of dialogue, where $R = \{R_t\}_{t=1}^{T}$ and $U = \{U_t\}_{t=1}^{T}$, $1 \leq t \leq T$. We define $E_T = \{E_1, \ldots, E_T\}$ as the dialogue states of the $T$ turns, and each $E_t$ is a set of slot-value pairs $\{(S_1, V_1), \ldots, (S_J, V_J)\}$ of $J$ slots. Although the dialogue history $C_t = \{R_t, U_t\}$ contains integrated information for the conversation until the $t$-th turn, the previous study (Ye et al., 2021) has indicated that it is helpful to combine it with a compact representation $E'_{t-1}$, which only includes the slots whose values are not none, as part of the input. In view of this, the context encoder accepts the dialogue history till turn $t$, which can be denoted as $X_t = \{C_t, E'_{t-1}\}$, as the input and generates context vector representations $H_t = \mathrm{BERT}_{context}(X_t)$. Another pre-trained model $\mathrm{BERT}_{dsv}$ is employed to encode the domains, slots, and candidate values, in which the parameters of $\mathrm{BERT}_{dsv}$ remain frozen. For those slots and values containing multiple tokens, the vector corresponding to the special token [CLS] is employed to represent them. For each domain $D_i$, slot $S_j$ and value $V_k$, $h_{d_i} = \mathrm{BERT}_{dsv}(D_i)$, $h_{s_j} = \mathrm{BERT}_{dsv}(S_j)$, $h_{v_k} = \mathrm{BERT}_{dsv}(V_k)$.

## 3.2 Disentangled Domain-Slot Attention

Figure 1(b) demonstrates the structure of our proposed disentangled domain-slot attention. The extraction with the query about domains and slots is disentangled into two stages. The domain specific representations are first obtained using the domain query and the dialogue context. The slot query is then employed to retrieve slot specific information based on the output of the previous stage. Finally, domain-slot specific context representations are obtained for the subsequent state prediction.

## 3.2.1 Domain Query

Domain specific representations are obtained using the hidden representations of the domains $\mathbf{h}_d$ and those of the dialogue context $\mathbf{H}_t$ (we omit the indices of domains and slots for simplification).
The process can be described as follows:

$$\mathbf{Q}_{d}=\mathbf{W}_{dq}^{n_d}\mathbf{h}_{d}+\mathbf{b}_{Q_{d}}\tag{1}$$
$$\mathbf{K}_{d}=\mathbf{W}_{K_{d}}^{n_d}\mathbf{H}_{t}+\mathbf{b}_{K_{d}}\tag{2}$$
$$\mathbf{V}_{d}=\mathbf{W}_{V_{d}}^{n_d}\mathbf{H}_{t}+\mathbf{b}_{V_{d}}\tag{3}$$
$$\boldsymbol{\alpha}_{d}^{n_d}=softmax\left(\frac{\mathbf{Q}_{d}\mathbf{K}_{d}^{\mathsf{T}}}{\sqrt{k_{dim}}},\ axis=domain\right)\tag{4}$$
$$\mathbf{h}_{d}^{n_d}=\boldsymbol{\alpha}_{d}^{n_d}\mathbf{V}_{d}\tag{5}$$

where $\mathbf{W}_{dq}$, $\mathbf{b}_{Q_d}$, $\mathbf{W}_{K_d}$, $\mathbf{b}_{K_d}$, $\mathbf{W}_{V_d}$, $\mathbf{b}_{V_d}$ are the parameters of the linear layers for projecting the query, key, and value respectively at the domain query stage. $k_{dim} = k_{model}/N_d$, in which $k_{model}$ is the hidden size of the model and $n_d \in N_d$ indexes the heads of the multi-head dot-product attention at this stage.

## 3.2.2 Slot Query

After the domain query stage, slot specific representations can be obtained using the output of the domain query stage and the hidden representations of slots $\mathbf{h}_s$. Note that here "slot" means the slot only rather than the concatenation or the average of the representations of domain-slot pairs. The process is shown as follows:

$$\mathbf{Q}_{s}=\mathbf{W}_{sq}^{n_s}\mathbf{h}_{s}+\mathbf{b}_{Q_s}\tag{6}$$
$$\mathbf{K}_{s}=\mathbf{W}_{K_{s}}^{n_s}\mathbf{h}_{d}^{n_d}+\mathbf{b}_{K_{s}}\tag{7}$$
$$\mathbf{V}_{s}=\mathbf{h}_{d}^{n_d}\tag{8}$$
$$\boldsymbol{\alpha}_{s}^{n_s}=softmax\left(\frac{\mathbf{Q}_{s}\mathbf{K}_{s}^{\mathsf{T}}}{\sqrt{k_{ddsa}}},\ axis=slot\right)\tag{9}$$
$$\mathbf{h}_{ds}^{n_s}=\boldsymbol{\alpha}_{s}^{n_s}\mathbf{V}_{s}\tag{10}$$
$$\mathbf{h}_{ds}=\mathbf{W}_{os}\,Concat(\mathbf{h}_{ds}^{1},\ldots,\mathbf{h}_{ds}^{N_{s}})\tag{11}$$

where $\mathbf{W}_{sq}$, $\mathbf{b}_{Q_s}$, $\mathbf{W}_{K_s}$, $\mathbf{b}_{K_s}$, $\mathbf{W}_{V_s}$, $\mathbf{b}_{V_s}$ are the parameters of the linear layers for projecting the query, key and value respectively at the slot query stage, and $\mathbf{W}_{os}$ is the parameter of the linear layer for aggregating the heads of the slot query. $k_{ddsa}$ is a hyperparameter indicating the hidden dimension in this component, and $n_s \in N_s$ indexes the heads at this stage.

Since the number of combinations of domains and slots is generally larger than the number of actual domain-slot pairs, a linear layer is employed to project the domain-slot specific representation $\mathbf{h}_{ds}$ to the representation of the actual size.

$$\mathbf{h}_{ds}=\mathbf{W}_{od}\,Concat(\mathbf{h}_{ds}^{1},\ldots,\mathbf{h}_{ds}^{N_{d}})\tag{12}$$
$$\mathbf{h}^{\prime}_{ds}=Linear(\mathbf{h}_{ds},\ axis=domain\times slot)\tag{13}$$

where $\mathbf{W}_{od}$ is the parameter of the linear layer for aggregating the heads of the domain query.

## 3.3 Slot Value Matching

A Euclidean distance-based value prediction is performed for each slot. First, the domain-slot specific vector is fed into a normalization layer. Then the distances between the domain-slot specific vector and each candidate value are measured. Finally, the nearest value is chosen to predict the state value.

$$\mathbf{r}_{t}^{DS_{m}}=LayerNorm(Linear(\mathbf{h}^{\prime}_{ds}))\tag{14}$$
$$p(V_{t}^{k}|X_{t},DS_{m})=\frac{\exp(-d(\mathbf{h}^{V_{k}},\mathbf{r}_{t}^{DS_{m}}))}{\sum_{V_{k}^{\prime}\in\nu_{k}}\exp(-d(\mathbf{h}^{V_{k}^{\prime}},\mathbf{r}_{t}^{DS_{m}}))}\tag{15}$$

where $d(\cdot)$ is the Euclidean distance function, and $\nu_{k}$ denotes the value space of the actual domain-slot pair $DS_m$. The model is trained to maximize the joint probability of all slots.
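To make the two-stage computation above concrete, the following is a minimal, single-head PyTorch sketch of the disentangled attention (Eqs. 1–13) followed by the distance-based value matching (Eqs. 14–15). Class names, tensor shapes and the single-head simplification are illustrative assumptions, not the authors' released implementation; multi-head projections and batching are omitted for brevity.

```python
import torch
import torch.nn as nn

class DisentangledDomainSlotAttention(nn.Module):
    """Single-head sketch of DDSA: a domain-query stage over the dialogue
    context, a slot-query stage over the domain-specific outputs, and a
    Euclidean-distance value matcher."""

    def __init__(self, hidden, n_domains, n_slots, n_pairs):
        super().__init__()
        self.q_d, self.k_d, self.v_d = (nn.Linear(hidden, hidden) for _ in range(3))
        self.q_s, self.k_s = nn.Linear(hidden, hidden), nn.Linear(hidden, hidden)
        # Eq. 13: project the full domain x slot grid down to the actual pairs.
        self.to_pairs = nn.Linear(n_domains * n_slots, n_pairs)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.LayerNorm(hidden))
        self.scale = hidden ** 0.5

    def forward(self, h_ctx, h_dom, h_slot, h_values):
        # h_ctx: (L, d) dialogue-history tokens H_t; h_dom: (D, d); h_slot: (S, d)
        # h_values: list with one (V_j, d) tensor of candidate-value embeddings per pair
        # Domain query stage (Eqs. 1-5): domains attend over the dialogue context.
        a_d = torch.softmax(self.q_d(h_dom) @ self.k_d(h_ctx).T / self.scale, -1)
        h_d = a_d @ self.v_d(h_ctx)                                   # (D, d)
        # Slot query stage (Eqs. 6-10): slots attend over domain-specific vectors.
        a_s = torch.softmax(self.q_s(h_slot) @ self.k_s(h_d).T / self.scale, -1)
        grid = a_s.unsqueeze(-1) * h_d.unsqueeze(0)                   # (S, D, d)
        h_ds = self.to_pairs(grid.reshape(-1, grid.size(-1)).T).T     # (P, d)
        r = self.head(h_ds)                                           # Eq. 14
        # Eq. 15: probability over each pair's value space from negative distances.
        return [torch.softmax(-torch.cdist(r[j:j + 1], v), -1) for j, v in enumerate(h_values)]
```

In the full model each stage has its own number of heads ($N_d$, $N_s$), and the resulting per-pair distributions feed the training objective described next.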
The loss function at each turn $t$ is denoted as the sum of the negative log-likelihood:

$${\mathcal{L}}_{t}=\sum_{m=1}^{M}-\log\big(p(V_{t}^{k}|X_{t},DS_{m})\big)\tag{16}$$

## 4 Experimental Settings

We conduct the experiments using the MultiWOZ 2.0 and MultiWOZ 2.4 datasets in this work. MultiWOZ 2.0 (Budzianowski et al., 2018) is one of the largest open-source human-human conversational datasets covering multiple domains. It contains over 10,000 dialogues, in which each dialogue averages 13.68 turns. MultiWOZ 2.4 is the latest refined version (Ye et al., 2022). It mainly fixes the annotation errors in the validation and test sets. To make a fair comparison with the models evaluated on these two datasets, we follow the pre-processing and evaluation procedure of several previous works (Wu et al., 2019; Lee et al., 2019; Wang et al., 2020; Ye et al., 2021) to keep consistent. We present the settings of the model in Appendix A.

## 5 Results And Discussions

## 5.1 Main Results

Joint goal accuracy (JGA) and slot accuracy (SA) are employed to evaluate the overall performance. The joint goal accuracy is a strict measurement comparing the predicted value of each slot with the ground truth at each dialogue turn, and the prediction is considered correct if and only if all the predicted values match the ground truth values without any error at that turn. The slot accuracy compares each value to the corresponding ground truth individually without seeing other turns. For the baselines, we use the results reported in the corresponding references.

Table 1 presents the results of the different models on the test sets of the MultiWOZ 2.0 and 2.4 datasets. As shown in it, overall, our proposed model achieves the best performance on these two datasets. Using the Wilcoxon signed-rank test, we find that the proposed method is statistically significantly better (p < 0.05) than the baselines. Compared to the previous SOTA models on the original MultiWOZ 2.0 dataset, SAVN, which utilizes slot attention with a concatenated domain-slot query to extract slot specific information and value normalization on the ontologies to varying degrees, and STAR, which uses slot self-attention with an aggregated domain-slot query to model the correlations among different slots, our model obtains a JGA of 54.70% and a SA of 97.49%, outperforming SAVN with a JGA of 54.52% and a SA of 97.42%, and STAR with a JGA of 54.53% and a SA of 97.38%. For the latest refined MultiWOZ 2.4 dataset, our proposed model improves the performance by a relatively larger margin compared to the previous SOTA STAR model, from a JGA of 73.62% to 75.58% and a SA of 98.87% to 98.94%.
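As a concrete reference for how the two metrics above are computed, the following is a small sketch in plain Python; the dictionary-based state format and slot names are illustrative assumptions rather than the benchmark's exact data structures.

```python
def joint_goal_and_slot_accuracy(predictions, golds):
    """predictions/golds: one dict per dialogue turn mapping each domain-slot
    name to its predicted / ground-truth value (illustrative format)."""
    joint_hits, slot_hits, slot_total = 0, 0, 0
    for pred, gold in zip(predictions, golds):
        # JGA: a turn counts only if every slot value matches the ground truth.
        joint_hits += all(pred.get(slot) == value for slot, value in gold.items())
        # SA: each slot is scored individually.
        slot_hits += sum(pred.get(slot) == value for slot, value in gold.items())
        slot_total += len(gold)
    return joint_hits / len(golds), slot_hits / slot_total

# One slot wrong out of two: the turn counts toward SA (0.5) but not JGA (0.0).
jga, sa = joint_goal_and_slot_accuracy(
    [{"hotel-area": "east", "hotel-stars": "4"}],
    [{"hotel-area": "east", "hotel-stars": "3"}])
```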
To have a better understanding, an error analysis, a discussion of the effects of different hyperparameter settings, and a case study are presented in Appendix B. These additional results also indicate the effectiveness of our approach.

Table 1: Joint goal accuracy (JGA) and slot accuracy (SA) of different models on the test sets of MultiWOZ 2.0 and MultiWOZ 2.4.

| Category | Model | JGA (%) MWZ2.0 | JGA (%) MWZ2.4 | SA (%) MWZ2.0 | SA (%) MWZ2.4 |
|---|---|---|---|---|---|
| Open vocabulary | TRADE (Wu et al., 2019) | 48.93 | 54.97 | 96.92 | 97.58 |
| | SOM (Kim et al., 2020) | 51.72 | 66.78 | - | 98.38 |
| | TripPy (Heck et al., 2020) | 53.11 | 59.62 | 97.25 | 97.94 |
| | SimpleTOD (Hosseini-Asl et al., 2020) | - | 66.78 | - | - |
| Ontology-based | SUMBT (Lee et al., 2019) | 46.65 | 61.86 | 96.44 | 97.90 |
| | DS-DST (Zhang et al., 2020) | 52.24 | - | - | - |
| | DS-Picklist (Zhang et al., 2020) | 54.39 | - | - | - |
| | SAVN (Wang et al., 2020) | 54.52 | 60.55 | 97.42 | 98.38 |
| | SST (Chen et al., 2020) | 51.17 | - | - | - |
| | STAR (Ye et al., 2021) | 54.53 | 73.62 | 97.38 | 98.87 |
| | Our model with DDSA | 54.70 | 75.58 | 97.49 | 98.94 |
| | Our model w/o DDSA | 50.89 | 70.52 | 97.03 | 98.61 |

## 5.2 Ablation Study

A simple ablation study is performed to verify the effectiveness of our proposed disentangled domain-slot attention. As we can see in Table 1, the performance on the two datasets drops seriously when removing the proposed DDSA, which verifies the effectiveness of our proposed approach. In the model w/o DDSA, the domain specific and the slot specific information are extracted by feeding the dialogue context and the domains and slots into traditional domain and slot attention respectively; they are then concatenated and sent to the slot value matching component to perform state prediction.

## 6 Conclusion

In this work, we propose a model based on disentangled domain-slot attention for multi-domain dialogue state tracking to handle the correlations among different domains and slots. Unlike the conventional approach in recent mainstream models, we disentangle the query about domains and slots in a flexible and context-dependent manner. The experimental results on the MultiWOZ 2.0 and MultiWOZ 2.4 datasets show that, compared to models based on conventional slot attention using aggregated domain-slot pairs, our approach effectively improves the performance of multi-domain dialogue state tracking. In future work, we will investigate applying the proposed approach to generative models and generalizing it to more complicated scenarios.

## Acknowledgement

This work was supported by JSPS KAKENHI Grant Number JP22K12069 and partially supported by JSPS KAKENHI Grant Numbers 23K11227 and 23H03402.

## Limitations

This paper shows the effectiveness of our proposed disentangled domain-slot attention mechanism in multi-domain dialogue state tracking. The limitation of this paper is that this work mainly focuses on ontology-based DST, which needs a list of predefined candidate values in advance. The condition may be different in the case of generative DST, since the entire successive information involved in language modeling may be important for language generation. Therefore, how to tackle these problems in a generative manner needs further investigation, which we intend to take up in future work.

## References

Vevake Balaraman and Bernardo Magnini. 2021. Domain-aware dialogue state tracker for multi-domain dialogue systems.
IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:866– 873. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ra- madan, and Milica Gašic. 2018. ´ MultiWOZ - a largescale multi-domain Wizard-of-Oz dataset for taskoriented dialogue modelling. In *Proceedings of the* 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026, Brussels, Belgium. Association for Computational Linguistics. Guan-Lin Chao and Ian Lane. 2019. BERT-DST: Scalable End-to-End Dialogue State Tracking with Bidirectional Encoder Representations from Transformer. In *Proc. Interspeech 2019*, pages 1468–1472. Lu Chen, Boer Lv, Chi Wang, Su Zhu, Bowen Tan, and Kai Yu. 2020. Schema-guided multi-domain dialogue state tracking with graph attention neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7521–7528. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Michael Heck, Carel van Niekerk, Nurul Lubis, Christian Geishauser, Hsien-Chin Lin, Marco Moresi, and Milica Gasic. 2020. Trippy: A triple copy strategy for value independent neural dialog state tracking. In *Proceedings of the 21th Annual Meeting of the* Special Interest Group on Discourse and Dialogue, pages 35–44. Association for Computational Linguistics. Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A Simple Language Model for Task-Oriented Dialogue. In Advances in Neural Information Processing Systems, volume 33, pages 20179–20191. Curran Associates, Inc. Jiaying Hu, Yan Yang, Chencai Chen, Liang He, and Zhou Yu. 2020. SAS: Dialogue state tracking via slot attention and slot information sharing. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 6366–6375, Online. Association for Computational Linguistics. Sungdong Kim, Sohee Yang, Gyuwan Kim, and SangWoo Lee. 2020. Efficient dialogue state tracking by selectively overwriting memory. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 567–582, Online. Association for Computational Linguistics. Hwaran Lee, Jinsik Lee, and Tae-Yoon Kim. 2019. SUMBT: Slot-utterance matching for universal and scalable belief tracking. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics, pages 5478–5483, Florence, Italy. Association for Computational Linguistics. Zhaojiang Lin, Andrea Madotto, Genta Indra Winata, and Pascale Fung. 2020. MinTL: Minimalist transfer learning for task-oriented dialogue systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3391–3405, Online. Association for Computational Linguistics. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In *Proceedings of the 31st* International Conference on Neural Information Processing Systems, NIPS'17, page 6297–6308, Red Hook, NY, USA. Curran Associates Inc. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. 
Distributed representations of words and phrases and their compositionality. In *Advances in neural information processing systems*, pages 3111–3119. Sarthak Mittal, Sharath Chandra Raparthy, Irina Rish, Yoshua Bengio, and Guillaume Lajoie. 2021. Compositional attention: Disentangling search and retrieval. Nikola Mrkšic, Diarmuid Ó Séaghdha, Tsung-Hsien ´ Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics*, pages 1777– 1788, Vancouver, Canada. Association for Computational Linguistics. Elnaz Nouri and Ehsan Hosseini-Asl. 2018. Toward scalable neural dialogue state tracking. In NeurIPS 2018, 2nd Conversational AI workshop. Yawen Ouyang, Moxin Chen, Xinyu Dai, Yinggong Zhao, Shujian Huang, and Jiajun Chen. 2020. Dialogue state tracking with explicit slot connection modeling. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 34–40, Online. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Liliang Ren, Kaige Xie, Lu Chen, and Kai Yu. 2018. Towards universal dialogue state tracking. In *Proceedings of the 2018 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 2780–2786, Brussels, Belgium. Association for Computational Linguistics. Justyna Sarzynska-Wawer, Aleksander Wawer, Aleksandra Pawlak, Julia Szymanowska, Izabela Stefaniak, Michal Jarkiewicz, and Lukasz Okruszek. 2021. Detecting formal thought disorder by deep contextualized word representations. *Psychiatry Research*, 304:114135. Blaise Thomson and Steve Young. 2010. Bayesian update of dialogue state: A pomdp framework for spoken dialogue systems. *Computer Speech & Language*, 24(4):562–588. Yexiang Wang, Yi Guo, and Siqi Zhu. 2020. Slot attention with value normalization for multi-domain dialogue state tracking. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3019–3028, Online. Association for Computational Linguistics. Zhuoran Wang and Oliver Lemon. 2013. A simple and generic belief tracking mechanism for the dialog state tracking challenge: On the believability of observed information. In *Proceedings of the SIGDIAL 2013* Conference, pages 423–432, Metz, France. Association for Computational Linguistics. Jason D. Williams and Steve Young. 2007. Partially observable markov decision processes for spoken dialog systems. *Comput. Speech Lang.*, 21(2):393–422. Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics, pages 808–819, Florence, Italy. Association for Computational Linguistics. Longfei Yang, Jiyi Li, Sheng Li, and Takahiro Shinozaki. 2022. Multi-domain dialogue state tracking with top-k slot self attention. 
In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 231–236, Edinburgh, UK. Association for Computational Linguistics.

Fanghua Ye, Jarana Manotumruksa, and Emine Yilmaz. 2022. MultiWOZ 2.4: A multi-domain task-oriented dialogue dataset with essential annotation corrections to improve state tracking evaluation. In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 351–360, Edinburgh, UK. Association for Computational Linguistics.

Fanghua Ye, Jarana Manotumruksa, Qiang Zhang, Shenghui Li, and Emine Yilmaz. 2021. Slot self-attentive dialogue state tracking. In Proceedings of the Web Conference 2021, pages 1598–1608.

Jianguo Zhang, Kazuma Hashimoto, Chien-Sheng Wu, Yao Wang, Philip Yu, Richard Socher, and Caiming Xiong. 2020. Find or classify? dual strategy for slot-value predictions on multi-domain dialog state tracking. In Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics, pages 154–167, Barcelona, Spain (Online). Association for Computational Linguistics.

Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive encoder for dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 1458–1467, Melbourne, Australia. Association for Computational Linguistics.

## A Experimental Settings

The dialogue context encoder $\mathrm{BERT}_{context}$ in this work is a pre-trained BERT-base-uncased model, which has 12 layers with 768 hidden units and 12 self-attention heads. We also employ another BERT-base-uncased model as the domain, slot and value encoder $\mathrm{BERT}_{dsv}$. For the proposed disentangled domain-slot attention, the number of heads for domains $N_d$ and that for slots $N_s$ are hyperparameters and are investigated in the experiments. The dimension $k_{ddsa}$ is set to 768. The Adam optimizer is adopted with a batch size of 8, and the model is trained with a learning rate of 4e-5 for the encoder and 1e-4 for the other parts. The hyperparameters are selected from the best-performing model on the validation set. We use dropout with a probability of 0.1 on the dialogue history during training. The ground-truth states of previous turns are included in the input during training, while the previously predicted states are used as part of the input during inference.

## B Supplementary Results

## B.1 Effects Of Different Hyperparameter Settings

To investigate the effects of different hyperparameter settings, Table 2 presents the results of using different numbers of heads $N_d$ for the domain query and $N_s$ for the slot query in the DDSA component of our model. It can be found that the model achieves the best performance when the number of heads for domains is $N_d = 16$ and that for slots is $N_s = 32$. These hyperparameters are selected by tuning on the validation set.

## B.2 Error Analysis

An error analysis of each slot for the previous SOTA model STAR and our model on MultiWOZ 2.4 is shown in Figure 2, where lower is better. It can be observed that the error rates of several name- and area-related slots improve significantly. Specifically, the performance on restaurant-name, hotel-type, hotel-area, attraction-area and hotel-bookstay improves by a relatively large margin.

## B.3 Case Study

A case study below demonstrates some cases in the MultiWOZ 2.4 dataset.
Table 3 presents three dialogue episodes and the dialogue states predicted by the previous SOTA STAR and our proposed model.

![7_image_0.png](7_image_0.png)

Table 2: The results of our model with different numbers of heads $N_d$ for the domain query and $N_s$ for the slot query on the MultiWOZ 2.4 dataset.

Table 3: The dialogue state predictions for three dialogue episodes in the MultiWOZ 2.4 dataset. We omit some slots and values for simplification.

| Dialogue context | STAR | DDSA |
|---|---|---|
| SYS: I recommend downing college. USR: How far is it from the all saints church? | attraction-name=all saints church | attraction-name=downing college |
| SYS: I completed your booking. Your reference number is 35w3xedl. Is there anything else I could do to help? USR: Yes, I also need to verify that this hotel is in the east area of the town. | hotel-area=none | hotel-area=east |
| SYS: I have over 20 different options for you, was there a certain area or price range you would like me to find for you? USR: Let's see what is available cheap, same area as the restaurant makes most sense but I am open to any area. | hotel-area=south | hotel-area=do not care |

It can be found that, for the first example, the system recommends "downing college" in response to the user's request for an attraction. Although STAR captures the adjective phrase, the correct value for the slot attraction-name is not the referenced object "all saints church". Since a full slot self-attention is applied to the concatenated domain-slot query specific information, the mistake may be introduced from other domain-slot specific representations. In the second case, the user would like to confirm that the asked hotel is in the east area, while STAR fails to get the point. In the third case, the user is open to any area, but STAR still overestimates the correlation between *hotel* and the previously mentioned *restaurant*. Our model successfully predicts the dialogue states in all three cases.

## ACL 2023 Responsible NLP Checklist

A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section Limitation A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3; Cited In References ✓ B1. Did you cite the creators of artifacts you used? Section 3; cited in References ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 3; cited in References ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 3; cited in References ✓ B4.
Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section 3; cited in References ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3; cited in References ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3 states that we take the step as same in other works for consistency. ## C ✗ **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No response. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No response. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No response. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? No response. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
feng-etal-2023-improved
Improved Visual Story Generation with Adaptive Context Modeling
https://aclanthology.org/2023.findings-acl.305
Diffusion models developed on top of powerful text-to-image generation models like Stable Diffusion achieve remarkable success in visual story generation. However, the best-performing approach considers historically generated results as flattened memory cells, ignoring the fact that not all preceding images contribute equally to the generation of the characters and scenes at the current stage. To address this, we present a simple method that improves the leading system with adaptive context modeling, which is not only incorporated in the encoder but also adopted as additional guidance in the sampling stage to boost the global consistency of the generated story. We evaluate our model on PororoSV and FlintstonesSV datasets and show that our approach achieves state-of-the-art FID scores on both story visualization and continuation scenarios. We conduct detailed model analysis and show that our model excels at generating semantically consistent images for stories.
# Improved Visual Story Generation With Adaptive Context Modeling

Zhangyin Feng1, Yuchen Ren2, Xinmiao Yu1, Xiaocheng Feng1,3, Duyu Tang, Shuming Shi, Bing Qin1,3 — 1 Harbin Institute of Technology, 2 Renmin University of China, 3 Peng Cheng Laboratory. {zyfeng, xmyu, xcfeng, qinb}@ir.hit.edu.cn, [email protected]

## Abstract

Diffusion models developed on top of powerful text-to-image generation models like Stable Diffusion achieve remarkable success in visual story generation. However, the best-performing approach considers historically generated results as flattened memory cells, ignoring the fact that not all preceding images contribute equally to the generation of the characters and scenes at the current stage. To address this, we present a simple method that improves the leading system with adaptive context modeling, which is not only incorporated in the encoder but also adopted as additional guidance in the sampling stage to boost the global consistency of the generated story. We evaluate our model on PororoSV and FlintstonesSV datasets and show that our approach achieves state-of-the-art FID scores on both story visualization and continuation scenarios. We conduct detailed model analysis and show that our model excels at generating semantically consistent images for stories.

## 1 Introduction

Diffusion models trained on broad text-image data (Rombach et al., 2022; Bao et al., 2022; Feng et al., 2022; Ramesh et al., 2022; Saharia et al., 2022; Balaji et al., 2022; Nichol et al., 2021) have achieved remarkable success in text-to-image generation and have shown strong abilities to synthesize photorealistic images of high resolution and great semantic consistency to text prompts. Such a huge success drives the extension of modern diffusion text-to-image models into more scenarios like visual story generation, which is to generate a series of images for a story of multiple sentences.

A recent work, AR-LDM (Pan et al., 2022), which is built upon open-sourced Stable Diffusion, achieves the state-of-the-art FID on the benchmark datasets for visual story generation. AR-LDM encodes previous text-image context as a sequence of additional conditions, which is then attended by the UNet decoder for image generation.

![0_image_0.png](0_image_0.png)

Figure 1: A motivating example of a story with five sentences. Blue and purple lines indicate the dependencies between images.

Despite its remarkable success, one limitation is that previous text-image pairs of the same story are all flattened as conditioning memories. This is different from the fact that not all the scenes/characters of sentences in the same story are closely related. Take Figure 1 as an example. The scene of the fourth sentence is not related to either the second or the third sentence. On the contrary, the generation of the fifth image should depend more on the second and third images than others. From this example, we can see that the dependency between images could be largely measured by the semantic relations between sentences.

In this work, we present a simple approach, which we name **ACM-VSG** (Adaptive Context Modeling for Visual Story Generation), that selectively adopts historical text-image data from the same story in the generation of an image. Specifically, we freeze the text and image representations produced by off-the-shelf encoders, and adaptively compute conditioning vectors of context by considering the semantic relation between the current sentence and all the history.
Such resulting conditioning vectors will be queried by UNet in a traditional way. Furthermore, based on the consideration that images should have similar scenes and characters if their corresponding sentences are similar, we further add context-aware guidance like the use of classifier guidance or CLIP guidance (Nichol et al., 2021) in standard text-to-image generation. To validate the effectiveness of our approach, we evaluate our model on story visualization and continuation tasks. Experimental results on PororoSV and FlintstonesSV datasets show that both adaptive encoder and guidance improve the quality of the generated images as well as the global consistency of the visual story. The contributions of this work are as follows: - We present a diffusion model that adaptively uses context information in the encoder and sampling guidance for visual story generation. - Our approach achieves state-of-the-art results on benchmark datasets for both story visualization and continuation tasks. - We show that our model excels at generating semantically consistent images for stories. ## 2 Related Work 2.1 Text-To-Image Generation We group modern text-to-image generation approaches into three categories. The **first** category is generative adversarial network (Goodfellow et al., 2014; Reed et al., 2016; Zhang et al., 2017). They jointly learn a generator and a discriminator, where the generator is trained to generate images to fool the discriminator and the discriminator is trained to distinguish between real and (generated) fake images. The **second** category is encoder-decoder plus discrete variational autoencoder (dVAE). Methods are developed based on a well-trained discrete variational autoencoder (Van Den Oord et al., 2017), which is capable of mapping an image to discrete tokens and reconstructing an image from discrete tokens. Thus, the task of text-to-image generation could be viewed as a special translation task that converts natural language tokens to image tokens. Autoregressive models (Ramesh et al., 2021a; Ding et al., 2021; Gafni et al., 2022; Yu et al., 2022) typically use Transformer (Vaswani et al., 2017) to generate a visual token conditioned on the previously generated tokens, resulting in high latency in the inference stage. Muse (Chang et al., 2023) is a non-autoregressive model that tremendously speeds up the inference stage by generating image tokens in parallel. The **third** category is diffusion models - image generation is considered as an iterative refinement process, where two ends of the spectrum are the Gaussian noise and the real image, respectively. Some studies adopt a variational autoencoder to compress an image to the latent space and learn the diffusion process in the latent space of images (Rombach et al., 2022; Bao et al., 2022; Feng et al., 2022). Some works (Ramesh et al., 2022; Saharia et al., 2022; Balaji et al., 2022) directly learn the diffusion model over pixels and typically include cascaded up-sampling models (e.g., from 64×64 to 256×256 and from 256×256 to 1024×1024) to produce high-resolution images. ## 2.2 Visual Story Generation Visual story generation includes two settings: story visualization and story continuation. Story visualization was firstly introduced by Li et al. (2019), who proposes the StoryGAN model for sequential text-to-image generation. Based on the GAN network, they proposed to combine image and story discriminators for adversarial learning. 
To improve the global consistency across dynamic scenes and characters in the story, Zeng et al. (2019) jointly considers story-to-image-sequence, sentence-to-image, and word-to-image-patch alignment by proposing an aligned sentence encoder and an attentional word encoder. Li et al. (2020) includes dilated convolution in the discriminators to expand the receptive field of the convolution kernel in the feature maps, and a weighted activation degree to provide a robust evaluation between images and stories. To improve the visual quality, coherence and relevance of generated images, Maharana et al. (2021a) extends the GAN structure by including a dual learning framework that utilizes video captioning to reinforce the semantic alignment between the story and generated images, and a copy-transform mechanism for sequentially consistent story visualization. Maharana and Bansal (2021a) improves the generation quality by incorporating constituency parse trees, commonsense knowledge, and visual structure via bounding boxes and dense captioning.

Unlike the story visualization task, whose input only contains the text story, the story continuation task also includes the first image as input. Maharana et al. (2022) introduces story continuation and modifies the pre-trained text-to-image model DALL-E (Ramesh et al., 2021b) by adding a cross attention module for story continuation. Pan et al. (2022) employs a history-aware encoding to incorporate previously generated text-image history into the diffusion model for visual story generation.

## 3 Model

We introduce our approach in this section. We first present the model architecture of our approach (§3.1), and then describe three important components: the adaptive encoder (§3.2), the conditional diffusion model (§3.3) and the adaptive guidance (§3.4).

## 3.1 Model Architecture

An overview of the approach is depicted in Figure 2. It includes an adaptive encoder, a conditional diffusion model, and an adaptive guidance. Based on the current text prompt and the historical text-image context, the adaptive encoder represents them as conditional vectors. Then the conditional diffusion model transforms these vectors into the corresponding image. During the diffusion sampling process, the adaptive guidance component further guides each diffusion step by comparing it to similar preceding images in the current story to enhance the global consistency of the generated images.

![2_image_0.png](2_image_0.png)

## 3.2 Adaptive Encoder

Given a story $S$ which consists of a sequence of text prompts $S = \{s_1, s_2, \ldots, s_L\}$, story visualization aims to generate a sequence of images $X = \{x_1, x_2, \ldots, x_L\}$. Each image corresponds to a text prompt. Different from text-to-image generation, which only generates one isolated image for the text prompt, story visualization requires global consistency between the generated images. A natural idea is to combine the historical text-image context when generating the current image:

$$P(\mathbf{X}|\mathbf{S})=\prod_{i=1}^{L}P(x_{i}|\hat{x}_{<i},\mathbf{S})=\prod_{i=1}^{L}P(x_{i}|\tau_{\theta}(\hat{x}_{<i},s_{\leq i}))$$

where $\tau_{\theta}$ denotes the history-aware conditioning encoder. As shown in Figure 1, we find that some images in the history of the same story are similar to the current image, and some images are even completely irrelevant.

![3_image_0.png](3_image_0.png)

![3_image_1.png](3_image_1.png)
The purpose of the adaptive encoder is to automatically find the relevant historical text-image pairs, and then encode them into the condition vectors. As shown in Figure 3, the adaptive encoder consists of a CLIP text encoder, a BLIP text-image encoder and a cross attention module. Both CLIP (Radford et al., 2021) and BLIP (Li et al., 2022a) are multimodal pre-trained models. The difference is that CLIP encodes text and images separately, while BLIP can jointly represent a text-image pair. We use CLIP to get the current text prompt vector $v_i$, and BLIP to get the historical vectors $\{h_0, \ldots, h_{i-1}\}$. Then a cross attention module is used to filter history information, from which we obtain the updated vectors $\{\hat{h}_0, \ldots, \hat{h}_{i-1}\}$. In the cross attention module, the text vector $v_i$ is the query, and each historical vector $h_{<i}$ is the key and value. Finally, we concatenate the current text vector and history vectors to get the final condition vector $c = [v_i; \hat{h}_0; \ldots; \hat{h}_{i-1}]$.

## 3.3 Conditional Diffusion Model

Denoising diffusion probabilistic models are a class of score-based generative models, which have recently gained traction in the field of text-to-image generation (Ho et al., 2020). A diffusion model typically contains forward and reverse processes. Given a data point $x_0$ sampled from a real-world data distribution $q(x)$, the forward process is implemented as a predefined Markov chain that gradually corrupts $x_0$ into an isotropic Gaussian distribution $x_T \sim \mathcal{N}(0, I)$ in $T$ steps:

$$x_{t}=\sqrt{\alpha_{t}}\,x_{t-1}+\sqrt{1-\alpha_{t}}\,\epsilon_{t},\quad t\in\{1,\ldots,T\}$$

where $\epsilon_t \sim \mathcal{N}(0, I)$, and $\{\alpha_t \in (0,1)\}_{t=1}^{T}$ is a predefined noise variance schedule. The reverse process aims to learn a denoising network $\epsilon_{\theta}(\cdot)$ to reconstruct the data distribution $x_0$ from the Gaussian noise $x_T \sim \mathcal{N}(0, I)$. We can express an arbitrary sample $x_t$ from the initial data $x_0$:

$$x_{t}=\sqrt{\bar{\alpha}_{t}}\,x_{0}+\sqrt{1-\bar{\alpha}_{t}}\,\epsilon,$$

where $\bar{\alpha}_t = \prod_{i=1}^{t}\alpha_i$ and $\epsilon \sim \mathcal{N}(0, I)$. The denoising network $\epsilon_{\theta}(\cdot)$ is trained to recover $x_0$ by predicting the noise $\epsilon_{\theta}(x_t, t)$. The corresponding learning objective can be formalized as a simple mean-squared error loss between the true noise and the predicted noise:

$$\mathcal{L}=\mathbb{E}_{x_{0},\epsilon,t,c}\left[\|\epsilon-\epsilon_{\theta}(x_{t},t,c)\|_{2}^{2}\right],$$

where $t$ is uniformly sampled from $\{1,\ldots,T\}$, $c$ is the condition and $\epsilon \sim \mathcal{N}(0, I)$. The denoising network $\epsilon_{\theta}(\cdot)$ is typically implemented by a U-Net (Ho et al., 2020). To make the diffusion process conditional on the input, the condition $c$ is fed into $\epsilon_{\theta}(\cdot)$ via a cross-attention layer implementing $\mathrm{Attention}(Q,K,V) = \mathrm{softmax}\big(\frac{QK^{\mathsf{T}}}{\sqrt{d}}\big)\cdot V$, where the intermediate representations of the U-Net act as the query $Q$, and the condition embeddings $c$ act as the key $K$ and value $V$.

Classifier-free guidance (Ho and Salimans, 2022) is a widely used technique to improve sample quality while reducing diversity in conditional diffusion models, which jointly trains a single diffusion model on conditional and unconditional objectives via randomly dropping $c$ during training (e.g. with 10% probability). During sampling, the output of the model is extrapolated further in the direction of $\epsilon_{\theta}(x_t|c)$ and away from $\epsilon_{\theta}(x_t|\varnothing)$ as follows:

$$\hat{\epsilon}_{\theta}(x_{t}|c)=\epsilon_{\theta}(x_{t}|\varnothing)+\gamma\cdot(\epsilon_{\theta}(x_{t}|c)-\epsilon_{\theta}(x_{t}|\varnothing))$$

where $\gamma \geq 1$ is the guidance scale.
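Before turning to the guidance mechanism, a minimal sketch of the adaptive encoder of Section 3.2, which produces the condition $c$ consumed by the cross-attention layers just described, is given below. Re-weighting each history vector by its attention weight is one plausible reading of the "filtering" cross-attention above; module names, tensor shapes, and the single-head formulation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdaptiveContextEncoder(nn.Module):
    """Sketch: the current CLIP sentence vector queries the frozen BLIP history
    vectors, and the re-weighted history is concatenated into the condition c."""

    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.scale = dim ** 0.5

    def forward(self, v_text, h_history):
        # v_text:    (B, 1, dim)    CLIP embedding of the current sentence s_i
        # h_history: (B, i-1, dim)  frozen BLIP embeddings of earlier text-image pairs
        if h_history.size(1) == 0:            # first frame of a story: no history yet
            return v_text
        attn = torch.softmax(
            self.q(v_text) @ self.k(h_history).transpose(1, 2) / self.scale, dim=-1
        )                                      # (B, 1, i-1): relevance of each history item
        h_hat = attn.transpose(1, 2) * self.v(h_history)   # (B, i-1, dim) filtered history
        return torch.cat([v_text, h_hat], dim=1)           # condition c = [v_i; h_hat_<i]
```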
## 3.4 Adaptive Guidance

Previous work (Dhariwal and Nichol, 2021; Nichol et al., 2021; Li et al., 2022b) in text-to-image generation has explored utilizing a classifier or a CLIP (Radford et al., 2021) model to improve a diffusion generator. A CLIP model consists of two separate pieces: an image encoder and a caption encoder. The model optimizes a contrastive cross-entropy loss that encourages a high dot-product if the image is paired with the given caption, or a low dot-product if the image and caption correspond to different parts of the training data. The denoising diffusion process can be perturbed by the gradient of the dot product of the image and caption embeddings.

One of the primary challenges of visual story generation is to maintain consistent background and character appearances throughout the story. To achieve this goal, during the diffusion sampling stage, we propose an adaptive guidance, which explicitly requires that images generated later be consistent with the preceding generated images. Considering that images whose corresponding sentences are similar should have similar scenes and characters, when generating the image $x_i$ in the story, we first use the CLIP text encoder to calculate a similarity score between each historical text in $\{s_1, \ldots, s_{i-1}\}$ and the current text $s_i$. After that, we select the text-image pair with the highest similarity score. When the similarity score exceeds a threshold, we believe that the selected image and the image to be generated have high similarity, and we can use this image to guide the sampling process of the diffusion model. When the similarity score is lower than the threshold, we consider that no image in the history is similar to the image to be generated, and do not add sampling guidance.

The previous CLIP-guided model (Nichol et al., 2021) needs to train an additional noisy CLIP model, which is costly in time and computation, and it is difficult to classify noised images. Following UPainting (Li et al., 2022b), we use a normal CLIP for guidance, and modify the CLIP inputs as follows:

$$\hat{x}_{0}=\frac{1}{\sqrt{\bar{\alpha}_{t}}}\left(x_{t}-\sqrt{1-\bar{\alpha}_{t}}\,\epsilon_{t}\right)$$
$$x_{in}=\sqrt{1-\bar{\alpha}_{t}}\,\hat{x}_{0}+(1-\sqrt{1-\bar{\alpha}_{t}})\,x_{t}$$

The denoising diffusion process can then be formulated as follows:

$$\hat{\epsilon}_{\theta}^{\prime}(x_{t}|c)=\epsilon_{\theta}(x_{t}|\varnothing)+\gamma\cdot(\epsilon_{\theta}(x_{t}|c)-\epsilon_{\theta}(x_{t}|\varnothing))-g\sqrt{1-\bar{\alpha}_{t}}\,\nabla_{x_{t}}\big(f(x_{in})\cdot f(x_{h})\big)$$

where $\gamma \geq 1$ is the classifier-free guidance weight, $g \geq 0$ is the adaptive guidance weight, $f(\cdot)$ is the CLIP image encoder and $x_h$ is the most similar image to the current image $x_t$.
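A minimal sketch of this guided noise prediction is shown below, combining the classifier-free guidance term with the CLIP-similarity gradient above. The `eps_model` and `clip_image_encoder` callables, argument names, and default weights (the common Stable Diffusion guidance scale of 7.5, plus the $g$ and similarity-threshold values quoted in Section 4.2) are assumptions for illustration rather than the authors' code, and the surrounding scheduler loop is omitted.

```python
import torch

def guided_noise_prediction(eps_model, clip_image_encoder, x_t, t, cond, uncond,
                            alpha_bar_t, x_prev, text_sim,
                            gamma=7.5, g=0.15, sim_threshold=0.65):
    """One denoising step's noise estimate: classifier-free guidance plus the
    adaptive CLIP guidance of Sec. 3.4. `eps_model(x, t, condition)` predicts
    noise; `clip_image_encoder(images)` returns CLIP embeddings (assumed APIs)."""
    # Classifier-free guidance term.
    eps_uncond = eps_model(x_t, t, uncond)
    eps_cond = eps_model(x_t, t, cond)
    eps = eps_uncond + gamma * (eps_cond - eps_uncond)

    # Skip the extra guidance when no sufficiently similar preceding frame exists.
    if x_prev is None or text_sim < sim_threshold:
        return eps

    sqrt_one_minus = (1.0 - alpha_bar_t) ** 0.5
    with torch.enable_grad():
        x_grad = x_t.detach().requires_grad_(True)
        # Estimate the clean image and blend it with x_t as the CLIP input x_in.
        x0_hat = (x_grad - sqrt_one_minus * eps.detach()) / (alpha_bar_t ** 0.5)
        x_in = sqrt_one_minus * x0_hat + (1.0 - sqrt_one_minus) * x_grad
        sim = (clip_image_encoder(x_in) * clip_image_encoder(x_prev)).sum()
        grad = torch.autograd.grad(sim, x_grad)[0]
    # Push x_t towards higher CLIP similarity with the most similar earlier frame.
    return eps - g * sqrt_one_minus * grad
```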
## 4 Experiments

## 4.1 Datasets And Metrics

We carried out experiments on both the story visualization and story continuation tasks. Given a sequence of sentences forming a narrative, story visualization is the task of generating a corresponding sequence of images. Story continuation, which additionally includes an initial ground-truth image as input, is a variant of story visualization. We use two popular datasets, PororoSV (Li et al., 2019) and FlintstonesSV (Gupta et al., 2018), to evaluate our model. We give the statistics of the two datasets in Table 1 and show the main characters in Figure 4 and Figure 5 to help the reader understand our examples.

Table 1: Statistics for PororoSV and FlintstonesSV datasets.

| | Train | Valid | Test |
|---|---|---|---|
| PororoSV | 10,191 | 2,334 | 2,208 |
| FlintstonesSV | 20,132 | 2,071 | 2,309 |

![5_image_0.png](5_image_0.png)

Figure 4: Main characters in PororoSV dataset.

![5_image_1.png](5_image_1.png)

Figure 5: Main characters in FlintstonesSV dataset.

We adopt the automatic evaluation metrics following existing works and report results using the evaluation script provided in prior work (https://github.com/adymaharana/VLCStoryGan). Frechet Inception Distance (FID) captures the level of similarity between two groups based on statistical analysis of visual features in their respective raw images, using the Inception v3 model. Lower FID scores indicate higher resemblance between the predicted images and the ground-truth images. Character F1 score calculates the proportion of characters present in the generated images that exactly match the characters in the story inputs. To achieve this, a pretrained Inception v3 model (Szegedy et al., 2015) is fine-tuned on each dataset using a multi-label classification loss, enabling it to make predictions of characters in test images. Frame Accuracy evaluates whether all characters from a story are correctly represented in the corresponding images, utilizing the same model employed for the Character F1 score. While the Character F1 score measures the proportion of characters captured in a story, Frame Accuracy quantifies the percentage of samples where all characters are appropriately included.

## 4.2 Implementation Details

Our model is fine-tuned from the pre-trained Stable Diffusion text-to-image generation model. We use the CLIP base model and the BLIP base model. We only train the parameters of the diffusion model and the cross attention module, and freeze the parameters of the variational auto-encoder, CLIP and BLIP, which speeds up training and saves GPU memory. Following previous work, we train our model for 50 epochs. We use the Adam optimizer and set the learning rate to 1e-4. For γ, we used the default value in Stable Diffusion without adjustment. For the threshold on the similarity score, we randomly sampled 50 stories to calculate the similarity scores, then manually observed the relationship between the similarity score and the image similarity, and finally set the threshold to 0.65. For g, we chose the best value 0.15 from {0.1, 0.15, 0.2, 0.5}.

## 4.3 Baselines

StoryGAN (Li et al., 2019) uses the standard GAN technique, which includes a recurrent text encoder, an image generation module, and two discriminators - an image and a story discriminator. StoryGANc (Maharana et al., 2022) follows the general framework of the StoryGAN model and adds the source image as input for the story continuation task. CP-CSV (Song et al., 2020) tries to better preserve character information with three modules: story and context encoder, figure-ground segmentation, and figure-ground aware generation. DUCO-StoryGAN (Maharana et al., 2021b) utilizes a video captioning model to generate an additional learning signal forcing the alignment of image and text, and a memory-augmented transformer to model complex interactions between frames. VLC-StoryGAN (Maharana and Bansal, 2021b) incorporates constituency parse trees, commonsense information and visual information, including bounding boxes and dense captioning, to enhance the visual quality and image consistency. Word-Level (Li and Lukasiewicz, 2022) incorporates word information and extends word-level spatial attention to focus on all words and visual spatial locations in the entire story. StoryDALL-E (Maharana et al., 2022) modifies the pre-trained text-to-image model DALL-E by adding a cross attention module for story continuation. AR-LDM (Pan et al., 2022) employs a history-aware encoding module to incorporate the current text prompt and previously generated text-image history into the diffusion model for visual story generation.

## 5 Results

## 5.1 Story Visualization

We evaluate our model on the PororoSV dataset for the story visualization task. Results are shown in Table 3. We can observe that diffusion-based models
Word-Level (Li and Lukasiewicz, 2022) incorporates word information and extends word-level spatial attention to focus on all words and visual spatial locations in the entire story.

StoryDALL-E (Maharana et al., 2022) modifies the pre-trained text-to-image model DALL-E by adding a cross-attention module for story continuation.

AR-LDM (Pan et al., 2022) employs a history-aware encoding module that feeds the current text prompt and previously generated text-image history into the diffusion model for visual story generation.

## 5 Results

## 5.1 Story Visualization

We evaluate our model on the PororoSV dataset for the story visualization task. Results are shown in Table 3. We can observe that diffusion-based models outperform the prior methods by a large margin, and our proposed ACM-VSG achieves the best FID score of 15.48, indicating that our model is able to generate high-quality images.

| Model          | FID ↓  |
|----------------|--------|
| StoryGAN       | 158.06 |
| CP-CSV         | 149.29 |
| DUCO-StoryGAN  | 96.51  |
| VLC-StoryGAN   | 84.96  |
| VP-CSV         | 65.51  |
| Word-Level SV  | 56.08  |
| AR-LDM         | 16.59  |
| ACM-VSG (Ours) | 15.48  |

Table 3: Story visualization results (FID) on the PororoSV test set.

| Model                    | PororoSV FID ↓ | PororoSV Char-F1 ↑ | PororoSV F-Acc ↑ | FlintstonesSV FID ↓ | FlintstonesSV Char-F1 ↑ | FlintstonesSV F-Acc ↑ |
|--------------------------|----------------|--------------------|------------------|---------------------|-------------------------|-----------------------|
| StoryGANc (BERT)         | 72.98          | 43.22              | 17.09            | 91.37               | 70.45                   | 55.78                 |
| StoryGANc (CLIP)         | 74.63          | 39.68              | 16.57            | 90.29               | 72.80                   | 58.39                 |
| StoryDALL-E (prompt)     | 61.23          | 29.68              | 11.65            | 53.71               | 42.48                   | 32.54                 |
| StoryDALL-E (finetuning) | 25.90          | 36.97              | 17.26            | 26.49               | 73.43                   | 55.19                 |
| MEGA-StoryDALL-E         | 23.48          | 39.91              | 18.01            | 23.58               | 74.26                   | 54.68                 |
| AR-LDM                   | 17.40          | -                  | -                | 19.28               | -                       | -                     |
| ACM-VSG (Ours)           | 15.36          | 45.71              | 22.62            | 18.41               | 94.95                   | 88.89                 |

Table 2: Results on the test sets of the PororoSV and FlintstonesSV datasets from various models. Scores are based on FID, character classification F1, and frame accuracy evaluations.

![6_image_0.png](6_image_0.png)

Figure 6: Example of generated images from the previous model AR-LDM and our proposed model.

## 5.2 Story Continuation

Table 2 shows the results for the story continuation task. As we can see, our model achieves the best results on both datasets, 15.36 and 18.41 FID for PororoSV and FlintstonesSV, respectively, and it greatly preserves characters to improve the consistency of the story. In addition, we show examples on the FlintstonesSV and PororoSV datasets in Figure 6 and Figure 7. We can observe that our model is able to maintain the text-image alignment and consistency across images.

![7_image_0.png](7_image_0.png)

![7_image_1.png](7_image_1.png)

Figure 7: Example of generated images from the previous model AR-LDM and our proposed model.

## 5.3 Ablation Study

Table 4 shows ablation studies to ensure that each component in our proposed method benefits visual story generation. -Guidance means removing the adaptive guidance. -Attention means removing the cross-attention module in the adaptive encoder.

| Model       | FID ↓ | Char-F1 ↑ | F-Acc ↑ |
|-------------|-------|-----------|---------|
| ACM-VSG     | 15.36 | 45.71     | 22.62   |
| - Guidance  | 15.96 | 44.56     | 22.13   |
| - Attention | 16.88 | 44.27     | 20.25   |

Table 4: Ablation study results for the story continuation task on PororoSV.

## 5.4 Error Analysis

Our model significantly improves the performance of visual story generation, but there are still some limitations.
In order to solve these limitations in future work, we analyze the generated images. We randomly sample hundreds of stories from the PororoSV and FlintstonesSV datasets and summarize the errors.

Story Inconsistency. As shown in Figure 8, there are inconsistencies across generated images. The style of the bed and the color of the quilt are inconsistent between the second and third generated images. The environment around Wilma is inconsistent between the fourth and fifth images.

Repetitive Character. As shown in Figure 9 case 1, the model may generate the same character repeatedly if it appears multiple times in the text.

Character Action Error. As shown in Figure 9 case 2, the subtle actions of characters in the image are misaligned with the text. In case 2, Wilma holds onto Fred's shoulder in the text, while Fred holds onto Wilma's shoulder in the generated image.

## 6 Conclusion

In this paper, we explore an effective adaptive context modeling method to improve visual story generation. First, we use an adaptive encoder to select, from the historical text-image pairs, the context closely related to the current image according to the current text. Then we feed the context vectors and the text vector into the diffusion model and use an adaptive guidance to guide the generation of the current image. Experimental results verify that adaptive context modeling helps generate higher-quality images and more consistent stories. In addition, we analyze the generated images and find potential research directions for the future: (1) focus on the global consistency of the story, (2) pay attention to the actions and expressions of the characters, (3) obtain the exact semantics of long stories.

## Limitations

A limitation of this work is that it is only evaluated on synthesized datasets of cartoons with limited characters and scenes. In real-world applications, there might be many different scenes and characters, posing new challenges to the proposed approach. Another limitation is the requirement for supervised training data and resources. Although the number of trainable parameters of our approach (850M) is smaller than that of AR-LDM (∼1.5B), the model still needs a large amount of story-level training data and computing resources.

## Acknowledgements

Xiaocheng Feng is the corresponding author of this work. We thank the anonymous reviewers for their insightful comments. Zhangyin Feng, Xinmiao Yu, Xiaocheng Feng and Bing Qin are supported by the National Key R&D Program of China via grant 2020AAA0106502, National Natural Science Foundation of China (NSFC) via grant 62276078, the Key R&D Program of Heilongjiang via grant 2022ZX01A32 and the International Cooperation Project of PCL, PCL2022D01.

## References

Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, et al. 2022. ediffi: Text-to-image diffusion models with an ensemble of expert denoisers. *arXiv preprint* arXiv:2211.01324.

Fan Bao, Chongxuan Li, Yue Cao, and Jun Zhu. 2022. All are worth words: a vit backbone for score-based diffusion models. *arXiv preprint arXiv:2209.12152*.

Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, MingHsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. 2023. Muse: Text-to-image generation via masked generative transformers. arXiv preprint arXiv:2301.00704.

Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion models beat gans on image synthesis.
*Advances* in Neural Information Processing Systems, 34:8780– 8794. Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, et al. 2021. Cogview: Mastering text-to-image generation via transformers. *Advances in Neural Information Processing Systems*, 34:19822–19835. Zhida Feng, Zhenyu Zhang, Xintong Yu, Yewei Fang, Lanxin Li, Xuyi Chen, Yuxiang Lu, Jiaxiang Liu, Weichong Yin, Shikun Feng, et al. 2022. Ernie-vilg 2.0: Improving text-to-image diffusion model with knowledge-enhanced mixture-of-denoising-experts. arXiv preprint arXiv:2210.15257. Oran Gafni, Adam Polyak, Oron Ashual, Shelly Sheynin, Devi Parikh, and Yaniv Taigman. 2022. Make-a-scene: Scene-based text-to-image generation with human priors. *arXiv preprint arXiv:2203.13131*. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. Advances in neural information processing systems, 27. Tanmay Gupta, Dustin Schwenk, Ali Farhadi, Derek Hoiem, and Aniruddha Kembhavi. 2018. Imagine this! scripts to compositions to videos. *arXiv: Computer Vision and Pattern Recognition*. Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. *Advances* in Neural Information Processing Systems, 33:6840– 6851. Jonathan Ho and Tim Salimans. 2022. Classifierfree diffusion guidance. *arXiv preprint* arXiv:2207.12598. Bowen Li and Thomas Lukasiewicz. 2022. Word-level fine-grained story visualization. Chunye Li, Liya Kong, and Zhiping Zhou. 2020. Improved-storygan for sequential images visualization. Journal of Visual Communication and Image Representation, 73:102956. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022a. Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation. *arXiv preprint arXiv:2201.12086*. Wei Li, Xue Xu, Xinyan Xiao, Jiachen Liu, Hu Yang, Guohao Li, Zhanpeng Wang, Zhifan Feng, Qiaoqiao She, Yajuan Lyu, et al. 2022b. Upainting: Unified text-to-image diffusion generation with cross-modal guidance. *arXiv preprint arXiv:2210.16031*. Yitong Li, Zhe Gan, Yelong Shen, Jingjing Liu, Yu Cheng, Yuexin Wu, Lawrence Carin, David Carlson, and Jianfeng Gao. 2019. Storygan: A sequential conditional gan for story visualization. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 6329–6338. Adyasha Maharana and Mohit Bansal. 2021a. Integrating visuospatial, linguistic, and commonsense structure into story visualization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6772–6786, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Adyasha Maharana and Mohit Bansal. 2021b. Integrating visuospatial, linguistic and commonsense structure into story visualization. *arXiv: Computation* and Language. Adyasha Maharana, Darryl Hannan, and Mohit Bansal. 2021a. Improving generation and evaluation of visual stories via semantic consistency. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2427–2442, Online. Association for Computational Linguistics. Adyasha Maharana, Darryl Hannan, and Mohit Bansal. 2021b. Improving generation and evaluation of visual stories via semantic consistency. *arXiv: Computation and Language*. Adyasha Maharana, Darryl Hannan, and Mohit Bansal. 2022. 
Storydall-e: Adapting pretrained text-toimage transformers for story continuation. In *European Conference on Computer Vision*, pages 70–87. Springer. Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. 2021. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. *arXiv preprint* arXiv:2112.10741. Xichen Pan, Pengda Qin, Yuhong Li, Hui Xue, and Wenhu Chen. 2022. Synthesizing coherent story with auto-regressive latent diffusion models. *arXiv* preprint arXiv:2211.10950. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical textconditional image generation with clip latents. *arXiv* preprint arXiv:2204.06125. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021a. Zero-shot text-to-image generation. In International Conference on Machine Learning, pages 8821–8831. PMLR. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021b. Zero-shot text-to-image generation. In *International Conference on Machine* Learning, pages 8821–8831. PMLR. Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. 2016. Generative adversarial text to image synthesis. In International conference on machine learning, pages 1060–1069. PMLR. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. Highresolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. 2022. Photorealistic text-to-image diffusion models with deep language understanding. *arXiv preprint arXiv:2205.11487*. Yun Zhu Song, Zhi Rui Tam, Hung Jen Chen, Huiao Han Lu, and Hong-Han Shuai. 2020. Character-preserving coherent story visualization. Springer International Publishing eBooks. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2015. Rethinking the inception architecture for computer vision. *arXiv: Computer Vision and Pattern Recognition*. Aaron Van Den Oord, Oriol Vinyals, et al. 2017. Neural discrete representation learning. *Advances in neural* information processing systems, 30. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. 2022. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789. Gangyan Zeng, Zhaohui Li, and Yuan Zhang. 2019. Pororogan: an improved story visualization model on pororo-sv dataset. In Proceedings of the 2019 3rd International Conference on Computer Science and Artificial Intelligence, pages 155–159. 
Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas. 2017. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In *Proceedings of the IEEE international* conference on computer vision, pages 5907–5915. ## A Pororosv Cases Case 1: 1. Tongtong opens the door. Crong is now on the Pororo's car. They are entering the Tongtong's house. Tongtong tries to find where the magic wand is. 2. The pink magic wand is located behind the chair. Then the magic wand becomes to come out. 3. Tongtong finally finds out the magic wands. 4. As Tongtong finds out the magic wand Tongtong is confident to make Pororo normal. Tongtong says that Pororo will turn back to normal. 5. Pororo jumps high after Tongtong's promise. Tongtong asks Pororo not to mess up. Case 2: 1. Pororo and Crong is in Pororo's house. They are standing next to the bed. Pororo is pointing Crong. Crong looks sad. 2. Pororo and Crong is in Pororo's house standing next to the bed. Pororo is pointing drawer and Crong looks at it with a sad face. 3. Poby is in Pororo's house. Poby is approaching a drawer. Above the drawer there is a book which is slightly open. 4. Poby is in Pororo's house searching for something. Poby leans his head down to take a look. 5. Poby is in Pororo's house. Poby is thinking something standing still putting his right hand on his jaw. Case 3: 1. Pororo dust the snow off from Poby. 2. Petty is holding the green block. 3. Harry explains situation. Pororo walks toward chair. 4. Pororo sits and joins play. 5. Eddy and Crong are yelling. Case 4: 1. Poby looks at Harry and Harry is talking to Poby while sitting on Poby's shoulder. 2. Petty talks and smiles and mops the floor. 3. Petty is smiling and mobbing the floor. 4. Petty smiles and puts the stuff on the table. 5. Poby talks and opens Poby mouth. ## Case 5: 1. Pororo says it was so stinky. 2. feeling embarrassed Poby waved Poby hands. 3. Pororo thinks who farted just before. 4. everyone saw Crong pinching everyone noses. 5. Crong is sitting on the toilet. Case 6: 1. Poby is leaving Loopy house. 2. Poby says bye to Loopy. 3. Poby thinks Pororo might have fixed broken chair. 4. Poby smiles. Loopy walks toward the chair. 5. Loopy is satisfied with fixed chair. Case 7: 1. Poby is tired so Poby says to Harry that Poby wants to go to bed with sleepy eyes. 2. Harry is surprised. Harry looks at the window 3. there are two cactus on the shelf. outside the window is dark already. 4. light is turned off and Poby and Harry finish ready to sleep. Harry say good night to Poby. 5. light is turned off and Poby and Harry finish ready to sleep. Poby lays down on the bed. Case 8: 1. Poby notices that someone is skiing down. 2. Pororo is skiing away. Loopy is chasing. 3. Pororo notices that Poby is waiting for Pororo. 4. Loopy and Poby are lying down. Eddy approaches. 5. Poby and Loopy stands up. Case 9: 1. Poby comes out of the house and find out his friends. 2. Poby explains to the friends that Poby must have fallen asleep inside. 3. Eddy is happy to see that Poby is also already at Eddy's house along with other friends. 4. Eddy is inviting his friends to come into his house. everyone follows him. 5. Eddy is leading his friends into his house. everyone is getting inside. Case 10: 1. Pororo and friends came running toward Poby. Poby is watching them. 2. Pororo and friends are talking to Poby. Poby has no idea why his friends are acting like this. 3. Harry sat on Poby's head. 
Harry is saying sorry to Poby. Poby looks surprised. Harry looks sad. 4. Harry is feeling guilty. meanwhile Poby has no idea what Harry is talking about. Harry is sitting on Poby's head. 5. Harry is feeling very sorry to Poby. Harry is talking to Poby on Poby's head. Case 11: 1. Poby is brushing pole to Poby's noses for making Poby itchy. Pororo tries sneezing Poby to take out of the air. 2. Pororo shows that small helicopter is still working properly. 3. Pororo continually suggests Poby trying to sneeze again. 4. Poby tries to sneeze to get out of air from his body. However sneezing with Poby's free will is really difficult. 5. Poby tries to sneeze to get out of air from his body. However sneezing with Poby's free will is really difficult. Poby gives up sneezing and says Poby can't sneeze anymore. Case 12: 1. Poby was keep singing. Suddenly Poby falls down. 2. Poby feels ashamed and wants that nobody saw him falling down. 3. Seeing Poby through the telescope Eddy secretly smiles and talks to himself that Eddy saw Poby falling down. 4. Eddy is interested in seeing things and friends through telescope. Eddy brings telescope and goes to the mountain to observe his friends more. 5. Up on the mountain Eddy chooses a target. It is Pororo. Eddy looks through the telescope. ## B Flintstonessv Cases Case 1: 1.fred and barney stand outside holding blue lunch boxes.fred talks to barney 2.Fred stands in the kitchen having a friendly conversation with someone. 3.Fred is standing in a room. He is speaking while looking over his shoulder and smiling. 4.Fred is trying to kiss Wilma in a room. Wilma is holding a type of plant in her hand. 5.Wilma is in a living room adjusting the leafs on a house plant that is sitting on a table while talking then she stands up straight and turns her head. Case 2: 1.A Lounging creature is lounging around the room and talking. 2.Wilma is ironing a shirt in the laundry room. 3.Wilma is in the living room. Wilma is ironing. Wilma is bobbing her head. 4.Wilma is in a room. She talks to someone. 5.Wilma is in a room. She is talking. Case 3: 1.Fred sits in the living room and speaks to Barney, who waits to respond. 2.Barney stands and talks with someone in the living room. 3.Fred and Barney are in the quarry. Fred is speaking to Barney. 4.Fred and Barney look worried. Fred and Barney are behind the wall in the yard. Barney and Fred are talking to each other. 5.Barney is talking to Fred outside behind a stone fence. Fred begins to slump down and look sad. Case 4: 1.Fred is riding in the car thinking and talking to himself. 2.Fred touches his chin then crosses his arm while outside. 3.Fred is walking outside while speaking out loud. 4.Fred and Barney are riding in the car with golf clubs strapped to the bumper. 5.Fred is standing in the room, talking to someone off screen left. Case 5: 1.Fred and Barney are standing outside next to the stone wall. Barney is wearing an outfit that makes him look like a boy scout. Fred says something to Barney and then points at him. 2.Fred and Barney are standing on a sidewalk. Barney is speaking to Fred, while Fred listens silently with his hands on his hips. 3.Barney is outside talking. 4.Fred and Barney stand in the yard. They speak to each other. 5.Fred and barney are standing outside talking. They are in front of the wall and barney has a hate on. Case 6: 1.Fred sits in the living room and speaks to Barney, who waits to respond. 2.Barney stands and talks with someone in the living room. 3.Fred and Barney are in the quarry. 
Fred is speaking to Barney. 4.Fred and Barney look worried. Fred and Barney are behind the wall in the yard. Barney and Fred are talking to each other. 5.Barney is talking to Fred outside behind a stone fence. Fred begins to slump down and look sad. Case 7: 1.Wilma is in the room, she is talking. 2.Wilma is in the dining room talking to someone then she starts to laugh. 3.Barney slides towards doorway, and opens door. 4.Wilma is sitting in the dining room at the table while talking on the phone. 5.Wilma is in the dining room. She sits at the table on the phone. Fred is wheeled into the room laying in a bed. As Fred enters, Wilma lowers the phone and looks at him with concern and surprise. Case 8: 1.Fred is in a living room kneeling next to a blue chair. 2.Pebbles and Fred are standing in a room talking. Pebbles turns her head and Fred shrugs his shoulders. 3.Fred and wilma talk in the bedroom, fred laughs in response to what Wilma says. 4.Fred makes an angry comment while sitting in the room. 5.Fred stands in the kitchen with an ice block on his head. Then someone reaches up and pats the ice. Fred makes a face and the ice starts melting. Case 9: 1.Barney is talking in the living room. 2.Betty is standing in a room, hanging up balloons. She is talking to someone, as she hangs up a green balloon. 3.Betty is in a room. She stands on a stool and holds a balloon in one hand while talking to someone on the ground. The room is decorated for a party. 4.Wilma is in the living room. Wilma is talking. 5.There is a bird that has its head lowered in the room . Case 10: 1.Wilma and Betty are in the room. They are talking to one another while standing. 2.Betty and Wilma are standing in a room. Wilma has bones in her hair. Betty is talking to Wilma. 3.Wilma is wearing a bone curler in her hair while Betty talks to her in a room. 4.Betty and wilma are talking in a room. 5.Wilma and Betty are standing in a room by the window. They keep looking out the window while Wilma holds the curtain. Case 11: 1.Mr slate is driving his car and laughing. 2.A police officer in a police station sits at a desk and talks into a speaker while looking at a stack of papers. He turns the speaker away from his mouth. 3.A Small Policeman behind wheel and a Policeman with Brown Mustache sit in their police car blinking. 4.The officer that is driving the car is speaking to the officer with mustache. 5.Fred and Barney talk as they sit in the car. Case 12: 1.Barney is sitting outside in a chair reading out loud. 2.Barney is reading the news papers in the backyard. 3.The scene begins with no one in the picture. Barney emerges from hiding behind a stone wall that is in front of him. He is standing outside in the yard. The house is to his right. He says something and then points to himself with his thumb. 4.Barney is outside. He is sitting on a stone wall and is talking. 5.Fred and Barney are in the yard. Fred is yelling at Barney. Fred is holding Barney with a fist raised. ![13_image_0.png](13_image_0.png) ![14_image_0.png](14_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? limitation ✓ A2. Did you discuss any potential risks of your work? 4 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. 
Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
li-etal-2023-question
Question-Interlocutor Scope Realized Graph Modeling over Key Utterances for Dialogue Reading Comprehension
https://aclanthology.org/2023.findings-acl.306
We focus on dialogue reading comprehension (DRC) that extracts answers from dialogues. Compared to standard RC tasks, DRC has raised challenges because of the complex speaker information and noisy dialogue context. Essentially, the challenges come from the speaker-centric nature of dialogue utterances {---} an utterance is usually insufficient in its surface form, but requires to incorporate the role of its speaker and the dialogue context to fill the latent pragmatic and intention information. We propose to deal with these problems in two folds. First, we propose a new key-utterances-extracting method, which can realize more answer-contained utterances. Second, based on the extracted utterances, we then propose a Question-Interlocutor Scope Realized Graph (QuISG). QuISG involves the question and question-mentioning speaker as nodes. To realize interlocutor scopes, utterances are connected with corresponding speakers in the dialogue. Experiments on the benchmarks show that our method achieves state-of-the-art performance against previous works.
# Question-Interlocutor Scope Realized Graph Modeling Over Key Utterances For Dialogue Reading Comprehension Jiangnan Li1,2,†,◦, Mo Yu3,†, Fandong Meng3**, Zheng Lin**1,2,∗ , Peng Fu1, Weiping Wang1**, Jie Zhou**3 1Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China 2School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China 3Pattern Recognition Center, WeChat AI, Tencent Inc. {lijiangnan,linzheng,fupeng,wangweiping}@iie.ac.cn, [email protected] {fandongmeng,withtomzhou}@tencent.com ## Abstract We focus on dialogue reading comprehension (DRC) that extracts answers from dialogues. Compared to standard RC tasks, DRC has raised challenges because of the complex speaker information and noisy dialogue context. Essentially, the challenges come from the speaker-centric nature of dialogue utterances - an utterance is usually insufficient in its surface form, but requires to incorporate the role of its speaker and the dialogue context to fill the latent pragmatic and intention information. We propose to deal with these problems in two folds. First, we propose a new keyutterances-extracting method, which can realize more answer-contained utterances. Second, based on the extracted utterances, we then propose a Question-Interlocutor Scope Realized Graph (QuISG). QuISG involves the question and question-mentioning speaker as nodes. To realize interlocutor scopes, utterances are connected with corresponding speakers in the dialogue. Experiments on the benchmarks show that our method achieves state-of-the-art performance against previous works.1 ## 1 Introduction Beyond the formal forms of text, dialogues are one of the most frequently used media that people communicate with others to informally deliver their emotions (Poria et al., 2019), opinions (Cox et al., 2020), and intentions (Qin et al., 2021). Moreover, dialogue is also a crucial information carrier in literature, such as novels and movies (Kociský et al., 2018), for people to understand the characters and plots (Sang et al., 2022) in their reading behaviors. Therefore, comprehending dialogues is a key step for machines to act like humans. Despite the value of dialogues, reading comprehension over dialogues (DRC), which extracts an- ∗Zheng Lin is the corresponding author. † Authors contributed equally to this work. ◦Joint work with Pattern Recognition Center, WeChat AI, Tencent Inc. 1https://github.com/LeqsNaN/QuISG ![0_image_0.png](0_image_0.png) Figure 1: Two questions with related dialogue clips that the baseline SelfSuper (Li and Zhao, 2021) fails. Utter. \#9 is too long, so we omit some parts of the utterance. swer spans for independent questions from dialogues, lags behind those of formal texts like news and Wikipedia articles.2 The reason mainly comes from distinctive features of dialogues. Specifically, dialogues involve informal oral utterances which are usually short and incomplete, and thus understanding them highly depends on their loosely structured *dialogue context*. As a high-profile spot in the conversational-related domain, *dialogue context* modeling is also a major scientific problem in DRC. In previous works, Li and Zhao (2021) (abbreviated as SelfSuper) point out that *dialogue context* modeling in DRC faces two challenges: complex speaker information and noisy question-unrelated context. 
For speaker information, SelfSuper design a self-supervised task guessing who a randomly masked speaker is according to the dialogue context (e.g., masking "Monica Geller" of \#10 in Fig. 1). To reduce noise, another task is made to predict whether an utterance contains the answer. 2Note there is a direction of conversational QA (Reddy et al., 2018; Choi et al., 2018; Sun et al., 2019) differing from DRC here. For the former, the Question-Answer process is formed as a dialogue, and the model derives answers from Wikipedia articles or English exams. Although decent performance can be achieved, several urging problems still exist. Firstly, speaker guessing does not aware of the speaker information in questions and the interlocutor scope. As randomly masking is independent of the question, it cannot tell which speaker in the dialogue is related to the speaker mentioned in the question, e.g., Joey Tribbiani to Joey in Q1 of Fig. 1. As for the interlocutor scope, we define it as utterances said by the corresponding speaker. We point out that utterances have a speaker-centric nature: First, each utterance has target listeners. For example, in Utter. \#10 of Fig. 1, it requires to understand that Joey is a listener, so "you had the night" is making fun of Joey from Monica's scope. Second, an utterance reflects the message of the experience of its speaker. For example, to answer Q1 in Fig. 1, it requires understanding "stayed up all night talking" is the experience appearing in Joey's scope. Due to ignoring the question-mentioned interlocutor and its scope, SelfSuper provides a wrong answer. Secondly, answer-contained utterance (denoted as key utterance by SelfSuper) prediction prefers utterances similar to the question, failing to find key utterances not similar to the question. The reason is that answers are likely to appear in utterances similar to the question. For example, about 77% of questions have answers in top-5 utterances similar to the question according to SimCSE (Gao et al., 2021) in the dev set of FriendsQA (Yang and Choi, 2019). Furthermore, the utterances extracted by the key utterance prediction have over 82% overlaps with the top-5 utterances. Therefore, there are considerable key utterances have been ignored, leading to overrated attention to similar utterances, e.g., Q2 in Fig. 1. In fact, many key utterances are likely to appear near question-similar utterances because contiguous utterances in local contexts tend to be on one topic relevant to the question (Xing and Carenini, 2021; Jiang et al., 2023). However, the single utterance prediction cannot realize this. To settle the aforementioned problems, so that more answer-contained utterances can be found and the answering process realizes the question and interlocutor scopes, we propose a new pipeline framework for DRC. We first propose a new keyutterances-extracting method. The method slides a window through the dialogue, where contiguous utterances in the window are regarded as a unit. The prediction is made on these units. Based on utterances in predicted units, we then propose QuestionInterlocutor Scope Realized Graph (QuISG) modeling. QuISG constructs a graph over contextualized embeddings of words. The question and speaker names mentioned in the question are explicitly present in QuISG as nodes. To remind the model of interlocutor scopes, QuISG connects every speaker node in the dialogue with words from the speaker's scope. We verify our model on two popular DRC benchmarks. 
Our model achieves decent performance against baselines on both benchmarks, and further experiments indicate the efficacy of our method.

## 2 Related Work

Dialogue Reading Comprehension. Unlike traditional Machine Reading Comprehension (Rajpurkar et al., 2016), Dialogue Reading Comprehension (DRC) aims to answer a question according to the given dialogue. There are several related but different types of conversational question answering: CoQA (Reddy et al., 2018) conversationally asks questions after reading Wikipedia articles. QuAC (Choi et al., 2018) forms a dialogue of QA between a student and a teacher about Wikipedia articles. DREAM (Sun et al., 2019) tries to answer multi-choice questions over dialogues of English exams. These works form QA pairs as a conversation between humans and machines. To understand the characteristics of speakers, Sang et al. (2022) propose TVShowGuess in a multi-choice style to predict unknown speakers in dialogues. Conversely, we focus on DRC that extracts answer spans from a dialogue for an independent question (Yang and Choi, 2019). For DRC, Li and Choi (2020) propose several pretrained and downstream tasks on the utterance level. To consider the coreference of speakers and interpersonal relationships between speakers, Liu et al. (2020) introduce these two types of knowledge from other dialogue-related tasks and construct a graph to model them. Besides, Li et al. (2021); Ma et al. (2021) model the knowledge of discourse structure of utterances in the dialogues. To model the complex speaker information and noisy dialogue context, two self-supervised tasks, i.e., masked-speaker guessing and key utterance prediction, are utilized or enhanced by Li and Zhao (2021); Zhu et al. (2022); Yang et al. (2023). However, existing work ignores explicitly modeling the question and speaker scopes and suffers from low key-utterance coverage.

![2_image_0.png](2_image_0.png)

Dialogue Modeling With Graph Representations. In many QA tasks (Yang et al., 2018; Talmor et al., 2019), graphs are the main carrier for reasoning (Qiu et al., 2019; Fang et al., 2020; Yasunaga et al., 2021). As for dialogue understanding, graphs are still a hotspot for various purposes. In dialogue emotion recognition, graphs are constructed to consider the interactions between different parties of speakers (Ghosal et al., 2019; Ishiwatari et al., 2020; Shen et al., 2021). In dialogue act classification, graphs model the cross-utterance and cross-task information (Qin et al., 2021). In dialogue semantic modeling, Bai et al. (2021) extend AMR (Banarescu et al., 2013) to construct graphs for dialogues. As for DRC, graphs are constructed for knowledge propagation between utterances by the works (Liu et al., 2020; Li et al., 2021; Ma et al., 2021) mentioned above.

## 3 Framework

## 3.1 Task Definition

Given a dialogue consisting of N utterances, D = [utter1, utter2, ..., utterN], the task aims to extract the answer span a for a question q = [qw1, qw2, ..., qwLq] from D, where qwi is the i-th word in q and Lq is the length of q. In D, each utterance utteri = {speaker: si, text: ti} contains its corresponding speaker (e.g., si = "Chandler Bing") and text content ti = [tw1, tw2, ..., twLi], where twj is the j-th word in ti and Li is the length of ti. For some unanswerable questions, there is no answer span to be found in D. Under such a circumstance, a is assigned to be null.
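As a small, purely illustrative companion to the formulation above, the snippet below encodes one hypothetical DRC instance in the notation of Section 3.1; the field names, the toy dialogue, and the question are our own inventions for illustration and are not taken from any dataset.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Utterance:
    speaker: str                 # s_i, e.g., "Chandler Bing"
    text: List[str]              # t_i = [tw_1, ..., tw_Li]

@dataclass
class DRCInstance:
    dialogue: List[Utterance]    # D = [utter_1, ..., utter_N]
    question: List[str]          # q = [qw_1, ..., qw_Lq]
    # Answer span as (utterance index, start word, end word); None for unanswerable questions.
    answer: Optional[Tuple[int, int, int]]

# A toy instance: the answer "talking" is word 5 (0-indexed) of utterance 0.
example = DRCInstance(
    dialogue=[
        Utterance("Joey Tribbiani", "we stayed up all night talking".split()),
        Utterance("Monica Geller", "so you had the night".split()),
    ],
    question="what did joey do all night".split(),
    answer=(0, 5, 5),
)
print(example.dialogue[example.answer[0]].text[example.answer[1]])  # -> "talking"
```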
## 3.2 Conversational Context Encoder

To encode words contextually using pretrained models (PTM), following previous work (Li and Zhao, 2021), we chronologically concatenate utterances in the same conversation to form a text sequence: C = "s1: t1 [SEP] ... [SEP] sN: tN". Holding the conversational context C, the PTM can deeply encode C with the question q to make it question-aware by concatenating them as QC = "[CLS] q [SEP] C [SEP]" (it is okay that C goes first). Following Li and Zhao (2021), we utilize the ELECTRA discriminator to encode the sequence QC:

$$H_{QC}=\mathrm{ELECTRA}(QC),\tag{1}$$

where HQC ∈ R LQC×dh, LQC is the length of QC, and dh is the hidden size of the PTM. HQC can be split into HQ ∈ R Lq×dh and HC ∈ R LC×dh according to the position of [SEP] between q and C, where LC is the length of C.

## 3.3 Key Utterances Extractor

Treating every single utterance as a unit to pair with the question prefers utterances similar to the question. However, the utterance containing the answer is not always such an utterance; it can appear near a similar utterance within several steps due to the high relevance of local dialogue topics. The key utterance extractor aims to extract more answer-contained utterances. We apply a window along the dialogue. Utterances in the window are treated as a unit so that the similar utterance and the answer-contained utterance can co-occur and more answer-contained utterances can be realized.

## 3.3.1 Training the Extractor

With a window of size m, [utteri, utteri+1, ..., utteri+m] is grouped as a unit. Mapping the start (sti) and end (edi) positions of the unit in C, the representation of the unit can be computed by:

$$H_{u_{i}}^{k}=\mathrm{Maxpooling}(H_{C}[st_{i}:ed_{i}]).\tag{2}$$

Similarly, the representation of the question is computed by H^k_q = Maxpooling(HQ). The correlation score between them is then computed by:

$$y_{i}=\mathrm{sigmoid}(\mathrm{Linear}(H_{u_{i}}^{k}||H_{q}^{k})),\tag{3}$$

where Linear(·) is a linear unit mapping the dimension from R 2dh to R. If any utterance in the unit contains the answer, the label y^k_i of this unit is set to 1, and 0 otherwise. Therefore, the training objective of the key utterances extractor on the dialogue D is:

$${\mathcal{J}}_{k}=-\sum_{i=1}^{N-m}\,[(1-y_{i}^{k})\log(1-y_{i})+y_{i}^{k}\log(y_{i})].\tag{4}$$

## 3.3.2 Extracting Key Utterances

The extractor predicts whether a unit is related to the question. If yi > 0.5, the unit is regarded as a question-related unit, and all utterances inside are regarded as **key utterances**. To avoid involving too many utterances as key utterances, we rank all the units whose yi > 0.5 and pick the top-k units. For a question q, we keep a key utterance set key = (·) to store the extracted key utterances. Specifically, when the i-th unit satisfies the above condition, [utteri, ..., utteri+m] are all considered to be added into key. If utteri does not exist in key, then key.add(utteri) is triggered; otherwise it is skipped. After processing all the qualified units, key sorts key utterances by sort(key, 1→N), where 1→N denotes chronological order. We observe that, in most cases, key utterances in key are consecutive utterances. When k=3 and m=2, the set is ordered as (utteri−m, ..., utteri, ..., utteri+m), where utteri is usually the question-similar utterance.

## 3.4 Question-Interlocutor Scope Realized Graph Modeling

To guide models to further realize the question, the speakers in the question, and the scopes of speakers in D, we construct a Question-Interlocutor Scope Realized Graph (QuISG) based on key.
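Before turning to the graph construction, the following is a minimal sketch of the key-utterance selection procedure of Section 3.3, assuming a trained unit scorer. `score_unit` is a hypothetical stand-in for Eq. (3); the defaults m = 2, top-3 units, and the 0.5 threshold follow the values used in this paper, but the function itself is illustrative rather than our exact implementation.

```python
from typing import Callable, List

def extract_key_utterances(num_utts: int,
                           score_unit: Callable[[int, int], float],
                           m: int = 2,
                           top_k: int = 3,
                           threshold: float = 0.5) -> List[int]:
    """Return indices of key utterances for one question.

    `score_unit(i, j)` is assumed to return y_i of Eq. (3) for the unit
    [utter_i, ..., utter_j] paired with the question.
    """
    # Slide a window of m+1 utterances over the dialogue and score each unit.
    units = []
    for i in range(num_utts - m):
        y = score_unit(i, i + m)
        if y > threshold:                      # question-related unit
            units.append((y, i))

    # Keep only the top-k highest-scoring units.
    units = sorted(units, reverse=True)[:top_k]

    # Every utterance inside a selected unit becomes a key utterance (deduplicated),
    # and the resulting set is returned in chronological order.
    key = set()
    for _, i in units:
        key.update(range(i, i + m + 1))
    return sorted(key)

# Toy usage with a dummy scorer that prefers units around utterance 5.
print(extract_key_utterances(10, lambda i, j: 1.0 - 0.1 * abs(i - 5)))
```

With this dummy scorer the selected units overlap, so the returned key utterances form a contiguous block around the most question-similar utterance, matching the behaviour described in Section 3.3.2.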
QuISG is formulated as G = (V, A), where V denotes the set of nodes and A denotes the adjacent matrix of edges. After the construction of QuISG, we utilize a node-type realized graph attention network to process it. We elaborate on QuISG below. ## 3.4.1 Nodes We define several types of nodes for key utterances and the question. Question Node: Question node denotes the questioning word (e.g., "what") of the question. The **node representation** is initialized by meanpooling the representations of the question words: v.rep=mean(HQ[ what]). We denote this **type of** node as v.t=qw. Question Speaker Node: Considering speakers in the question can help models realize which speakers and their interactions are focused by the question. Question speaker node is derived from the speaker name recognized from the question. We use stanza (Qi et al., 2020) 3 performing NER to recognize person names (e.g. "ross") in the question and pick up those names appearing in the dialogue as interlocutors. Then, we have v.rep=HQ[ross] and v.t=qs. Additionally, if a question contains no speaker name or the picked name does not belong to interlocutors in the dialogue, no question speaker node will be involved. Dialogue Speaker Node: Speakers appearing in the dialogue are crucial for dialogue modeling. We construct speakers of key utterances as dialogue speaker nodes. As the speaker in the dialogue is identified by its full name (e.g., "Ross Gellar"), we compute the node embedding by meanpooling the full name and all key utterances of the speaker will provide its speaker name: v.rep=mean(HC[Ross1, Gellar1, ..., Rossx, Gellarx]), where x is the number of key utterance whose speaker name is "Ross Gellar". We set v.t=ds. Dialogue Word Node: As the main body to perform answer extraction, words from all key utterances are positioned in the graph as dialogue word nodes. The embedding is initialized from the corresponding item of HC. This type is set to v.t=dw. Scene Node: In some datasets, there is a kind of utterance that appears at the beginning of a dialogue and briefly describes the scene of the dialogue. If it is a key utterance, we set words in it as scene nodes. Although we define the scene node, it still acts as a dialogue word node with v.t=dw. The 3https://github.com/stanfordnlp/stanza only difference is the way to connect with dialogue speaker nodes. We state it in Sec. 3.4.2. ## 3.4.2 Edges Edges connect the defined nodes. The adjacent matrix of edges is initialized as A = O. As QuISG is an undirected graph, A is symmetric. We denote A[vx, vy] = 1 as A[v1, v2] = 1 and A[v2, v1] = 1. For the word node vx ∈ *utter*i, we connect it with other word nodes vy ∈ *utter*i (x − kw ≤ y ≤ x + kw) within a window whose size is kw, i.e., A[vx, vy] = 1. For word nodes in other utterances (e.g., vz ∈ *utter*i+1), no edge is set between vx and vz. To remind the model of the scope of speakers, we connect every word node with the *dialogue speaker node* vsi it belongs to, i.e., A[vx, vsi ] = 1. To realize the question, we connect all word nodes with the *question node* vq, i.e., A[vx, vq] = 1. For the speakers mentioned in the question, we fully connect their *question speaker nodes* to model interactions between these speakers, e.g., A[vqsm, vqsn] = 1. To remind the model which speaker in dialogue is related, we connect the *question speaker node* vqsm with its dialogue speaker node vsi , i.e., A[vqsm, vsi ] = 1. Furthermore, question speaker nodes is connected with the *question node*, e.g., A[vqsm, vq] = 1. 
If the scene description is selected as a key utterance, it will be regarded as an utterance without speaker identification. We treat a *scene node* as a word node and follow the same edge construction as for word nodes. As the scene description may tell things about speakers, we utilize stanza to recognize speakers and connect all *scene nodes* with the corresponding *dialogue speaker nodes*.

For every node in QuISG, we additionally add a self-connected edge, i.e., A[v, v] = 1.

## 3.4.3 Node-Type Realized Graph Attention Network

The Node-Type Realized Graph Attention Network (GAT) is a T-layer stack of graph attention blocks (Velickovic et al., 2017). The input of the GAT is a QuISG, and the GAT propagates and aggregates messages between nodes through edges. We initialize the graph representation by h^0_v = v.rep. A graph attention block mainly performs multi-head attention computation. We exemplify the attention computation with one head. To measure how important the node vn is to the node vm, the node-type realized attentive weight is computed by:

$$\alpha_{mn}=\frac{\exp\left(\mathrm{LReLU}\left(c_{mn}\right)\right)}{\sum_{v_{o}\in\mathcal{N}_{v_{m}}}\exp\left(\mathrm{LReLU}\left(c_{mo}\right)\right)},\tag{5}$$

$$c_{mn}=\mathrm{a}\left[[h_{v_{m}}^{t-1}||r_{v_{m}.t}]\mathrm{w}_{q}\,||\,[h_{v_{n}}^{t-1}||r_{v_{n}.t}]\mathrm{w}_{k}\right]^{\mathrm{T}},\tag{6}$$

where r_{vm.t} ∈ R 1×4 is a one-hot vector denoting the node type of vm, and a ∈ R 1×2dhead, wq ∈ R (dhead+4)×dhead, wk ∈ R (dhead+4)×dhead are trainable parameters. Furthermore, the graph attention block aggregates the weighted messages by:

$$h_{v_{m}}^{t,head}=\mathrm{ELU}\left(\sum_{v_{o}\in\mathcal{N}_{v_{m}}}\alpha_{mo}h_{v_{o}}^{t-1}\mathrm{w}_{v}\right),\tag{7}$$

where wv ∈ R dhead×dhead is a trainable parameter. By concatenating the weighted messages from all heads, the t-th graph attention block updates the node representation from h^{t−1}_{vm} to h^t_{vm}.

## 3.5 Answer Extraction

After graph modeling, nodes in the QuISG are mapped back into the original token sequence. We locate the dialogue word (scene) node vx at its corresponding token representation HC[utteri[x]] in C, and then update the token representation by HC[utteri[x]] += h^T_{vx}. For the speaker token representation HC[Rossi, Gellari] in key utterances, the mapped dialogue speaker node vsi updates it by HC[Rossi, Gellari] += [h^T_{vsi}, h^T_{vsi}]. As a speaker name si may appear several times, we repeatedly add h^T_{vsi} to the corresponding token representations. We denote the updated HC as H′C.

## 3.5.1 Training

Given H′C, the model computes the start and end distributions by:

$$Y_{str}=\mathrm{softmax}(\mathrm{w}_{str}{H_{C}^{\prime}}^{\mathrm{T}}),\tag{8}$$

$$Y_{end}=\mathrm{softmax}(\mathrm{w}_{end}{H_{C}^{\prime}}^{\mathrm{T}}),\tag{9}$$

where wstr ∈ R 1×LC and wend ∈ R 1×LC are trainable parameters. For the answer span a, we denote its start index and end index as ast and aed. Therefore, the answer extracting objective is:

$${\cal J}_{ax}=-\log\left(Y_{str}(a_{st})\right)-\log\left(Y_{end}(a_{ed})\right).\tag{10}$$

If there are questions without any answers, another head is applied to predict whether a question is answerable. This head computes the probability by pna = sigmoid(Linear(H′C[CLS])).
By annotating every question with a label q ∈ {0, 1} to indicate answerability, another objective is added: Jna = −[(1 − q)log(1 − pna) + qlog(pna)]. In this way, the overall training objective is J = Jax + 0.5 ∗ Jna. ## 3.5.2 Inference Following Li and Zhao (2021), we extract the answer span by performing a beam search with the size of 5. We constrain the answer span in one utterance to avoid answers across utterances. To further emphasize the importance of key utterances, we construct a scaling vector S ∈ R 1×LC , where the token belonging to key utterances is kept with 1 and the token out of key utterances is assigned with a scale factor 0 ≤ f ≤ 1. The scaling vector is multiplied on Ysrt and Yend before softmax, and we then use the processed possibilities for inference. ## 4 Experimental Settings Datasets. Following Li and Zhao (2021), we conduct experiments on **FriendsQA** (Yang and Choi, 2019) and **Molweni** (Li et al., 2020). As our work does not focus on unanswerable questions, we construct an answerable version of Molweni (**Molweni-A**) by removing all unanswerable questions. FriendsQA is an open-domain DRC dataset collected from TV series. It contains 977/122/123 (train/dev/test) dialogues and 8,535/1,010/1,065 questions. Recognizing person names in questions, we find about 76%/76%/75% of questions contain person names in FriendsQA. Molweni is another dataset with topics on Ubuntu. It contains 8,771/883/100 dialogues and 24,682/2,513/2,871 questions, in which about 14% of questions are unanswerable. Dialogues in Mowelni are much shorter than in FriendsQA and contain no scene descriptions. Speaker names in Molweni are meaningless user ids (e.g., "nbx909"). Furthermore, questions containing user ids in Molweni, whose proportion is about 47%/49%/48%, are less than FriendsQA. In Molweni-A, there are 20,873/2,346/2,560 questions. Compared Methods. We compare our method with existing methods in DRC. **ULM+UOP** (Li and Choi, 2020) adapt several utterance-level tasks to pretrain and finetune BERT in the multitask setting. **KnowledgeGraph** (Liu et al., 2020) introduces and structurally models additional knowledge about speakers' co-reference and social relations from other related datasets (Yu et al., 2020). | Model | EM | F1 | |-----------------------------------|--------|--------| | ULM+UOP (Li and Choi, 2020) | 46.80 | 63.10 | | KnowledgeGraph (Liu et al., 2020) | 46.40 | 64.30 | | SelfSuper (Li and Zhao, 2021) | 46.90 | 63.90 | | Reimpl. ELECTRA | 54.62 | 71.29 | | Reimpl. EKIM (Zhu et al., 2022) | 56.45 | 72.45 | | SelfSuper (Li and Zhao, 2021) | 55.80 | 72.30 | | Ours | 57.79∗ | 75.22∗ | DADGraph (Li et al., 2021) is another graph-based method that introduces external knowledge about the discourse structure of dialogues. **ELECTRA** (Clark et al., 2020) is a vanilla fine-tuned ELECTRA. **SelfSuper** (Li and Zhao, 2021) is the SOTA method. It designs two self-supervised tasks to capture speaker information and reduce noise in the dialogue. **EKIM** (Zhu et al., 2022) is the Enhanced Key-utterance Interactive Model, which can be regarded as an enhanced SelfSuper with additional bi-attention to model the interaction between context, question, and key utterance. We reimplement EKIM in our experimental environment. Implementation. Our model is implemented based on ELECTRA-large-discriminator from Transformers. For key utterances extraction, the size of the window (i.e., m) is set to 2 and top-3 units are considered. 
Other hyper-parameters are the same as those in the question-answering training. For question answering, we search the size of the word node window (i.e., kw) in {1, 2, 3} and the number of attention heads in {1, 2, 4}. We set the number of GAT layers to 5 for FriendsQA and 3 for Molweni; f is set to 0.5 for FriendsQA and 0.9 for Molweni. Other hyper-parameters are in Appendix A. We use the Exact Matching (EM) score and F1 score as the metrics.

## 5 Results and Discussion

## 5.1 Main Results

Tab. 1 shows the results achieved by our method and other baselines on FriendsQA. The baselines listed in the first three rows are all based on BERT. We can see that SelfSuper achieves better or competitive results compared with ULM+UOP and KnowledgeGraph. This indicates the effectiveness of the self-supervised tasks for speaker and key utterance modeling of SelfSuper. When it comes to ELECTRA, the performance reaches a new elevated level, which shows that ELECTRA is more suitable for DRC. By comparing with SelfSuper and EKIM, our method can achieve significantly better performance. This improvement shows the advantage of both the higher coverage of answer-contained utterances by our method and the better graph representations of QuISG, which consider the question and interlocutor scopes.

|               | Model                           | EM     | F1    |
|---------------|---------------------------------|--------|-------|
| BERT based    | DADGraph (Li et al., 2021)      | 46.50  | 61.50 |
|               | SelfSuper (Li and Zhao, 2021)   | 49.20  | 64.00 |
| ELECTRA based | Our Reimpl. ELECTRA             | 57.85  | 72.17 |
|               | Reimpl. EKIM (Zhu et al., 2022) | 57.85  | 72.95 |
|               | SelfSuper (Li and Zhao, 2021)   | 58.00  | 72.90 |
|               | Ours                            | 59.32∗ | 72.86 |
|               | Human performance               | 64.30  | 80.20 |

Table 2: Results on Molweni.

| Model                         | EM     | F1    |
|-------------------------------|--------|-------|
| ELECTRA                       | 61.02  | 77.62 |
| EKIM (Zhu et al., 2022)       | 61.76  | 78.26 |
| SelfSuper (Li and Zhao, 2021) | 61.13  | 78.30 |
| Ours                          | 62.54∗ | 78.65 |

Table 3: Results on Molweni-A.

Results on Molweni are listed in Tab. 2. Our approach still sets a new state of the art, with an especially significant improvement in EM score. However, the absolute improvement is smaller compared to that on FriendsQA. This is mainly for two reasons. First, the baseline results are close to the human performance on Molweni, so the space for improvement is smaller. Second, Molweni contains unanswerable questions, which are not the main focus of our work. To see how the unanswerable questions affect the results, we further show the performance of our method and the baselines on Molweni-A, i.e., the subset of Molweni with only answerable questions, in Tab. 3. We observe that our method still achieves a better EM score than the baselines and gains a slightly better F1 score, which indicates that our method can better deal with questions that have answers. As for unanswerable questions, we believe that better performance can be achieved with related techniques plugged into our method, which we leave to future work.

By comparing the performance of our method on FriendsQA and Molweni, we can observe that the improvement is more significant on FriendsQA. We think the reason may be that (1) our key utterance extractor can cover more answer-contained utterances in FriendsQA, as will be shown in Fig. 3; (2) questions mentioning speakers appear more frequently in FriendsQA than in Molweni, and therefore QuISG can help achieve better graph representations in FriendsQA. On all accounts, this further demonstrates that our method alleviates the problems that we focus on.
| Model | FriendsQA EM | FriendsQA F1 | Molweni EM | Molweni F1 |
|---------------|--------------|--------------|------------|------------|
| full model | 57.79 | 75.22 | 59.32 | 72.86 |
| w/o NodeType | 56.79 | 74.01 | 58.38 | 72.75 |
| w/o KeyUttExt | 55.87 | 72.30 | 58.48 | 72.10 |
| w/o Q | 56.37 | 73.55 | 58.20 | 72.52 |
| w/o SpkScope | 57.29 | 74.26 | 58.62 | 72.29 |
| w/o All | 53.12 | 70.05 | 56.32 | 71.08 |

Table 4: Ablation results (EM / F1) on FriendsQA and Molweni.

## 5.2 Ablation Study

To demonstrate the importance of our proposed modules, we conduct an ablation study. The results are shown in Tab. 4. We study the effects of node type information (NodeType); the key utterance extractor and its scaling factor on the logits (KeyUttExt); the question and question speaker nodes (Q); and the edges between dialogue word nodes and dialogue speaker nodes that model interlocutor scope (SpkScope). We further remove both KeyUttExt and QuISG, leading to full connections between every two tokens in the dialogue, and apply Transformer layers to further process the dialogue (w/o All). Removing NodeType causes the performance to drop, which demonstrates that accounting for different node behaviors helps build better graph representations. Removing KeyUttExt decreases performance, which demonstrates that the key utterance extractor is a crucial module for finding more answer-contained utterances and for guiding the model to pay more attention to the key parts of a dialogue. The model w/o KeyUttExt shows a larger performance drop on FriendsQA; we think the reason may be that dialogues in FriendsQA are much longer than those in Molweni, so KeyUttExt can remove more question-unrelated parts of the dialogue before graph modeling on FriendsQA. Removing Q or SpkScope also leads to a performance decline, which indicates the importance of realizing the question and interlocutor scopes. Replacing KeyUttExt and QuISG with Transformer layers even performs worse than ELECTRA, which indicates that further processing of dialogues without speaker- and question-aware modeling is redundant.

![7_image_0.png](7_image_0.png)

## 5.3 Accuracy Of Utterance Extraction

As we claim that our method covers more answer-contained utterances than SelfSuper (EKIM behaves similarly to SelfSuper), in this section we show the recall of answer-contained utterances achieved by different methods. Besides our method and SelfSuper, we further consider retrieval methods used in other reading comprehension tasks. Since similarity-based seekers are commonly used, we apply the SOTA model SimCSE (Gao et al., 2021) to compute the similarity between each utterance and the question. However, directly using the top similar utterances produces an extremely low recall. Therefore, we also add the utterances around every picked top utterance as key utterances, as in our method. We consider the top-3 similar utterances and 4 context utterances around them. The results are illustrated in Fig. 3. As shown in Fig. 3, choosing the top-3 units for our key utterances does not hurt the recall much and keeps the average size of the key utterance set at 4.13 for FriendsQA and 3.61 for Molweni.⁴ Compared with our method, SelfSuper achieves a considerably lower recall for answer-contained utterance extraction, which indicates the efficacy of our method. As for SimCSE, equipped with our enhancement, it can achieve recall competitive with ours, especially on the dev and test sets of Molweni. However, the average size of the key utterance set of SimCSE is 7.73, whereas the average length of a dialogue in Molweni is 8.82.
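The similarity-based seeker compared against here can be sketched as follows. The embedding function is a placeholder for any SimCSE-style sentence encoder, and the interpretation of "4 context utterances" as two on each side of every picked utterance is our assumption.

```python
import numpy as np

def simcse_style_key_utterances(question, utterances, embed_fn, top_k=3, n_context=4):
    """Pick the top-k most question-similar utterances, plus surrounding context.

    embed_fn: any sentence-embedding callable mapping a list of strings to an
    (N, d) array (e.g., a SimCSE encoder); it is a stand-in, not a specific API.
    """
    embs = embed_fn([question] + utterances)
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    sims = embs[1:] @ embs[0]                     # cosine similarity to the question
    picked = np.argsort(-sims)[:top_k]
    key = set()
    for i in picked:
        lo = max(0, int(i) - n_context // 2)
        hi = min(len(utterances), int(i) + n_context // 2 + 1)
        key.update(range(lo, hi))
    return sorted(key)

def recall_of_answer_utterances(key_sets, answer_utt_ids):
    """Fraction of questions whose answer-containing utterance is covered."""
    hits = sum(1 for key, ans in zip(key_sets, answer_utt_ids) if ans in key)
    return hits / max(len(answer_utt_ids), 1)
```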
Additionally, SimCSE extracts key utterances for every question regardless of its answerability, leading to a low recall on the Molweni training set. To further show that our method is more suitable for DRC than SimCSE, we run a variant of our model with key utterances extracted by SimCSE. The results are shown in Tab. 5. Our method achieves better performance with high coverage of answer-contained utterances and fewer key utterances.

⁴The average length of dialogues in FriendsQA is 21.92 and 7.73 in Molweni.

![7_image_1.png](7_image_1.png)

Table 5: Results of our method and the variant with SimCSE searching for key utterances.

![7_image_2.png](7_image_2.png)

## 5.4 Improvement On Questions With Speakers

As QuISG focuses on question speaker information and dialogue interlocutor scope modeling, it is crucial to verify whether it helps answer questions that mention speaker names. In Fig. 4, we illustrate the F1 scores of questions containing different speakers in FriendsQA, as well as of questions with or without mentioned speakers. We can see that SelfSuper outperforms our method only on "Rachel" and is slightly better on "Joey" and "Monica". Our method outperforms SelfSuper by a large margin on "Ross", "Phoebe", "Chandler", and the other cast members. Furthermore, our method improves the F1 score of speaker-containing questions by a wider margin than that of questions without speakers. This indicates the benefit of the speaker modeling in our proposed method.

## 5.5 Case Study

At the very beginning of the paper, Fig. 1 provides two cases in which SelfSuper fails. In contrast, thanks to our proposed key utterance extractor and QuISG, our method answers both questions correctly.

## 6 Conclusion

To cover more key utterances and to make the model aware of speaker information in the question and interlocutor scopes in the dialogue for DRC, we propose a new pipeline method. The method first adopts a new key utterance extractor that treats contiguous utterances as a unit for prediction. Based on the utterances of the extracted units, a Question-Interlocutor Scope Realized Graph (QuISG) is constructed. QuISG represents speakers mentioned in the question as question speaker nodes and connects each speaker node in the dialogue with the words from its scope. Our proposed method achieves decent performance on the related benchmarks.

## Limitation

As our method does not focus on dealing with unanswerable questions, it may not show a great advantage over other methods when there are many unanswerable questions. How to better recognize this type of question, avoid devoting unnecessary modeling to them, and thereby obtain more accurate graph modeling for answerable questions is left to future work. Besides, our speaker modeling favors questions that focus on speakers, and it may show limited improvement if a dataset contains few speaker-related questions. However, speakers are key roles in dialogues, and therefore questions about speakers naturally appear frequently in DRC. The applicability of our key utterance extraction method to other QA fields remains unknown; extending it to other reading comprehension tasks such as NarrativeQA (Kociský et al., 2018) can be future work. Our method does not involve additional knowledge, such as speakers' co-reference and relations (Liu et al., 2020), discourse structures of dialogues (Li et al., 2021; Ma et al., 2021), and decoupled bidirectional information in dialogues (Li et al., 2022). These types of knowledge, which are orthogonal to our work, are key components of dialogues.
Therefore, making full use of the additional knowledge in dialogues with our graph modeling can be an interesting direction to explore. ## Acknowledgement This work was supported by National Natural Science Foundation of China (No. 61976207). ## References Xuefeng Bai, Yulong Chen, Linfeng Song, and Yue Zhang. 2021. Semantic representation for dialogue modeling. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on* Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 4430–4445. Association for Computational Linguistics. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In *LAW-ID@ACL*, pages 178–186. The Association for Computer Linguistics. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac: Question answering in context. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018*, pages 2174–2184. Association for Computational Linguistics. Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. In *8th International Conference on* Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Ramon Alfonso Villa Cox, Sumeet Kumar, Matthew Babcock, and Kathleen M. Carley. 2020. Stance in replies and quotes (SRQ): A new dataset for learning stance in twitter conversations. *CoRR*, abs/2006.00691. Yuwei Fang, Siqi Sun, Zhe Gan, Rohit Pillai, Shuohang Wang, and Jingjing Liu. 2020. Hierarchical graph network for multi-hop question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 8823– 8838. Association for Computational Linguistics. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 6894– 6910. Association for Computational Linguistics. Deepanway Ghosal, Navonil Majumder, Soujanya Poria, Niyati Chhaya, and Alexander F. Gelbukh. 2019. Dialoguegcn: A graph convolutional neural network for emotion recognition in conversation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 154–164. Association for Computational Linguistics. Taichi Ishiwatari, Yuki Yasuda, Taro Miyazaki, and Jun Goto. 2020. Relation-aware graph attention networks with relational position encodings for emotion recognition in conversations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 7360–7370. Association for Computational Linguistics. Junfeng Jiang, Chengzhang Dong, Akiko Aizawa, and Sadao Kurohashi. 2023. Superdialseg: A large-scale dataset for supervised dialogue segmentation. *CoRR*, abs/2305.08371. 
Tomás Kociský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. *Trans. Assoc. Comput.* Linguistics, 6:317–328. Changmao Li and Jinho D. Choi. 2020. Transformers to learn hierarchical contexts in multiparty dialogue for span-based question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5709–5714. Association for Computational Linguistics. Jiaqi Li, Ming Liu, Min-Yen Kan, Zihao Zheng, Zekun Wang, Wenqiang Lei, Ting Liu, and Bing Qin. 2020. Molweni: A challenge multiparty dialogues-based machine reading comprehension dataset with discourse structure. In *Proceedings of the 28th International Conference on Computational Linguistics,* COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 2642–2652. International Committee on Computational Linguistics. Jiaqi Li, Ming Liu, Zihao Zheng, Heng Zhang, Bing Qin, Min-Yen Kan, and Ting Liu. 2021. Dadgraph: A discourse-aware dialogue graph neural network for multiparty dialogue machine reading comprehension. In *International Joint Conference on Neural* Networks, IJCNN 2021, Shenzhen, China, July 18-22, 2021, pages 1–8. IEEE. Yiyang Li and Hai Zhao. 2021. Self- and pseudo-selfsupervised prediction of speaker and key-utterance for multi-party dialogue reading comprehension. In Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November, 2021, pages 2053–2063. Association for Computational Linguistics. Yiyang Li, Hai Zhao, and Zhuosheng Zhang. 2022. Back to the future: Bidirectional information decoupling network for multi-turn dialogue modeling. CoRR, abs/2204.08152. Jian Liu, Dianbo Sui, Kang Liu, and Jun Zhao. 2020. Graph-based knowledge integration for question answering over dialogue. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 2425–2435. International Committee on Computational Linguistics. Xinbei Ma, Zhuosheng Zhang, and Hai Zhao. 2021. Enhanced speaker-aware multi-party multi-turn dialogue comprehension. *CoRR*, abs/2109.04066. Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2019. MELD: A multimodal multi-party dataset for emotion recognition in conversations. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 527–536. Association for Computational Linguistics. Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics:* System Demonstrations, ACL 2020, Online, July 5-10, 2020, pages 101–108. Association for Computational Linguistics. Libo Qin, Zhouyang Li, Wanxiang Che, Minheng Ni, and Ting Liu. 2021. Co-gat: A co-interactive graph attention network for joint dialog act recognition and sentiment classification. 
In *Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, ThirtyThird Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021*, pages 13709–13717. AAAI Press. Lin Qiu, Yunxuan Xiao, Yanru Qu, Hao Zhou, Lei Li, Weinan Zhang, and Yong Yu. 2019. Proceedings of the 57th conference of the association for computational linguistics, ACL 2019, florence, italy, july 28august 2, 2019, volume 1: Long papers. In ACL, pages 6140–6150. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2383–2392. The Association for Computational Linguistics. Siva Reddy, Danqi Chen, and Christopher D. Manning. 2018. Coqa: A conversational question answering challenge. *CoRR*, abs/1808.07042. Yisi Sang, Xiangyang Mou, Mo Yu, Shunyu Yao, Jing Li, and Jeffrey Stanton. 2022. Tvshowguess: Character comprehension in stories as speaker guessing. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 4267–4287. Association for Computational Linguistics. Weizhou Shen, Siyue Wu, Yunyi Yang, and Xiaojun Quan. 2021. Directed acyclic graph network for conversational emotion recognition. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 1551–1560. Association for Computational Linguistics. Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A challenge dataset and models for dialogue-based reading comprehension. *Trans. Assoc. Comput. Linguistics*, 7:217–231. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. Commonsenseqa: A question answering challenge targeting commonsense knowledge. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4149–4158. Association for Computational Linguistics. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2017. Graph attention networks. *CoRR*, abs/1710.10903. Linzi Xing and Giuseppe Carenini. 2021. Improving unsupervised dialogue topic segmentation with utterance-pair coherence scoring. In *Proceedings* of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGdial 2021, Singapore and Online, July 29-31, 2021, pages 167–177. Association for Computational Linguistics. Tianqing Yang, Tao Wu, Song Gao, and Jingzong Yang. 2023. Dialogue logic aware and key utterance decoupling model for multi-party dialogue reading comprehension. *IEEE Access*, 11:10985–10994. Zhengzhe Yang and Jinho D. Choi. 2019. Friendsqa: Open-domain question answering on TV show transcripts. In *Proceedings of the 20th Annual SIGdial* Meeting on Discourse and Dialogue, SIGdial 2019, Stockholm, Sweden, September 11-13, 2019, pages 188–197. 
Association for Computational Linguistics. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,* Brussels, Belgium, October 31 - November 4, 2018, pages 2369–2380. Association for Computational Linguistics. Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. 2021. QA-GNN: reasoning with language models and knowledge graphs for question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 535–546. Association for Computational Linguistics. Dian Yu, Kai Sun, Claire Cardie, and Dong Yu. 2020. Dialogue-based relation extraction. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4927–4940. Association for Computational Linguistics. Xingyu Zhu, Jin Wang, and Xuejie Zhang. 2022. An enhanced key-utterance interactive model with decouped auxiliary tasks for multi-party dialogue reading comprehension. In International Joint Conference on Neural Networks, IJCNN 2022, Padua, Italy, July 18-23, 2022, pages 1–8. ## A Computation Resource And Other Setup We use a piece of NVIDIA GeForce 3090 whose memory size is 24GB. All experiments require memory that is not more than 24GB. It takes 10-25 minutes for our model to finish an epoch training. As for other hyperparameters used in our experiment, we follow Li and Zhao (2021) to set the learning rate to 4e-6 for FriendsQA and search learning rate from [1.4e-5, 1.2e-5, 1e-5, 8e-6] for Molweni (Molweni-A). The batch size is set to 4 for FriendsQA and 8 for Molweni (Molweni-A). The number of epochs is set to 3 for FriendsQA and 5 for Molweni (Molweni-A). The evaluation is made every 1/5 epoch for FriendsQA and 1/2 epoch for Molweni (Molweni-A). For GAT, the dropout is set to 0.1. During the training process, the learning rate linearly warms up with the portion of 0.01 to all steps and then linearly decays to zero. AdamW with adam epsilon of 1e-6 is utilized as the optimizer. 4 runs are adapted and the max one is picked. For the utilization of SimCSE, *Transformers* version of sup-simcse-roberta-large, which achieves the best performance among all SimCSE variants on Avg. STS, is picked. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Sec. 7 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abs, Sec. 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Sec. 3.4.1, Sec. 4 ✓ B1. Did you cite the creators of artifacts you used? Sec. 3.4.1, Sec. 4 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? 
For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Sec. 4 ## C ✓ **Did You Run Computational Experiments?** Appendix A ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix A The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Sec. 4, Appendix A ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Sec.4, Appendix A ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Sec.3.4.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
chen-etal-2023-speech
Speech-to-Speech Translation for a Real-world Unwritten Language
https://aclanthology.org/2023.findings-acl.307
We study speech-to-speech translation (S2ST) that translates speech from one language into another language and focuses on building systems to support languages without standard text writing systems. We use English-Taiwanese Hokkien as a case study, and present an end-to-end solution from training data collection, modeling choices to benchmark dataset release. First, we present efforts on creating human annotated data, automatically mining data from large unlabeled speech datasets, and adopting pseudo-labeling to produce weakly supervised data. On the modeling, we take advantage of recent advances in applying self-supervised discrete representations as target for prediction in S2ST and show the effectiveness of leveraging additional text supervision from Mandarin, a language similar to Hokkien, in model training. Finally, we release an S2ST benchmark set to facilitate future research in this field.
# Speech-To-Speech Translation For A Real-World Unwritten Language

Peng-Jen Chen1, Kevin Tran1, Yilin Yang1, Jingfei Du1, Justine Kao1, Yu-An Chung1, Paden Tomasello1, Paul-Ambroise Duquenne12, Holger Schwenk1, Hongyu Gong1, Hirofumi Inaguma1, Sravya Popuri1, Changhan Wang1, Juan Pino1, Wei-Ning Hsu1, Ann Lee1

1Meta AI, 2Inria

{pipibjc,annl}@meta.com

## Abstract

We study speech-to-speech translation (S2ST) that translates speech from one language into speech in another language and focuses on building systems to support languages without standard text writing systems. We use English↔Taiwanese Hokkien as a case study, and present an end-to-end solution from training data collection, modeling choices to benchmark dataset release. First, we present efforts on creating human annotated data, automatically mining data from large unlabeled speech datasets, and adopting pseudo-labeling to produce weakly supervised data. On the modeling, we take advantage of recent advances in applying self-supervised discrete representations as target for prediction in S2ST and show the effectiveness of leveraging additional text supervision from Mandarin, a language similar to Hokkien, in model training. Finally, we release an S2ST benchmark set to facilitate future research in this field.1 2

1The demo can be found at https://huggingface.co/spaces/facebook/Hokkien_Translation.

2We open source our code and model at https://github.com/facebookresearch/fairseq/tree/ust/examples/hokkien.

## 1 Introduction

Speech-to-speech translation (S2ST) aims at translating speech from one language into speech in another language. S2ST technology can not only enable communication between people speaking different languages but also help knowledge sharing across the world. While more than 40% of the languages in the world do not have text written forms,3 S2ST for unwritten languages still remains a research area with little exploration mainly due to the lack of training data. The majority of the previous work on this topic conducts experiments on datasets built from applying TTS on S2T corpora to generate synthetic target speech for model training (Tjandra et al., 2019; Zhang et al., 2021). Lee et al. (2022b) presents the first textless S2ST system trained on real S2ST data, while it only investigates translation between high-resource and similar language pairs (English↔Spanish, English↔French).

3https://www.ethnologue.com

In this work, we take Taiwanese Hokkien as an example of an unwritten language and study S2ST between English (En) and Taiwanese Hokkien. Taiwanese Hokkien (hereafter Hokkien) is one of the official languages in Taiwan spoken by over 70% of the population (approximately 15.8 million people). Hokkien lacks a unified writing system that is widely adopted by its native speakers, though a few possible writing systems exist, e.g. Chinese characters (Hanji), or romanization systems such as Pe̍h-ōe-jī (POJ) and Tâi-lô, etc. In addition, Hokkien is a tonal language that has complex tone sandhi rules (Cheng, 1968). Wang et al. (2004) investigates Mandarin-Taiwanese Hokkien S2ST with a cascaded template matching approach. In our work, we focus on En↔Hokkien, a distant language pair, and build one-stage S2ST systems. We take advantage of the discrete unit-based S2ST approach (Lee et al., 2022a) to translate source speech into target discrete units, where we convert the target speech into a sequence of integers by a self-supervised speech encoder.
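As a concrete illustration of this discretization step (detailed later in Sec. 3.1.1 and 4.2.1), the sketch below extracts frame-level features from one HuBERT layer and quantizes them with a k-means model. The public checkpoint name and the quantizer path are placeholders for illustration only, not the Hokkien HuBERT or K=2500 quantizer trained in this work.

```python
import torch
import joblib                      # the fitted k-means quantizer is assumed saved with joblib
from itertools import groupby
from transformers import HubertModel

def speech_to_units(waveform_16khz, kmeans_path="km_2500.bin",
                    hubert_name="facebook/hubert-base-ls960", layer=12):
    """Illustrative speech-to-unit conversion: HuBERT features from one layer, k-means quantized.

    `hubert_name` and `kmeans_path` are stand-ins; input normalization is omitted for brevity.
    """
    model = HubertModel.from_pretrained(hubert_name).eval()
    kmeans = joblib.load(kmeans_path)

    wav = torch.as_tensor(waveform_16khz, dtype=torch.float32).unsqueeze(0)  # (1, samples)
    with torch.no_grad():
        hidden = model(wav, output_hidden_states=True).hidden_states[layer]  # (1, frames, dim)
    units = kmeans.predict(hidden[0].numpy())         # one unit id per 20 ms frame
    return [int(u) for u, _ in groupby(units)]        # drop consecutive duplicates
```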
First, to support En→Hokkien translation, we extend HuBERTbased discrete unit extraction (Hsu et al., 2021) and examine the feasibility of unit-to-waveform generation (Polyak et al., 2021) for tonal languages. Second, we leverage the unit-based speech normalization technique proposed in Lee et al. (2022b) to remove the non-linguistic variations in speech from multiple speakers. The original study takes advantage of synthetic speech generated from TTS as the reference target for normalization, while we build the normalizer with real Hokkien speech data. Last but not least, we study two S2ST model training strategies, speech-to-unit translation (S2UT) with a single decoder (Lee et al., 2022a) or a two-pass decoding process (Inaguma et al., 2022) that leverages Mandarin (Zh) as a written language similar to Hokkien to provide extra text supervision. As no En↔Hokkien S2ST dataset is available, we also leverage Mandarin to assist the S2ST data creation process and create a 60-hr human annotated training set and an open benchmark set. Nevertheless, this is still a low-resource problem. To tackle the data scarcity issue, we further apply En↔Zh MT to create weakly supervised data (Popuri et al., 2022; Dong et al., 2022) and learn a joint embedding space for English and Hokkien through Mandarin to support data mining from unlabeled English and Hokkien data (Duquenne et al., 2021). The contributions of this work are as follows: - We present empirical studies that consolidate various state-of-the-art techniques for S2ST that were previously studied in a controlled setup with synthetic speech and verify their effectiveness in En↔Hokkien translation, where Hokkien is a language without a widely adopted standard text writing system. - A benchmark set on En↔Hokkien S2ST 4 and the evaluation model for Hokkien speech 5 will be released to encourage future research in this direction. - To the best of our knowledge, we are the first to build one-stage S2ST systems for an unwritten language in a real-world scenario. ## 2 Related Work Conventionally, S2ST can be achieved via the concatenation of three systems: automatic speech recognition (ASR), machine translation (MT) and text-to-speech synthesis (TTS) (Lavie et al., 1997; Nakamura et al., 2006). In recent years, the advancement from end-to-end speech-to-text translation (S2T) (Bérard et al., 2016) or text-to-speech translation (T2ST) (Zhang et al., 2021; Lee et al., 2022a) have simplified the S2ST pipeline into two stages, which reduces error propagation issues and improves efficiency (Lee et al., 2022a). Most recently, researchers have built one-stage S2ST systems that can be categorized in several aspects. First, systems that model directly from source to target speech, with Jia et al. (2019, 2022a,b) predicting spectrogram outputs directly, and Lee et al. (2022a,b); Huang et al. (2022); Popuri et al. (2022); Inaguma et al. (2022) leverage self-supervised speech model such as HuBERT (Hsu et al., 2021) to encode the target speech into a sequence of discrete units and apply knowledge from speech-totext modeling to S2ST. Second, the textless setup, where Jia et al. (2019, 2022b) require extra supervision from target text or phonemes during model training, while Tjandra et al. (2019); Lee et al. (2022b); Popuri et al. (2022) show the possibility of model training with speech data only without going through text. Finally, multiple decoders with multi-pass decoding, where Kano et al. (2021); Inaguma et al. 
(2022) concatenate multiple decoders learned with additional text targets or speech units with different granularity and perform multi-pass decoding during inference.

While the modeling choices vary, S2ST model training often faces the challenge of data scarcity. Jia et al. (2022c) applies high-quality English TTS and creates an X→En S2ST dataset with synthetic target speech for 21 languages. To create S2ST datasets with real speech, Wang et al. (2021a) aligns ASR transcripts for more than 100 language pairs, and Duquenne et al. (2022a) applies distance-based bitext mining to audio, producing a mined S2ST dataset between 17 European languages. Weakly supervised data created from TTS (Jia et al., 2022a) or a cascaded pipeline with ASR and MT models (Dong et al., 2022; Popuri et al., 2022) is often combined with the S2ST data. In addition, self-supervised pre-training with large-scale unlabeled data also effectively improves S2ST model performance (Jia et al., 2022a; Popuri et al., 2022).

## 3 Methodology

In this section, we first present two types of backbone architectures for S2ST modeling. Then, we describe our efforts on creating parallel S2ST training data from human annotations as well as leveraging speech data mining (Duquenne et al., 2021) and creating weakly supervised data through pseudo-labeling (Popuri et al., 2022; Jia et al., 2022a).

## 3.1 Model Architectures

![2_image_0.png](2_image_0.png)

As illustrated in Fig. 1, we study one model architecture that applies a single-pass decoding process and directly translates source speech to the target, and the second one relies on target text (Mandarin text in the case of Hokkien speech) to provide extra supervision and performs two-pass decoding. Both architectures predict discrete units as the target, and the speech encoder and text or unit decoders are pre-trained with unlabeled speech or text data.

## 3.1.1 Speech-To-Unit Translation (S2Ut)

We follow the S2UT approach proposed in Lee et al. (2022a) and adopt HuBERT (Hsu et al., 2021) to convert target speech into discrete units via k-means on intermediate representations. While Hokkien→En systems can be trained on target English speech generated from single-speaker TTS to remove variations in accents from multiple speakers or noises from different recording conditions, when training En→Hokkien systems, we first apply a unit-based speech normalizer (Lee et al., 2022b) on the real Hokkien target speech. The speech normalizer is built by applying Connectionist Temporal Classification (CTC) (Graves et al., 2006) finetuning with the Hokkien HuBERT model using multi-speaker speech as input and the corresponding discrete units extracted from real Hokkien speech from a reference speaker as target. The resulting S2ST system consists of a sequence-to-sequence S2UT model and a unit-based HiFi-GAN vocoder (Polyak et al., 2021) for unit-to-waveform conversion.

For both model architectures, we pre-train the speech encoder with Conformer-based (Gulati et al., 2020) wav2vec 2.0 (Baevski et al., 2020; Popuri et al., 2022) using a large amount of unlabeled speech. To speed up model training, we replace the multilayer convolutional feature encoder with the precomputed 80-dimensional log-mel filterbank features. Preliminary experiments show no performance degradation with filterbank input.

## 3.1.2 Single-Pass Decoding S2Ut

Lee et al. (2022a) proposes to use a single unit decoder, which can be trained with standard cross-entropy loss. Following Popuri et al.
(2022), we apply mBART training (Liu et al., 2020), a denoising autoencoder trained with monolingual text in multiple languages, using discrete units extracted from unlabeled speech with consecutive duplicate units removed, and use the pre-trained decoder to initialize the unit decoder. During decoding, we perform beam search with the unit decoder.

## 3.1.3 Two-Pass Decoding S2Ut: Unity

The UnitY model (Inaguma et al., 2022) also performs speech-to-unit translation, but it additionally includes a target text decoder and a target-text-to-target-unit encoder-decoder, and incorporates an auxiliary target text prediction task during training. All the modules are trained jointly. In the En→Hokkien direction, we use Mandarin as the target text due to its proximity to Hokkien and the abundance of its text data. We follow Inaguma et al. (2022) to apply R-Drop (Wu et al., 2021) regularization during training, as well as to initialize the target text decoder with a text mBART model (Liu et al., 2020) pre-trained on the combination of En and Zh monolingual text data.

## 3.2 Training Data

In the following sections, we describe three different efforts on creating parallel En↔Hokkien data for model training.

## 3.2.1 Supervised Human Annotated Data

Since En↔Hokkien bilingual speakers are scarce, we use Mandarin as a pivot language during the data creation process whenever possible. We sample from the following data sources and adopt different strategies to create human annotated parallel data: (1) Hokkien dramas, which include Hokkien speech and aligned Mandarin subtitles,6 (2) Taiwanese Across Taiwan (TAT) (Liao et al., 2020b), a Hokkien read speech dataset containing transcripts in Tâi-lô and Hanji, and (3) MuST-C v1.2 En-Zh S2T data (Cattoni et al., 2021). We ask Zh-En bilinguals to translate the subtitles of the Hokkien dramas into English to create Hokkien→En S2T data. For the TAT dataset, we leverage a small group of En↔Hokkien bilinguals to translate the Hokkien speech and transcripts directly into English text. For MuST-C, we ask Zh-Hokkien bilinguals to translate the Mandarin text into a mix of Tâi-lô and Hanji script and then record the Hokkien speech.7 The non-standardized script helps to improve the fluency and accuracy of the recorded Hokkien speech, while no Hokkien transcripts are used during S2ST training.

In the end, we build S2ST training sets, where the En→Hokkien set is from MuST-C. For Hokkien→En training, we convert the English text collected for the Hokkien dramas and TAT, as well as the English transcriptions provided in MuST-C, into units by applying an English text-to-unit (T2U) model (Lee et al., 2022b), a sequence-to-sequence Transformer model trained with English characters as input and units extracted from the corresponding speech as target.

## 3.2.2 Mined Data

To build a shared embedding space for Hokkien and English speech and text data for performing speech-to-text or speech-to-speech mining at scale, we again take advantage of Mandarin text as the bridge between the two languages. First, to encode En and Zh text in the same embedding space, we apply the method proposed in Duquenne et al. (2022b) to finetune XLM-R LARGE (Conneau and Lample, 2019) to fit the LASER (Artetxe and Schwenk, 2019) English text space using Zh-En parallel MT data. Then, we minimize the mean squared error (MSE) loss between the max-pooled output of the learned text encoder and that of a speech encoder using aligned Hokkien speech and Mandarin or English text.8
The text encoder is fixed during speech encoder training, where the latter is initialized with Conformer-based wav2vec 2.0 pre-trained with Hokkien speech, and this process further encodes the Hokkien speech, Mandarin and English text in the same embedding space. Similarly, we also leverage the fixed text encoder to train an En speech encoder using speech and text pairs from En ASR data. In the end, we create a shared embedding space for En speech and text, Mandarin text, and Hokkien speech, which supports En text and Hokkien speech or En speech and Hokkien speech mining based on cosine similarity.

6Hokkien drama data is obtained from the collaboration with National Taiwan University.

7The annotators pointed out that it is easier to leverage both systems, which is further evidence of Hokkien lacking a commonly adopted text writing system.

8A subset of the Hokkien dramas data has English subtitles.

## 3.2.3 Weakly Supervised Data

We take advantage of cascaded systems to create weakly supervised data from ASR and S2T data (Popuri et al., 2022; Dong et al., 2022). For En→Hokkien, we apply En→Zh MT on the En ASR transcriptions, followed by a Zh→Hokkien text-to-unit-translation (T2UT) model, which is a Transformer-based sequence-to-sequence model trained with Mandarin characters as input and the corresponding Hokkien normalized units as targets. For Hokkien→En, we apply the Zh→En MT model on the Hokkien drama Mandarin subtitles, followed by the En T2U model to create pseudo-labeled data.

## 4 Experimental Setup

In this section, we describe the data and model training details, as well as the baseline systems and the evaluation protocol. All experiments are conducted using fairseq (Ott et al., 2019).

## 4.1 Data

## 4.1.1 Supervised Human Annotated Data

We carry out the annotation process in Sec. 3.2.1, and Table 4 summarizes the statistics of the training data. In the end, we create a 61.4-hr human annotated training set for Hokkien→En, and 35-hr for En→Hokkien. We do not combine the synthetic English speech created for Hokkien→En with the real En→Hokkien S2ST dataset during training.

## 4.1.2 TAT-S2ST: En↔Hokkien S2ST Evaluation Dataset

As a part of the effort on creating human annotated data, we also create an En↔Hokkien S2ST benchmark set to facilitate future research in the field. The English text translations we collect for the TAT dev and test sets are proofread first, and we recruit native speakers to record the English text translations, producing En↔Hokkien parallel speech data. Table 5 shows the statistics of this benchmark set. While Hokkien does not have a standardized and widely adopted writing system, TAT provides Tâi-lô transcripts; Tâi-lô is a standardized romanization system for Hokkien, and these transcripts can be leveraged as reference text in evaluation (Sec. 4.4).

## 4.1.3 Mined Data

We train the En and Zh joint text encoder on CCMatrix (Schwenk et al., 2019), the Hokkien speech encoder on Hokkien dramas, and the English speech encoder on English ASR data from CommonVoice (Ardila et al., 2020), CoVoST-2 (Wang et al., 2021b), Europarl-ST (Iranzo-Sánchez et al., 2020), MuST-C (Di Gangi et al., 2019), Voxpopuli (Wang et al., 2021a) and Librispeech (Panayotov et al., 2015). The learning rate is set to 10−4, with an inverse square root schedule. The maximum number of tokens is set to 640k (equivalent to 40 seconds with a 16kHz sampling rate), with the maximum number of sentences set to 32. We train the models with 48 GPUs for 60k steps.
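A minimal sketch of one training step of this speech-encoder fitting is given below; the encoder interfaces and their (batch, length, dim) output shapes are assumptions for illustration, not the fairseq implementation used in this work.

```python
import torch
import torch.nn.functional as F

def distillation_step(speech_encoder, text_encoder, speech_batch, text_batch, optimizer):
    """One step of fitting the speech encoder to the frozen text-embedding space.

    speech_encoder: trainable student (e.g., Conformer wav2vec 2.0), returns (B, T, D) states.
    text_encoder: frozen teacher in the LASER-compatible space, returns (B, L, D) states.
    """
    speech_encoder.train()
    with torch.no_grad():                           # the text encoder stays fixed
        text_emb = text_encoder(text_batch).max(dim=1).values    # max-pool over tokens

    speech_emb = speech_encoder(speech_batch).max(dim=1).values  # max-pool over frames

    loss = F.mse_loss(speech_emb, text_emb)         # pull speech into the shared text space
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once trained, sentence-level speech and text embeddings from this shared space can be compared directly with cosine similarity, which is what enables the mining described next.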
With the trained text and speech encoders, we perform data mining between Hokkien speech from Hokkien dramas and English Common Crawl text, and between the former and Librivox English audio.9 We post-process the mined data in order to have a maximum of 20% overlap between any two audio segments. In the end, we obtain 8.1k-hr of Hokkien→En S2T mined data and 197-hr of En↔Hokkien S2ST mined data. The difference in volume is mainly due to the domain mismatch between the Librivox audiobooks and the Hokkien dramas.

9https://librivox.org/api/

## 4.1.4 Weakly Supervised Data

For En→Hokkien, we apply En→Zh MT on the combination of the English transcripts from Librispeech (Panayotov et al., 2015) and TEDLIUM3 (Hernandez et al., 2018), totaling 1.5k-hr of English speech. The En→Zh MT model is a 12-layer Transformer model trained on CCMatrix (Schwenk et al., 2019) using disjoint BPEs for En and Zh encoded by the sentencepiece toolkit (Kudo and Richardson, 2018), each of size 32768. We use 16 GPUs, a batch size of 14,336 tokens and a learning rate of 10−3 during training. The Zh→Hokkien T2UT model following the En→Zh translation step is trained on Hokkien dramas and the aligned Mandarin subtitles. We filter out speech containing Mandarin code-switching, by applying Mandarin ASR and computing the Levenshtein distance between the ASR output and the subtitles, as well as short sentences with fewer than three characters, resulting in 1k-hr of Hokkien speech for training.

For Hokkien→En, we apply Zh→En MT on the Mandarin subtitles from the 8k-hr Hokkien drama data, followed by an En T2U model trained on LJSpeech (Ito and Johnson, 2017). The Zh→En MT is trained with the same setup as the En→Zh MT.

## 4.2 Model Training

## 4.2.1 Hokkien HuBERT Units

To encode En target speech, we use the multilingual HuBERT model, the k-means quantizer and the unit vocoder released by Lee et al. (2022b). Below we focus on how we build Hokkien units and the corresponding unit-based speech normalizer and unit vocoder.

We train a Hokkien HuBERT model using the combination of 10k-hr of Mandarin speech from WenetSpeech (Zhang et al., 2022) and 2k-hr of Hokkien speech from the combination of Hokkien dramas, TAT and 600-hr of Hokkien speech with various accents in addition to Taiwanese Hokkien, licensed from SpeechOcean10. When modeling Hokkien speech as discrete units, we empirically find that combining Mandarin with Hokkien speech during HuBERT training allows the units to better capture the tones and produce higher-quality speech output in the unit-to-waveform conversion stage. The HuBERT model is of the BASE architecture and is pre-trained for three iterations following Hsu et al. (2021); Lakhotia et al. (2021). At the beginning of each iteration, we randomly sample 300-hr of Mandarin and Hokkien speech, respectively, for k-means clustering, and apply temperature sampling to balance the amount of speech from the two languages during training. We use T = 20, and the probability of sampling from a language $l$ is $\tilde{p}_{l}=\frac{p_{l}^{1/T}}{\sum_{i}p_{i}^{1/T}}$, where $p_{i}=\frac{n_{i}}{\sum_{j}n_{j}}$ and $n_{i}$ is the number of samples from language $i$. No extra language information is required during pre-training. In each iteration, the model weights are randomly initialized and optimized for 400k steps. We use K = 2500 with features from the 12-th layer of the model from the third iteration for extracting Hokkien units.

The Hokkien speech normalizer is trained on 2-hr of speech from TAT.
We select speaker *THF022* as the reference speaker, i.e., the normalization target, and create speech pairs by sampling from other speakers reading the same content in TAT. We use a mask probability of 0.5, a mask channel probability of 0.25 and a learning rate of 3 × 10−5, and train for 25k updates. Finally, the Hokkien unit-based HiFi-GAN vocoder is trained on the TTS subset of the TAT dataset, which contains a total of 36 hours of clean speech from two male and two female speakers, following the training procedure in Lee et al. (2022a).

10https://en.speechocean.com/

## 4.2.2 Wav2Vec 2.0 Encoder

We pre-train the Conformer En wav2vec 2.0 LARGE encoder (Baevski et al., 2020) with the Libri-light corpus (Kahn et al., 2020), which contains around 54k hours of read speech audio. The encoder is trained with a batch size of 2.1-hr for 1M updates, with 32k warmup steps and a peak learning rate of 5 × 10−4. For masking, we sample a probability of 0.065 of all time-steps to be starting indices and mask the subsequent 10 time steps. For the Hokkien wav2vec 2.0 encoder, we pre-train it with 30k-hr of Hokkien drama data using the same hyper-parameters as the En wav2vec 2.0 encoder.

## 4.2.3 Single-Pass Decoding S2Ut

The Hokkien unit mBART is trained with 30k-hr of Hokkien dramas and 10k-hr of Mandarin data from WenetSpeech. The model is trained on 64 GPUs with a batch size of 3072 units, a learning rate of 3 × 10−4 with Adam and 10k warmup steps. The model is trained for 500k updates with dropout 0.1. We use the En unit mBART released by Popuri et al. (2022) for training Hokkien→En models. With the pre-trained wav2vec 2.0 encoder and the unit mBART decoder, we follow the best finetuning strategy in Popuri et al. (2022), where the whole encoder, as well as the LayerNorm and both the encoder-attention and self-attention in the decoder, are finetuned with the parallel S2ST data. The models are trained on 32 GPUs with a batch size of 160k tokens. We use 0.1 dropout for all models and 0.2 LayerDrop (Fan et al., 2019). The models are trained using the Adam optimizer with a 3 × 10−4 learning rate, 10k warmup steps and 50k maximum updates.

## 4.2.4 Two-Pass Decoding S2Ut: Unity

The text mBART model is pre-trained on the combination of Mandarin and English text data from CC100 (Conneau et al., 2020), Newscrawl (Akhbardeh et al., 2021), Leipzig Corpora (Goldhahn et al., 2012), and NewsCommentary (Tiedemann, 2012). There are 2B English sentences and 230M Mandarin sentences. We learn a BPE of size 65536 jointly on both languages and apply temperature sampling with 1/T = 0.7 during training. We combine the pre-trained wav2vec 2.0 encoder, the text mBART decoder, and two randomly initialized Transformer layers for the text encoder and the unit decoder, respectively, to build the UnitY model. We train our two-pass models on 16 GPUs with a batch size of 120k tokens and dropout 0.1 for all models, except for the human-annotated-data-only setup where we use dropout 0.3. We use LayerDrop (Fan et al., 2019) 0.1 and label smoothing 0.1, and train the model with a learning rate of 5 × 10−4, 2k warmup steps, and a maximum of 50k update steps. The weight on the auxiliary loss from the text decoder is set to 8.0.

## 4.3 Baselines

We build two-stage and three-stage cascaded baseline systems for both En↔Hokkien directions. The two-stage cascaded system consists of a source speech (En or Hokkien) to target text (Mandarin or En) end-to-end S2T model and a target text to target speech unit T2U model (T2UT in the case of Zh→Hokkien), as sketched below.
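At inference time, this two-stage cascade amounts to chaining the S2T model, the T2U/T2UT model and the unit vocoder; the following sketch is illustrative only, with hypothetical model interfaces rather than the actual fairseq components.

```python
def two_stage_cascade(source_audio, s2t_model, t2u_model, unit_vocoder):
    """Sketch of the two-stage cascaded baseline.

    The three arguments are placeholders for the wav2vec 2.0 + mBART S2T model,
    the T2U/T2UT model, and the unit-based HiFi-GAN vocoder described in Sec. 4;
    their `translate` / `synthesize` interfaces are assumptions for illustration.
    """
    target_text = s2t_model.translate(source_audio)    # En or Hokkien speech -> Mandarin or En text
    target_units = t2u_model.translate(target_text)    # target text -> discrete speech units
    return unit_vocoder.synthesize(target_units)       # units -> target-language waveform
```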
The three-stage cascaded system further breaks down the En→Zh S2T model into En ASR followed by En→Zh MT, and the Hokkien→En S2T model is split into a Hokkien→Zh S2T step and a Zh→En MT step. All the speech encoders for the En ASR and S2T models are initialized with wav2vec 2.0 (Sec. 4.2.2). The text decoders of S2T models are initialized with the text mBART (Sec. 4.2.4). We use the En↔Zh MT models, the En T2U model and the Zh→Hokkien T2UT model described in Sec. 4.1.4 for building the cascaded systems. ## 4.4 Evaluation To evaluate the translation quality, we compute ASR-BLEU on the TAT-S2ST evaluation set (Sec. 4.1.2) by applying ASR on the generated speech and computing 4-gram BLEU against the reference text using SACREBLEU (Post, 2018). We use an open-sourced En ASR model11 when evaluating Hokkien→En systems. For En→Hokkien systems, we build an ASR model to transcribe Hokkien speech into Tâilô. The Hokkien ASR is initialized with a w2v- BERT (Chung et al., 2021) LARGE model pretrained on 10k-hr Mandarin speech from WenetSpeech and 30k-hr Hokkien speech from Hokkien drama, followed by finetuning with CTC loss on 480-hr Hokkien speech and Tâi-lô scripts from TAT (Liao et al., 2020b). Each Tâi-lô syllable is split into initial and final with tone as the target. The resulting Hokkien ASR model achieves 6.8% syllable error rate (SER) on the TAT-Vol1-testlavalier set. To evaluate En→Hokkien translation quality, we compute syllable-level ASR-BLEU. To evaluate the naturalness of the speech output, we collect mean opinion scores (MOS) ranges from 1 (the worst) to 5 (the best) from human listening tests. Each item is labeled by three annotators. ## 5 Results 5.1 Single-Pass Vs. Two-Pass Decoding We first study the model architecture choice in both En↔Hokkien directions. Table 1 summarizes the results. We include ASR-BLEU from the target reference speech as an indication of the effect from the unit vocoder and the ASR errors (row 7). We start from training on human annotated data, and it results in very low BLEU score in both directions (row 3, 5), indicating that pre-training, including wav2vec 2.0 and unit or text mBART, is not enough for building a S2ST system under low-resource for distant language pairs. With extra supervision from text, the UnitY model works slightly better than single-pass S2UT by 3.7 BLEU in Hokkien→En (row 3 vs. 5). We then combine the human annotated data with weakly supervised data. Both systems achieve significant gain (6.2-7.5 BLEU) in both directions, indicating the effectiveness of combining self-supervised pre-training and data augmentation with weakly supervised data in low-resource S2ST for a distant language pair. In addition, we find that UnitY outperforms single-pass S2UT in Hokkien→En direction (row 4 vs. 6) by 2.9 BLEU. However, in En→Hokkien, UnitY is merely 0.4 BLEU higher than single-pass S2UT. The larger impact from the additional text supervision in Hokkien→En may be due to the fact that the target text and speech are of the same language, or the larger amount of training data available. As the focus of this work is to present a data creation and model training strategy, we leave the investigation to future work. For the cascaded baselines, the two-stage system is worse than the three-stage system in both En↔Hokkien directions (row 1 vs. 2). Our best one-stage system performs similarly to the best cascaded systems (row 2 vs. 6). 
For MOS, the cascaded systems and single-stage S2UT systems have similar naturalness in both En→Hokkien and Hokkien→En directions. ## 5.2 Mined Data In this section, we study how to leverage mined Hokkien→En S2T and En↔Hokkien S2ST data. ## 5.2.1 Leveraging Mined En↔**Hokkien S2St** in En→**Hokkien direction** In Table 2, we show the results of leveraging the mined En↔Hokkien S2ST data in En→Hokkien direction. In order to train the UnitY model, we apply Hokkien→Zh S2T to generate pseudo-labeled Mandarin text for the mined Hokkien speech as the auxiliary task target. We first train both one-stage models with mined data and the human annotated data. While the single-pass decoding S2UT model still yields very low BLEU score (row 8), the UnitY model achieves 4.8 BLEU improvement with the extra 197-hr of mined S2ST data (row 5 vs. 10), showing that noisy Mandarin text generated from pseudo-labeling still provides useful signals in model training. We then further combine with weakly supervised data but do not see significant gain with the additional mined data (row 4 vs. 9, 6 vs. 11). Note that the size of mined data is only 13% of the total amount of weakly supervised data we have. As discussed in Sec. 4.1.3, the limited amount of mined data available is mainly due to the domain mismatch issue. In the future, we plan to explore mined data from more similar domains and aim to increase the amount of data for better S2ST performance. We convert the mined Hokkien→En S2T data to S2ST data with the En T2U model and train UnitY models with the combination of human annotated data and optionally the 8k-hr weakly supervised data to examine the effect of mined data on model performance. Table 3 shows the ASR-BLEU scores on the TAT-S2ST test set with respect to different thresholds on the similarity scores of the mined pairs. We see that adding 4.7k-hr mined S2T data (t = 1.065) in Hokkien→En is the most helpful and improves the model quality by 3.6 BLEU when only human annotated data is available. With 8.1khr mined data (t = 1.06), the BLEU gain drops | En→Hokkien | Hokkien→En | | | | | | | | | | | |---------------------------------------------------|---------------------------|-----------|---------------|----------|-------------|-------------|-------|--------|------|-------------|-------------| | Training data | ASR-BLEU | MOS | Training data | ASR-BLEU | MOS | | | | | | | | ID | Model | Human | Weakly | Dev | Test | Test | Human | Weakly | Dev | Test | Test | | (35-hr) | (1.5k-hr) | (61.4-hr) | (8k-hr) | | | | | | | | | | Cascaded systems: 1 Three-stage | ✓ | ✓ | 8.9 | 7.5 | 3.54 ± 0.05 | ✓∗∗ | ✓ | 10.7 | 10.0 | 3.22 ± 0.06 | | | 2 | Two-stage | ✓ | ✓ | 8.4 | 6.9 | 3.52 ± 0.05 | ✓ | ✓ | 11.4 | 8.1 | 3.09 ± 0.06 | | Single-stage S2UT systems: 3 Single-pass decoding | ✓ | ✗ | 0.1 | 0.0 | - | ✓ | ✗ | 0.1 | 0.1 | - | | | 4 | Single-pass decoding | ✓ | ✓ | 8.6 | 7.4 | 3.58 ± 0.05 | ✓ | ✓ | 8.1 | 7.1 | 3.06 ± 0.06 | | 5 | Two-pass decoding (UnitY) | ✓ | ✗ | 1.0 | 0.3 | - | ✓ | ✗ | 4.2 | 3.8 | - | | 6 | Two-pass decoding (UnitY) | ✓ | ✓ | 9.3 | 7.8 | 3.69 ± 0.05 | ✓ | ✓ | 11.8 | 10.0 | 3.15 ± 0.06 | | 7 | Synthetic target∗ | ✗ | ✗ | 61.9 | 61.8 | 3.85 ± 0.05 | ✗ | ✗ | 76.4 | 78.5 | 3.24 ± 0.05 | Table 2: Results of En→Hokkien models trained with mined En↔Hokkien S2ST data. We report dev / test ASR-BLEU on TAT-S2ST dataset. 
| Training data | ASR-BLEU | | | | | | |-----------------|-------------|----------|--------|-------|-----|------| | ID | Model | Human | Weakly | Mined | Dev | Test | | (35-hr) | (1.5k-hr) | (197-hr) | | | | | | 3 | ✓ | ✗ | ✗ | 0.1 | 0.0 | | | 8 | Single-pass | ✓ | ✗ | ✓ | 0.1 | 0.1 | | 4 | decoding | ✓ | ✓ | ✗ | 8.6 | 7.4 | | 9 | ✓ | ✓ | ✓ | 7.2 | 7.3 | | | 5 | ✓ | ✗ | ✗ | 1.0 | 0.3 | | | 10 | Two-pass | ✓ | ✗ | ✓ | 5.9 | 5.1 | | 6 | (UnitY) | ✓ | ✓ | ✗ | 9.3 | 7.8 | | 11 | ✓ | ✓ | ✓ | 9.0 | 7.7 | | | Data combined | No filter | t=1.08 | t=1.07 | t=1.065 | t=1.06 | |------------------|-------------|----------|-----------|-----------|-----------| | with mined | (0-hr) | (356-hr) | (2274-hr) | (4732-hr) | (8101-hr) | | human (61.4-hr) | 4.2/3.8 | 8.2/7.1 | 7.6/6.3 | 9.0/7.4 | 6.1/4.7 | | human (61.4-hr) | 11.8/10.0 | 11.6/9.9 | 12.0/10.7 | 12.3/10.5 | 12.2/10.8 | | + weakly (8k-hr) | | | | | | to 0.9 BLEU. In addition, it is 5.3 BLEU lower than the UnitY model trained with human annotated data and 8k-hr of weakly supervised data (Table 1 row 6). As the Hokkien speech for both weakly supervised data and mined data comes from the same Hokkien dramas dataset, the gap implies that pseudo-labeling is a generally effective data augmentation technique for low-resource scenarios, while the quality of the mined data is constrained by the content of the data available for mining. However, combining all three types of data together is still beneficial. We obtain 0.5 BLEU gain by adding 4.7k-hr mined data to the combination of human annotated and weakly supervised data. ## 6 Conclusions We present the first En↔Hokkien S2ST systems, where Hokkien is an oral language that does not have standard and widely adopted text writing systems, i.e. an unwritten language. To tackle the challenges of speech translation for unwritten languages and the lack of parallel training data, we present an end-to-end study. First, we explore three options of training data creation including human annotation, weakly supervised data from pseudolabeling and data mining. Second, we investigate two modeling choices including direct speech-tounit translation with a single speech unit decoder and two-pass decoding that leverages extra supervision from target text. Experimental results show that leveraging a similar high-resource written language (Mandarin in the case of Hokkien) is effective in both the data creation process and model training. Finally, we release the benchmark dataset and ASR evaluation model to facilitate research in this field. In the future, we aim to expand study and establish an S2ST model building strategy that works for a diverse set of unwritten languages. ## 7 Limitation In our research, we have focused on one language pair, English↔Hokkien, and experimenting in both directions. In the future, we plan to apply the same methodology to additional unwritten languages to evaluate its broad applicability. Our approach leverages parallel speech-to-text data between the unwritten language and a linguistically similar written language. There remains a question of whether there are unwritten languages without similar written languages. 
## 8 Acknowledgements We would like to thank Koklioong Loa for consulting on Taiwanese Hokkien; Eric Lam, KaiWei Chang, Iu-thîng Kang and Hung-yi Lee ¯ from National Taiwan University for providing Hokkien drama data; Janice Lam for writing and fact-checking the information about Hokkien language; Brian Bui and Carleigh Wood for coordinating the data annotation effort; Ilia Kulikov for setting up the evaluation script; Pengwei Li for the HuggingFace demo integration. ## References Farhad Akhbardeh, Arkady Arkhangorodsky, Magdalena Biesialska, Ondˇrej Bojar, Rajen Chatterjee, Vishrav Chaudhary, Marta R. Costa-jussa, Cristina España-Bonet, Angela Fan, Christian Federmann, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Leonie Harter, Kenneth Heafield, Christopher Homan, Matthias Huck, Kwabena Amponsah-Kaakyire, Jungo Kasai, Daniel Khashabi, Kevin Knight, Tom Kocmi, Philipp Koehn, Nicholas Lourie, Christof Monz, Makoto Morishita, Masaaki Nagata, Ajay Nagesh, Toshiaki Nakazawa, Matteo Negri, Santanu Pal, Allahsera Auguste Tapo, Marco Turchi, Valentin Vydrin, and Marcos Zampieri. 2021. Findings of the 2021 conference on machine translation (WMT21). In *Proceedings of* the Sixth Conference on Machine Translation, pages 1–88, Online. Association for Computational Linguistics. Rosana Ardila, Megan Branson, Kelly Davis, Michael Kohler, Josh Meyer, Michael Henretty, Reuben Morais, Lindsay Saunders, Francis M. Tyers, and Gregor Weber. 2020. Common voice: A massivelymultilingual speech corpus. In *Proceedings of The* 12th Language Resources and Evaluation Conference, LREC 2020, Marseille, France, May 11-16, 2020, pages 4218–4222. European Language Resources Association. Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. *TACL*, pages 597–610. Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33. Alexandre Bérard, Olivier Pietquin, Laurent Besacier, and Christophe Servan. 2016. Listen and translate: A proof of concept for end-to-end speech-to-text translation. In *NIPS Workshop on end-to-end learning for* speech and audio processing. Roldano Cattoni, Mattia Antonino Di Gangi, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2021. MuST-C: A multilingual corpus for end-to-end speech translation. *Computer Speech & Language*, 66:101155. Robert L. Cheng. 1968. Tone sandhi in taiwanese. *Linguistics*, 41:19–42. Yu-An Chung, Yu Zhang, Wei Han, Chung-Cheng Chiu, James Qin, Ruoming Pang, and Yonghui Wu. 2021. w2v-bert: Combining contrastive learning and masked language modeling for self-supervised speech pre-training. In *2021 IEEE Automatic Speech* Recognition and Understanding Workshop (ASRU), pages 244–250. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440– 8451, Online. Association for Computational Linguistics. Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. *Advances in* neural information processing systems, 32. Mattia A. Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2019. 
MuST-C: a Multilingual Speech Translation Corpus. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2012–2017, Minneapolis, Minnesota. Association for Computational Linguistics. Qianqian Dong, Fengpeng Yue, Tom Ko, Mingxuan Wang, Qibing Bai, and Yu Zhang. 2022. Leveraging pseudo-labeled data to improve direct speech-tospeech translatio. *arXiv preprint arXiv:2205.08993*. Paul-Ambroise Duquenne, Hongyu Gong, Ning Dong, Jingfei Du, Ann Lee, Vedanuj Goswani, Changhan Wang, Juan Pino, Benoît Sagot, and Holger Schwenk. 2022a. Speechmatrix: A large-scale mined corpus of multilingual speech-to-speech translations. Paul-Ambroise Duquenne, Hongyu Gong, Benoît Sagot, and Holger Schwenk. 2022b. T-modules: Translation modules for zero-shot cross-modal machine translation. In *Proceedings of the 2022 Conference* on Empirical Methods in Natural Language Processing, pages 5794–5806, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Paul-Ambroise Duquenne, Hongyu Gong, and Holger Schwenk. 2021. Multimodal and multilingual embeddings for large-scale speech mining. *Advances in* Neural Information Processing Systems, 34. Angela Fan, Edouard Grave, and Armand Joulin. 2019. Reducing transformer depth on demand with structured dropout. *arXiv preprint arXiv:1909.11556*. Dirk Goldhahn, Thomas Eckart, and Uwe Quasthoff. 2012. Building large monolingual dictionaries at the leipzig corpora collection: From 100 to 200 languages. In *Proceedings of the Eighth International* Conference on Language Resources and Evaluation (LREC'12), pages 759–765. Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In *Proceedings of the* 23rd international conference on Machine learning, pages 369–376. Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, and Ruoming Pang. 2020. Conformer: Convolution-augmented transformer for speech recognition. In *Interspeech*. François Hernandez, Vincent Nguyen, Sahar Ghannay, Natalia Tomashenko, and Yannick Esteve. 2018. TED-LIUM 3: twice as much data and corpus repartition for experiments on speaker adaptation. In *International conference on speech and computer*, pages 198–208. Springer. Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. HuBERT: Self-supervised speech representation learning by masked prediction of hidden units. *arXiv preprint arXiv:2106.07447*. Rongjie Huang, Zhou Zhao, Jinglin Liu, Huadai Liu, Yi Ren, Lichao Zhang, and Jinzheng He. 2022. TranSpeech: Speech-to-speech translation with bilateral perturbation. *arXiv preprint arXiv:2205.12523*. Hirofumi Inaguma, Sravya Popuri, Ilia Kulikov, PengJen Chen, Changhan Wang, Yu-An Chung, Yun Tang, Ann Lee, Shinji Watanabe, and Juan Pino. 2022. UnitY: Two-pass direct speech-to-speech translation with discrete units. *arXiv preprint arXiv:2212.08055*. Javier Iranzo-Sánchez, Joan Albert Silvestre-Cerdà, Javier Jorge, Nahuel Roselló, Adrià Giménez, Albert Sanchís, Jorge Civera, and Alfons Juan. 2020. Europarl-ST: A multilingual corpus for speech translation of parliamentary debates. 
In 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2020, Barcelona, Spain, May 4-8, 2020, pages 8229–8233. IEEE. Keith Ito and Linda Johnson. 2017. The LJ speech dataset. Ye Jia, Yifan Ding, Ankur Bapna, Colin Cherry, Yu Zhang, Alexis Conneau, and Nobuyuki Morioka. 2022a. Leveraging unsupervised and weaklysupervised data to improve direct speech-to-speech translation. *arXiv preprint arXiv:2203.13339*. Ye Jia, Michelle Tadmor Ramanovich, Tal Remez, and Roi Pomerantz. 2022b. Translatotron 2: High-quality direct speech-to-speech translation with voice preservation. In International Conference on Machine Learning, pages 10120–10134. PMLR. Ye Jia, Michelle Tadmor Ramanovich, Quan Wang, and Heiga Zen. 2022c. CVSS corpus and massively multilingual speech-to-speech translation. In *Proceedings of the Thirteenth Language Resources and* Evaluation Conference, pages 6691–6703, Marseille, France. European Language Resources Association. Ye Jia, Ron J Weiss, Fadi Biadsy, Wolfgang Macherey, Melvin Johnson, Zhifeng Chen, and Yonghui Wu. 2019. Direct speech-to-speech translation with a sequence-to-sequence model. *Proc. Interspeech* 2019, pages 1123–1127. Jacob Kahn, Morgane Rivière, Weiyi Zheng, Evgeny Kharitonov, Qiantong Xu, Pierre-Emmanuel Mazaré, Julien Karadayi, Vitaliy Liptchinsky, Ronan Collobert, Christian Fuegen, et al. 2020. Libri-light: A benchmark for ASR with limited or no supervision. In *ICASSP*. Takatomo Kano, Sakriani Sakti, and Satoshi Nakamura. 2021. Transformer-based direct speech-to-speech translation with transcoder. In 2021 IEEE Spoken Language Technology Workshop (SLT), pages 958– 965. IEEE. Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics. Kushal Lakhotia, Eugene Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Benjamin Bolte, Tu-Anh Nguyen, Jade Copet, Alexei Baevski, Abdelrahman Mohamed, and Emmanuel Dupoux. 2021. On generative spoken language modeling from raw audio. Transactions of the Association for Computational Linguistics, 9:1336–1354. Alon Lavie, Alex Waibel, Lori Levin, Michael Finke, Donna Gates, Marsal Gavalda, Torsten Zeppenfeld, and Puming Zhan. 1997. JANUS-III: Speech-tospeech translation in multiple languages. In *1997* IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 1, pages 99–102. IEEE. Ann Lee, Peng-Jen Chen, Changhan Wang, Jiatao Gu, Sravya Popuri, Xutai Ma, Adam Polyak, Yossi Adi, Qing He, Yun Tang, et al. 2022a. Direct speech-tospeech translation with discrete units. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3327–3339. Ann Lee, Hongyu Gong, Paul-Ambroise Duquenne, Holger Schwenk, Peng-Jen Chen, Changhan Wang, Sravya Popuri, Juan Pino, Jiatao Gu, and Wei-Ning Hsu. 2022b. Textless speech-to-speech translation on real data. Yuan-Fu Liao, Chia-Yu Chang, Hak-Khiam Tiun, Huang-Lan Su, Hui-Lu Khoo, Jane S. Tsay, LeKun Tan, Peter Kang, Tsun-guan Thiann, Un-Gian Iunn, Jyh-Her Yang, and Chih-Neng Liang. 2020a. Formosa speech recognition challenge 2020 and taiwanese across taiwan corpus. 
In *2020 23rd Conference of the Oriental COCOSDA International Committee for the Co-ordination and Standardisation of* Speech Databases and Assessment Techniques (OCOCOSDA), pages 65–70. Yuan-Fu Liao, Chia-Yu Chang, Hak-Khiam Tiun, Huang-Lan Su, Hui-Lu Khoo, Jane S Tsay, Le-Kun Tan, Peter Kang, Tsun-guan Thiann, Un-Gian Iunn, et al. 2020b. Formosa speech recognition challenge 2020 and Taiwanese across Taiwan corpus. In *2020* 23rd Conference of the Oriental COCOSDA International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques (O-COCOSDA), pages 65–70. IEEE. Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742. Satoshi Nakamura, Konstantin Markov, Hiromi Nakaiwa, Gen-ichiro Kikui, Hisashi Kawai, Takatoshi Jitsuhiro, J-S Zhang, Hirofumi Yamamoto, Eiichiro Sumita, and Seiichi Yamamoto. 2006. The ATR multilingual speech-to-speech translation system. IEEE Transactions on Audio, Speech, and Language Processing, 14(2):365–376. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations. Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: an ASR corpus based on public domain audio books. In *ICASSP*. Adam Polyak, Yossi Adi, Jade Copet, Eugene Kharitonov, Kushal Lakhotia, Wei-Ning Hsu, Abdelrahman Mohamed, and Emmanuel Dupoux. 2021. Speech resynthesis from discrete disentangled selfsupervised representations. Sravya Popuri, Peng-Jen Chen, Changhan Wang, Juan Pino, Yossi Adi, Jiatao Gu, Wei-Ning Hsu, and Ann Lee. 2022. Enhanced direct speech-to-speech translation using self-supervised pre-training and data augmentation. *arXiv preprint arXiv:2204.02967*. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191. Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, and Armand Joulin. 2019. CCMatrix: Mining billions of high-quality parallel sentences on the WEB. *CoRR*, abs/1911.04944. Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In *Proceedings of the Eight International Conference on Language Resources and* Evaluation (LREC'12), Istanbul, Turkey. European Language Resources Association (ELRA). Andros Tjandra, Sakriani Sakti, and Satoshi Nakamura. 2019. Speech-to-speech translation between untranscribed unknown languages. In *2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)*, pages 593–600. IEEE. Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, and Emmanuel Dupoux. 2021a. VoxPopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 993–1003, Online. Association for Computational Linguistics. Changhan Wang, Anne Wu, Jiatao Gu, and Juan Pino. 2021b. CoVoST 2 and massively multilingual speech translation. In *Interspeech*, pages 2247–2251. 
Jhing-Fa Wang, Shun-Chieh Lin, Hsueh-Wei Yang, and Fan-Min Li. 2004. Multiple-translation spotting for Mandarin-Taiwanese speech-to-speech translation. In International Journal of Computational Linguistics & Chinese Language Processing, Volume 9, Number 2, August 2004: Special Issue on New Trends of Speech and Language Processing, pages 13–28. Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, Tie-Yan Liu, et al. 2021. RDrop: Regularized dropout for neural networks. In Proceedings off NeurIPS, volume 34, pages 10890– 10905. Binbin Zhang, Hang Lv, Pengcheng Guo, Qijie Shao, Chao Yang, Lei Xie, Xin Xu, Hui Bu, Xiaoyu Chen, Chenchen Zeng, et al. 2022. Wenetspeech: A 10000+ hours multi-domain mandarin corpus for speech recognition. In *ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal* Processing (ICASSP), pages 6182–6186. IEEE. Chen Zhang, Xu Tan, Yi Ren, Tao Qin, Kejun Zhang, and Tie-Yan Liu. 2021. Uwspeech: Speech to speech translation for unwritten languages. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16):14319–14327. ## A Dataset Stats | Data source | # samples | Source | Target | | |----------------|--------------|--------------|----------------|-----------| | speech (hrs) | speech (hrs) | | | | | Hokkien dramas | 6,125 | 5.8∗ | synthetic | | | Hokkien→En | TAT | 1,673 | 4.6 (74M, 86F) | synthetic | | MuST-C | 13,733 | 51 (8M, 14F) | synthetic | | | En→Hokkien | 35∗ | 51 (8M, 14F) | | | Table 4: Statistics of the human annotated training sets. (M: male, F: female, ∗: no gender information available) | # samples | Duration (hrs) | # speakers | | | |-------------|------------------|---------------|------|---------------| | Dev | En | 722 | 1.62 | 10 (5 M, 5 F) | | Hokkien | 1.46 | 10 (8 M, 2 F) | | | | Test | En | 686 | 1.47 | 10 (5 M, 5 F) | | Hokkien | 1.42 | 10 (3 M, 7 F) | | | Table 5: Statistics of the TAT-S2ST benchmark set. (M: male, F: female) Table 6: Statistics of datasets (train/dev/test splits) used in pre-training, data augmentation and cascade systems. TTS data is used to build the unit vocoder to synthesize waveform from discrete unit. | TTS data is used to build the unit vocoder to synthesize waveform from discrete unit. 
Dataset type # samples source (hrs) | target (hrs) | | | |-----------------------------------------------------------------------------------------------------------------------------|--------------------|-------------------|----| | usage ASR, Hokkien-TaiLo TAT (Liao et al., 2020a) | 133k | 480 | - | | Hokkien HuBERT, Hokkien ASR ASR, Chinese WenetSpeech (Zhang et al., 2022) | 17.8M | 10k | - | | Hokkien HuBERT ASR, En Librispeech (Panayotov et al., 2015) | 282k / 5.6k / 5.5k | 960 / 10.5 / 10.7 | - | | TED-LIUM3 (Hernandez et al., 2018) | 268k / 507 / 1.2k | 452 / 1.6 / 2.6 | - | | Unlabeled Speech, Hokkien Hokkien drama | 26M | 23k | - | | Hokkien HuBERT SpeechOcean | 679k | 597 | - | | Hokkien HuBERT Unlabeled Speech, En VoxPopuli (Wang et al., 2021a) | 1.8M | 14k | - | | Librilight (Kahn et al., 2020) | 18.6M | 60k | - | | Parallel Text, Zh-En & En-Zh CC-Matrix (Schwenk et al., 2019) | 38M | - | - | | MT Unlabelled Text, Zh Newscrawl (Akhbardeh et al., 2021) | 14M | - | - | | Leipzig Corpora (Goldhahn et al., 2012) | 7M | - | - | | NewsCommentary (Tiedemann, 2012) | 0.5M | - | - | | CC-100 (Conneau et al., 2020) | 208M | - | - | | Unlabelled Text, En Newscrawl (Akhbardeh et al., 2021) | 260M | - | - | | NewsCommentary (Tiedemann, 2012) | 0.7M | - | - | | CC-100 (Conneau et al., 2020) | 2.1B | - | - | | TTS, Hokkien TAT-TTS (4 speakers) | 45k | 40 | - | | TTS, English LJSpeech (Ito and Johnson, 2017) | 13.1k | 24 | - | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Left blank. ✗ A2. Did you discuss any potential risks of your work? The speech to speech translation application has been broadly study in the field. In our work, we expand the application to a new unwritten language, Taiwanese-Hokkien to English, and except that we did not enable new application in our work. Therefore, we did not discuss potential risk of our work. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✓ A4. Have you used AI writing assistants when working on this paper? I use translation tool (Google Translate) for checking if my English sentences are fluent. I used it for the limitation section. ## B ✓ **Did You Use Or Create Scientific Artifacts?** use scientific artifacts - we use FAIRSEQ to development our model (section 4), and open source datasets for training and evaluation (full datasets listed in the appendix). Create scientific artifacts - we release our models and benchmark datasets. The link is in the introduction. ✓ B1. Did you cite the creators of artifacts you used? FAIRSEQ - section 4 open source datasets - appendix ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We list the terms for use in our open source page. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? We did not discuss this in the paper, but we did check and specify the the intended use in our open source page. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 
We did not discuss the steps in the paper. But we did remove the content about uniquely identifies individual people and offensive content during the data collection process. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? We provide the demographic groups represented stats in the appendix. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. We provide the relevant stats about the splits, including number of examples and durations in the appendix. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ## C ✓ **Did You Run Computational Experiments?** Section 5 Results. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Yes, we report the number of parameters in each models and the training parameters including number of updates and number of GPUs in section 4. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Yes, we reported the best-found hyper-parameter values in section 4. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Yes, we provide the error bar for the MOS score. For translation quality (BLEU), we have done multiple rounds of experiments but on the resulting table that we only reported one single run. The description is clear that it is single run result. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? We write our own normalization function and we open source it. The link is in the abstraction. ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** It Is In Section 5 For The Mos Score. In The 3.2.1 About The S2St Dataset. ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? We didn't report the full text of instructions to the participants but just describe in high level that what the MOS score meant to cover to measure. ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No, we did not disclose. Our vendor didn't expose the information to us. ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No we didn't discuss, but we did have a consent from the user that we collect the data. ✓ D4. 
Was the data collection protocol approved (or determined exempt) by an ethics review board? We have internal review to make sure we collect data with the right consent. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Yes, we report the demographic in the data set table in appendix.
liu-etal-2023-code
Code Execution with Pre-trained Language Models
https://aclanthology.org/2023.findings-acl.308
Code execution is a fundamental aspect of programming language semantics that reflects the exact behavior of the code. However, most pre-trained models for code intelligence ignore the execution trace and only rely on source code and syntactic structures. In this paper, we investigate how well pre-trained models can understand and perform code execution. We develop a mutation-based data augmentation technique to create a large-scale and realistic Python dataset and task for code execution, which challenges existing models such as Codex. We then present CodeExecutor, a Transformer model that leverages code execution pre-training and curriculum learning to enhance its semantic comprehension. We evaluate CodeExecutor on code execution and show its promising performance and limitations. We also demonstrate its potential benefits for code intelligence tasks such as zero-shot code-to-code search and text-to-code generation. Our analysis provides insights into the learning and generalization abilities of pre-trained models for code execution.
# Code Execution With Pre-Trained Language Models Chenxiao Liu1∗ , Shuai Lu2, Weizhu Chen2**, Daxin Jiang**2, Alexey Svyatkovskiy2, Shengyu Fu2, Neel Sundaresan2**, Nan Duan**2 1 Peking University 2 Microsoft [email protected] {shuailu, wzchen, djiang}@microsoft.com {alsvyatk, shengyfu, neels, nanduan}@microsoft.com ## Abstract Code execution is a fundamental aspect of programming language semantics that reflects the exact behavior of the code. However, most pre-trained models for code intelligence ignore the execution trace and only rely on source code and syntactic structures. In this paper, we investigate how well pre-trained models can understand and perform code execution. We develop a mutation-based data augmentation technique to create a large-scale and realistic Python dataset and task for code execution, which challenges existing models such as Codex. We then present CodeExecutor, a Transformer model that leverages code execution pre-training and curriculum learning to enhance its semantic comprehension. We evaluate CodeExecutor on code execution and show its promising performance and limitations. We also demonstrate its potential benefits for code intelligence tasks such as zero-shot code-tocode search and text-to-code generation. Our analysis provides insights into the learning and generalization abilities of pre-trained models for code execution. ## 1 Introduction Pre-trained models have achieved remarkable results in natural language (NL) tasks (Radford et al., 2018; Devlin et al., 2019; Raffel et al., 2020), inspiring the development of pre-trained models for programming language (PL) tasks (Kanade et al., 2020; Feng et al., 2020; Svyatkovskiy et al., 2020; Wang et al., 2021b; Guo et al., 2021, 2022). These models leverage source code and code structures, such as abstract syntax tree (AST) (Wang et al., 2021a; Guo et al., 2022) and data flow (Guo et al., 2021), to learn code-related tasks. These structures, while useful, are not sufficient to represent the dynamic behavior of code during execution, which is reflected in the execution trace. Using Figure 1 as ∗Work done during internship at Microsoft. Shuai Lu and Nan Duan are corresponding authors. an example, the execution trace shows how code behaves during execution, reflecting the control flow and the state changes of variables. On the other hand, as stated by Casalnuovo et al. (2020), source code contains two channels of information: natural & formal. The natural channel (Hindle et al., 2012), such as identifiers and comments, enables language models to be leveraged to understand code-related tasks. The formal channel is used by interpreters and compilers to specify execution and has precise semantics. The formal channel is unique to code and is what makes it executable. Execution trace falls into the second category since it reveals the formal channel of information that distinguishes code from natural language, as well as enabling code execution precisely (Casalnuovo et al., 2020; Chakraborty et al., 2022). In this work, we aim to teach pre-trained models the real-world code execution process. We propose CodeExecutor, a Transformer-based model that learns to execute arbitrary programs and predict their execution traces. To support pre-training on large-scale data, we construct the Python CodeNetMut dataset by producing mutations based on submissions to competitive programming problems from CodeNet (Puri et al., 2021), along with single-line Python transformations and programs adapted from Python official tutorial. 
We design a pre-training task that predicts both the line order and the intermediate states of the execution trace, and apply curriculum learning to gradually increase the difficulty of the programs. We evaluate CodeExecutor on code execution tasks and show that it outperforms existing models and demonstrates promising capabilities. We also conduct an in-depth analysis of the model's performance and reveal its strengths and weaknesses. Furthermore, we show that CodeExecutor can improve downstream tasks like zero-shot code-to-code search and text-to-code generation, indicating the potential of leveraging execution traces to enhance code intelligence. Our models and datasets are publicly available1.

Figure 1: (a) Source Code; (b) Execution Trace.

1 https://github.com/microsoft/CodeBERT/tree/master/CodeExecutor

In summary, the contributions of this paper are:

- We present the first attempt at building a large-scale pre-training dataset for real-world code execution using a mutation-based data augmentation approach.
- We propose a novel pre-trained model named CodeExecutor that learns to predict execution traces using a code execution pre-training task and curriculum learning.
- We conduct a comprehensive evaluation of CodeExecutor on code execution tasks, providing a detailed understanding of the model's performance.
- CodeExecutor significantly improves code intelligence tasks like zero-shot code-to-code search and text-to-code generation.

## 2 Related Work

## 2.1 Learning To Execute

Previous works frame the *learning to execute* task as reading a program and computing the program's output. These works leverage architectures such as recurrent neural networks (Zaremba and Sutskever, 2014), graph neural networks (Bieber et al., 2020; Wang et al., 2020) and Transformers (Dehghani et al., 2019; Yan et al., 2020; Austin et al., 2021; Nye et al., 2021). A related task, *algorithm induction*, is to read a short program, such as integer addition or polynomial evaluation, and compute its output. Algorithm induction (Graves et al., 2014; Kurach et al., 2016; Kaiser and Sutskever, 2016; Graves et al., 2016; Reed and de Freitas, 2016; Dehghani et al., 2019; Velickovic et al., 2020a,b; Nye et al., 2021) targets a particular algorithm with direct, algorithm-specific supervision, in contrast to the arbitrary programs in our code execution task. Some emerging works also employ pre-trained models to tackle the two tasks. Lu et al. (2022) fine-tunes a small fraction of the weights in GPT-2 (Radford et al., 2019) on non-language tasks, including simple algorithm induction tasks like Bit XOR. Austin et al. (2021) evaluates models pre-trained on web documents and dialog data ranging in size from 2 million to 137 billion parameters and shows that even the largest models are generally unable to predict the output of a program, whether few-shot or fine-tuned. Nye et al. (2021) uses a "scratchpad" to store intermediate computation steps for multi-step computations, improving on the models in Austin et al. (2021). Different from previous works that predict a program's output and mainly deal with specific algorithms, we predict the program's whole execution trace and focus on imitating the execution behavior of arbitrary real-world programs. Besides, by using execution to capture code semantics, our work is beneficial for tasks related to code intelligence.
## 2.2 Mathematical Problem Solving Mathematical problem solving is a related domain of code execution. Recent works show the ability of language models to solve math problems, which requires learning to execute a soft algorithm to arrive at a deterministic answer. Amini et al. (2019); Ling et al. (2017) map math problems to operation programs and focus on sequence-to-program generation. Saxton et al. (2019) introduce the DeepMind Mathematics dataset, which contains plugand-chug problems such as addition, list sorting, and function evaluation. Henighan et al. (2020) | Operator | Description | | |------------|---------------------------------|--------------------------------------------------------------------------------------| | CRP | Constant Replacement | Change numeric and string literals. | | AOD | Arithmetic Operator Deletion | Delete a unary arithmetic operator '+' or '-'. | | AOR | Arithmetic Operator Replacement | Replace an arithmetic operator with another one. E.g. x * y can be mutated to x / y. | | ASR | Assignment Operator Replacement | Substitute an extended assignment operator with another. | | BCR | Break Continue Replacement | Swap keywords break and continue in a loop body. | | COD | Conditional Operator Deletion | Delete unary negation operator not or the negation of an membership operator not in. | | LCR | Logical Connector Replacement | Swap logical operators and with or and vice versa. | | ROR | Relational Operator Replacement | Substitutes relational operators. E.g. x <= y can be mutated to x > y. | | SIR | Slice Index Removal | Delete one argument of collection[start:end:step]. | | OIL | One Iteration Loop | Execute a loop only once by adding a break statement. | | RIL | Reverse Iteration Loop | Change direction of loop iteration by the function reversed(). | | ZIL | Zero Iteration Loop | Interrupt realization of a loop during its first iteration. | shows that the majority of problems in the DeepMind Mathematics dataset can be straightforwardly solved with large Transformers. Hendrycks et al. (2021) introduces the MATH dataset, consisting of competition math problems with step-by-step solutions written in LATEX and natural languages. Cobbe et al. (2021) releases GSM8K, including grade school math questions and natural language solutions. Recently, Zhou et al. (2022) proposes algorithmic prompting to improve the performance of large language models on math problem solving, which starts from learning skills containing addition, subtraction, multiplication, and parity. Code execution involves calculations such as addition, subtraction, multiplication, division, exponentiation, and modulus, which are similar to solving math problems. With the added complexity of managing variables, data structures, control flows, and other programming concepts, learning code execution requires a different set of skills and knowledge from learning mathematics, although some overlap exists. ## 3 Mutation-Based Data Augmentation The goal of *code execution* task is to learn to emulate the execution without running a program by an interpreter. We treat the task as a generation task: given a source code c, the execution trace t is required to be generated. Execution trace consists of two components: one is the order in which the computer executes statements, and the other is how the states of the variables change when jumping from one statement to another. 
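To make these two components concrete, the following minimal sketch records a trace with Python's built-in `sys.settrace` hook. It only illustrates the task definition and is not the data-collection pipeline used in this work; the example program and all names are illustrative, and the hook fires before each line executes, so recording after-line states would need a small adjustment.

```python
# Minimal sketch: record an execution trace as (line number, variable states)
# pairs using Python's tracing hook. Illustrative only; not the paper's pipeline.
import sys

def collect_trace(source: str):
    trace = []

    def tracer(frame, event, arg):
        # Only record "line" events from the program we compiled below.
        if event == "line" and frame.f_code.co_filename == "<prog>":
            # Snapshot the variable states visible at this executed line.
            states = {k: v for k, v in frame.f_locals.items() if not k.startswith("__")}
            trace.append((frame.f_lineno, states))
        return tracer

    code = compile(source, "<prog>", "exec")
    sys.settrace(tracer)
    try:
        exec(code, {})
    finally:
        sys.settrace(None)
    return trace

program = "s = 0\nfor i in range(3):\n    s += i\nprint(s)"
for lineno, states in collect_trace(program):
    print(lineno, states)
```

The recorded line numbers give the first trace component (execution order), and the per-line snapshots give the second (variable states).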
Normally, the statements inside a program are not executed sequentially, especially in a real-world scenario where programs embody complex logic and rich semantics. Moreover, variables relate to various types of data structures with diverse characteristics and operations. Given the complexity and difficulty of this task, it is of great importance to build a large-scale dataset and explore the capabilities and boundaries of large language models for code execution. ## 3.1 Mutating Source Code Constructing a large-scale Python dataset for realworld code execution is very challenging. Programs retrieved from software development platforms such as GitHub 2are mostly not executable at scale, as they depend on specific external resources which are not easily available. Examples of external resources include program inputs, file contents, external modules, and third-party packages. For the same reason, it is not practical to collect programs from posts in coding question-answering websites like StackOverflow 3. We build the Python code execution dataset based on submissions to competitive programming problems from CodeNet benchmark (Puri et al., 2021). We run each submission in a sandbox environment to get the execution trace and filter out programs that exceed time and trace limits or result in runtime errors. To construct a large-scale dataset of executable programs, we propose a mutation-based data augmentation approach. For each submission, the approach modifies some parts of a program to generate diverse mutants, leading to different execution traces. Specifications of these modifications are called mutation operators. It is inspired by mutation testing (Hamlet, 1977; Jia and Harman, 2011) in software engineering, a popular technique that supports the design of high-quality test suites for programs. Following Derezinska and Hałas ´ (2014) that applies mutation testing technique to Python programs, we first present a set of mutation operators as shown in Table 1. Most of them correspond to selected operators used in strongly typed general purpose languages and are adopted to the Python language. Operators designed for Python features are also included, such as Slice Index Removal (SIR) and Reverse Iteration Loop (RIL). Then we convert a program into an AST and extract its node type information to get a candidate list of all mutable literals, operators and statements. Finally, we generate mutants and eliminate those that are not executable. We use the CodeNet Mutants (CodeNetMut) to build the pre-training dataset. Greater detail of the dataset generation process can be found in Appendix A. ## 3.2 Dataset Construction Given the difficulty of training the model on realworld complete programs, we build two simpler datasets along with CodeNetMut for pre-training. The first is the Python SingleLine dataset collected by Fraser Greenlee 4, which consists of nearly nine million examples of single-line transformations. Each example contains several variables specified in initial values, a single line of Python code, and the new set of variables and values resulting from executing that line. We combine the first two as the input code, and use the last one as the target trace. We do not re-execute the dataset. When pre-training on SingleLine data, we only ask the model to predict the final states of the last code line without line-by-line illustration. Figure 2 (a)(b) show examples of these data. 
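Before moving on, the AST-based mutant generation described in §3.1 can be illustrated with a short sketch of one operator from Table 1, Arithmetic Operator Replacement (AOR), built on Python's `ast` module. A real pipeline would mutate a single randomly chosen site per mutant and discard mutants that fail to execute, so this is a simplified example rather than the actual implementation.

```python
# Minimal sketch of the AOR operator from Table 1: swap '+'/'-' and '*'/'/'
# in the AST, then unparse the mutant (requires Python 3.9+ for ast.unparse).
# A real operator would pick one mutation site at random; this mutates all sites.
import ast

class ArithmeticOperatorReplacement(ast.NodeTransformer):
    SWAP = {ast.Add: ast.Sub, ast.Sub: ast.Add, ast.Mult: ast.Div, ast.Div: ast.Mult}

    def visit_BinOp(self, node):
        self.generic_visit(node)
        replacement = self.SWAP.get(type(node.op))
        if replacement is not None:
            node.op = replacement()
        return node

def mutate(source: str) -> str:
    tree = ArithmeticOperatorReplacement().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)

print(mutate("y = (x + 1) * 2"))  # -> y = (x - 1) / 2
```

Mutants produced this way execute along different paths and yield different traces, which is what makes the augmented corpus useful for trace prediction.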
Since individual lines of code are the building blocks of complex real-world programs, the dataset serves as a foundation for learning about code execution. The second is the Python Tutorial dataset. This dataset is created by crawling and filtering all the executable code examples that appear in the official Python tutorial5. The official tutorial introduces the basic concepts and most noteworthy features of the Python language. To generate this dataset, we apply the Constant Replacement operator (first row in Table 1) to change numeric literals into diverse values. This approach results in 3.4 million programs. Figure 2 (c) shows an example of a mutant. While the Tutorial dataset is not comprehensive and does not cover every single feature, it provides a good representation of Python's flavor and style, which offers valuable supervision for modeling the execution of commonly used code blocks. Therefore, the Python Code Execution datasets form a series that follows an easy-to-hard paradigm, comprising the SingleLine, Tutorial, and CodeNetMut datasets.

4 https://www.kaggle.com/frasergreenlee/python-state-changes
5 https://docs.python.org/3/tutorial

## 4 CodeExecutor

Our CodeExecutor utilizes a Transformer-based framework to learn code execution through pre-training. We will first describe the model architecture (§4.1), then the pre-training task (§4.2), and finally the curriculum learning strategy (§4.3).

## 4.1 Model Architecture

The model is based on the Transformer and adopts the same architecture as UniXcoder (Guo et al., 2022). UniXcoder is a unified cross-modal pre-trained model for programming languages which has encoder-only, decoder-only and encoder-decoder modes. It utilizes mask attention matrices (Dong et al., 2019) with prefix adapters to control the behavior. We take the encoder-decoder mode by using a special token [E2D] as the prefix in front of the input. CodeExecutor consists of 12 Transformer layers. Each layer is architecturally identical, containing multi-headed self-attention pooling (Vaswani et al., 2017) followed by a feed-forward network.

## 4.2 Pre-Training Task

We propose a new pre-training task called code execution. Our motivation for the task is to improve the ability of our model to understand and execute code. Traditional pre-training tasks such as language modeling or denoising objectives do not involve code execution, and thus models trained on these tasks have limited ability to execute code. By pre-training our model on the task of code execution, we aim to improve its ability by learning useful patterns from bimodal data of code and trace. This will enable our model to generate more accurate traces and understand the behavior of the code, which is crucial for a wide range of code intelligence applications that require code understanding. With the knowledge of how the code works, the model can better understand the underlying logic of the code and use that understanding to better perform these tasks. We continue pre-training UniXcoder on the task. At the pre-training stage, our model receives code as inputs and learns to generate traces. To facilitate a better understanding of code, special tokens [i] indicating line numbers and [INDENT], [DEDENT] indicating indentation are inserted into the code.
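A minimal sketch of such an input linearization is shown below. The exact token strings and formatting used to build CodeExecutor's inputs are not reproduced here, so the helper and its output format should be treated as illustrative assumptions.

```python
# Minimal sketch: flatten a program into a single sequence with a per-line
# token [i] and [INDENT]/[DEDENT] markers when the indentation level changes.
# Illustrative only; the actual CodeExecutor preprocessing may differ.
def linearize_code(source: str, indent_width: int = 4) -> str:
    pieces, level = [], 0
    for i, line in enumerate(source.splitlines(), start=1):
        if not line.strip():
            continue  # skip blank lines
        new_level = (len(line) - len(line.lstrip(" "))) // indent_width
        # Emit indentation markers for the change in nesting depth.
        pieces += ["[INDENT]"] * (new_level - level) + ["[DEDENT]"] * (level - new_level)
        level = new_level
        pieces += [f"[{i}]", line.strip()]
    pieces += ["[DEDENT]"] * level
    return " ".join(pieces)

print(linearize_code("s = 0\nfor i in range(3):\n    s += i\nprint(s)"))
# [1] s = 0 [2] for i in range(3): [INDENT] [3] s += i [DEDENT] [4] print(s)
```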
Each line in the trace can be represented as [LINE], [i], [STATE], v1, :, s1, [DICTSEP], ..., [DICTSEP], vk, :, sk, [STATEEND], where k denotes the number of variables and sk is the state of the k-th variable vk. The symbol [DICTSEP] separates the variable-state pairs and [STATEEND] indicates the end of the states. This representation allows our model to learn the state of variables at each step of the execution, which is crucial for understanding the behavior of the code.

## 4.3 Curriculum Learning

To improve the generalization capacity, we follow a curriculum learning strategy during pre-training. Curriculum learning (Bengio et al., 2009) (CL) is a learning strategy that starts from easy instances and then gradually handles harder ones, which imitates the meaningful learning order in human curricula. In our pre-training process, we organize the learning of the Python code execution datasets according to a curriculum that starts with simple instances, i.e., SingleLine data. First, we employ all 9 million SingleLine transformations to pre-train CodeExecutor until convergence. To achieve a balanced dataset, we then reserve the 3 million SingleLine instances that are most difficult for our model to generate and add Tutorial data into the pre-training corpus. We further add CodeNetMut data into the pre-training corpus and pre-train the model to converge on all the examples. To help distinguish the difficulty level, we add a prefix p ∈ {[SINGLELINE], [TUTORIAL], [CODENETMUT]} in front of the input, indicating the kind of data, e.g., [SINGLELINE] means receiving SingleLine data. More details about pre-training settings and model configurations can be found in Appendix B.

## 5 Experimental Setup

## 5.1 Dataset

We build our pre-training dataset as described in Section 3. Table 2 shows some basic statistics.

|                  | SingleLine | Tutorial  | CodeNetMut |
|------------------|------------|-----------|------------|
| Difficulty Level | Easy       | Medium    | Hard       |
| Language         | Python     | Python    | Python     |
| Pre-train #      | 8,950,959  | 3,422,943 | 2,838,644  |
| Test #           | 7,968      | 13,744    | 19,541     |
| Avg Code Len     | 3.28       | 4.90      | 8.26       |
| Avg Trace Len    | 1.00       | 11.89     | 22.80      |
| Avg State Num    | 2.44       | 1.34      | 3.67       |

The 19,541 examples in the CodeNetMut test split are from 39 unseen programming problems in CodeNet and have not undergone the mutation process. Additionally, we held out 10k programs from each dataset as a validation split during pre-training. For Tutorial and CodeNetMut, the ground truth trace is the execution result of the whole program. For SingleLine, since the instances are simple programs consisting of variable declarations and one-line transformations, the model is only asked to predict the final states of variables, which is presented in the form of a one-line trace. We observe that the average code and trace lengths in CodeNetMut are about twice those in Tutorial. Also, executing programs in CodeNetMut requires managing a larger number of variables in varying states.

## 5.2 Models

We evaluate several models on the code execution task. **Codex** model code-cushman-001 is a specialized GPT model fine-tuned on GitHub code (Chen et al., 2021). We use few-shot learning
| Precision | Recall | F1 | Precision | Recall | F1 | | | | SingeLine | Codex | - | 36.87 | 36.87 | 36.87 | 36.87 | 71.87 | 69.34 | 70.58 | | CEL-S1 | - | 93.32 | 93.32 | 93.32 | 93.32 | 96.94 | 96.86 | 96.90 | | | CodeExecutor | - | 94.03 | 94.03 | 94.03 | 94.03 | 97.28 | 97.18 | 97.23 | | | Codex | 13.07 | - | - | - | - | - | - | - | | | CEL-S2 | 79.51 | 85.59 | 95.94 | 84.24 | 89.71 | 97.29 | 87.30 | 92.02 | | | Tutorial | CEL-S3 | 7.89 | 8.35 | 26.58 | 21.33 | 23.67 | 26.36 | 19.47 | 22.40 | | CodeExecutor | 76.42 | 80.09 | 94.49 | 76.74 | 84.70 | 95.91 | 69.15 | 80.36 | | | CodeNetMut | Codex | 17.45 | - | - | - | - | - | - | - | | CEL-S3 | 43.80 | 29.44 | 59.32 | 41.76 | 49.01 | 68.30 | 41.69 | 51.78 | | | CodeExecutor | 48.06 | 33.38 | 58.70 | 43.48 | 49.96 | 67.81 | 45.29 | 54.31 | | | -w/o CL | 45.93 | 30.98 | 60.21 | 42.45 | 49.79 | 68.55 | 41.58 | 51.76 | | by giving Codex three code and execution trace pairs for the code execution task. CodeExecutorLimited (CEL) is a three-stage model pre-trained with the code execution objective. CEL can only access limited data in each stage, as opposed to CodeExecutor which can utilize all the datasets simultaneously (see Appendix C for a detailed comparison). It is initialized using the publicly available checkpoint of UniXcoder and continues to be trained with SingleLine data, resulting in the model CodeExecutorLimited-Stage1, which we call **CELS1**. In the second stage, we initialize it with CELS1 and employ Tutorial data to pre-train, so we get the model **CEL-S2**. By continuing pre-training CEL-S2, we use CodeNetMut to improve the capacity of executing real-world programs at the third stage. **CEL-S3** is produced after these stages mentioned above. CodeExecutor without Curriculum Learning(**CodeExecutor w/o CL**) is a single-stage model trained on all three datasets together. ## 5.3 Evaluation Metrics We test model capabilities of executing code on the test sets from three datasets. We measure functional correctness of the sampled trace from three perspectives. We report output accuracy and trace accuracy to evaluate the general aspect. **Output accuracy** checks if the model prints the same message as the code execution, calculated only for programs with standard output. **Trace accuracy** checks if the model produces the same trace as the code execution, regardless of the order of states in a line of the trace. To evaluate the correctness of each line and the states of identifiers in the trace, we also assess per-line score and identifier score. **Line precision** is determined by the ratio of correctly identified lines among all the lines in the traces generated by the model. **Line recall** is the ratio of correctly identified lines predicted by the model among all the lines in the ground truth traces. Similarly, we also calculate scores for the identifiers in the trace. To deepen our understanding of model behavior and error modes, we also conduct a qualitative analysis by examining samples. We randomly sample 50 code-trace pairs from the test set and ask two programmers with at least 5 years of experience to evaluate whether CodeExecutor executes a program correctly in 7 aspects. The category *Basic* includes basic knowledge for a Python beginner like math operators, augmented assignment operators, comparison operators, variables. The category *Lists, Tuples, etc.* consists of typical Python data structures, such as lists, tuples, dictionaries, sets, and related manipulation functions. 
As shown in Table 4, we build the taxonomy, along with a handbook to guide classification. Each reviewer examines the generated trace line by line and counts the occurrence frequency of each category. They count all these categories if a trace line involves multiple categories. When an error occurs, they identify which kind of knowledge category the model mistakes. Finally, they work together to discuss the divergence of error attribution and come to an agreement. ## 6 Results And Analysis In this section, we evaluate CodeExecutor on code execution task(§6.1), conduct an in-depth analysis to understand model behavior and error mode (§6.2), followed by two downstream tasks (§6.3). Figure 3: An Example from CodeNetMut test split, where CodeExecutor produces an imperfect prediction, with the mistake highlighted by an underline. ## 6.1 Overall Results We evaluate the performance of models on SingleLine, Tutorial and CodeNetMut datasets. We show the result of **SingleLine** in Table 3 (top). CodeExecutor is able to execute around 94% of single-line transformations correctly, while Codex fails to do so in most cases. CodeExecutor also brings a 0.7% improvement over CEL-S1, indicating learning hard programs during pre-training helps better solve easier examples. Since each SingleLine program always produces a one-line trace without standard outputs, we do not report output accuracy, and the line precision/recall scores are equal to trace accuracy. For the **Tutorial** experiments in Table 3 (medium), CodeExecutor significantly outperforms Codex on output accuracy (76.42% vs.13.07%). The lower score of CodeExecutor compared to CEL-S2 suggests a discrepancy between code examples in tutorials and CodeNet since the Tutorial dataset is composed of mutants from only a few programs in tutorial websites, limiting its diversity. CEL-S3 struggles to produce traces, indicating that it forgets most knowledge acquired in Tutorial data in the last training stage. CodeNetMut results are much lower than those in SingleLine and Tutorial datasets, which shows that it is more challenging to generate traces in real-world scenarios. CodeExecutor produces the correct output for nearly half of the examples (48.06%), and about a third of the traces are the exact match for the ground truth (33.38%). By pretraining on the code execution task, CodeExecutor boosts the performance of output by 30.6% absolute points over Codex. Besides, CodeExecutor yields 4.3% output accuracy score and 3.9% trace accuracy score improvement than CEL-S3, which indicates the effectiveness of the training strategy described in 4.3. After removing curriculum learning, the output accuracy score drops from 48.06% to 45.93% and the trace accuracy score drops from 33.38% to 30.98%, which shows the contribution | Category | Total | Correct | Accuracy | |------------------------|---------|-----------|------------| | Basic | 204 | 183 | 89.71 | | Built-in Functions | 42 | 35 | 83.33 | | Lists, Tuples, etc. | 44 | 34 | 77.27 | | Strings | 19 | 10 | 52.63 | | Conditional Statements | 60 | 57 | 95.00 | | Loops | 25 | 21 | 84.00 | | Function Calls | 5 | 5 | 100.00 | of curriculum learning. These results demonstrate that the code execution task is challenging for pre-trained models on source code like Codex. However, our CodeExecutor model can achieve high performance to execute simple programs and are capable of predicting complex execution traces for real-world programs. 
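As a reference point for the scores reported above, the following minimal sketch shows one way the line-level precision, recall, and F1 from §5.3 could be computed, treating each trace line as a (line number, variable states) pair and ignoring the order of states within a line. The official evaluation script may differ in details.

```python
# Minimal sketch of line-level precision/recall/F1: a predicted trace line
# counts as correct if a ground-truth line has the same line number and the
# same variable states (state order ignored via frozenset). Illustrative only.
from collections import Counter

def line_prf(pred_trace, gold_trace):
    def as_multiset(trace):
        return Counter((lineno, frozenset(states.items())) for lineno, states in trace)
    pred, gold = as_multiset(pred_trace), as_multiset(gold_trace)
    correct = sum((pred & gold).values())          # multiset intersection
    p = correct / max(sum(pred.values()), 1)
    r = correct / max(sum(gold.values()), 1)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = [(1, {"s": 0}), (2, {"s": 0, "i": 0}), (3, {"s": 0, "i": 0})]
pred = [(1, {"s": 0}), (2, {"s": 0, "i": 0}), (3, {"s": 1, "i": 0})]
print(line_prf(pred, gold))  # ~ (0.67, 0.67, 0.67)
```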
## 6.2 In-Depth Study On Model Performance We conduct a qualitative analysis of model performance by examining samples (Table 4), resulting in the following findings. More examples can be found in Appendix D. The Model Typically Has a Basic Sense of Control Flows Conditional statements, loops, and function calls reveal the control flow of the program. Control flow reflects the order in which the program's code executes. It is important for understanding a program and is often complex, as it controls the code through certain decisions and monitors which statements need to be executed and which should be skipped. From Table 4, we find that CodeExecutor has a rudimentary understanding of high-level multi-line control flows, especially expert at conditional statements and function calls. 57 out of 60 conditional statements and all 5 calls to user-defined functions are predicted **Couet.** 1 rec = ['18', '3', '5'] 2 n, a, b = map(int, rec) 3 min = [a, b] 4 max = min(nn) 5 min = n - min(n, (n-min[0])+(n-min[1])) 6 print(str(max) + = + str(min)) Code: **Prediction:** <line> 1 <state> rec:[10, 3, 5] <output> 3 2 <line> 6 <state> rec:[10, 3, 5]; n:10; a:3; b:5; nin:[3, 5]; nmax:3; nmin:2 <line> 2 <state> rec:[10, 3, 5]; n:10; a:3; b:5 <line> 3 <state> rec:[10, 3, 5]; n:10; a:3; b:5; nin:[3, 5] <line> 4 <state> rec:[10, 3, 5]; n:10; a:3; b:5; nin:[3, 5] <line> 5 <state> rec:[10, 3, 5]; n:10; a:3; b:5; nin:[3, 5] <output> 3.2 | Model | MAP | |----------------|-------| | GraphCodeBERT | 23.08 | | + CodeExecutor | 55.94 | | UniXcoder | 71.86 | | + CodeExecutor | 79.13 | correctly. The accuracy of loops is 84%, while the incorrect loops undergo wrong iterative times. Take Figure 1 (a) as an example. CodeExecutor predicts exactly the same trace as the ground truth in (b). Our model recognizes that the for loop occurred on line 4 will execute several times. In the second iteration, "n" meets the condition of "n <= 0", resulting in the *"break"* statement and terminating the loop. The model behaves well on the code block in the for loop, showing its capacity of understanding control flows. ## The Model Struggles To Handle The Intricacies Of Operations, Particularly In Relation To Data Structures Complex programs often involve multiple categories of programming knowledge. Figure 3 shows an example that uses lists and strings. It determines the maximum and minimum possible number of people among "n", who subscribe to both Newspaper I and II, given that "a" people subscribe to I and "b" people subscribe to II. CodeExecutor incorrectly calculates *"nmin"* in line 5, expected 0 but got 2. This calculation involves retrieving values from a list, performing additions, subtractions, and using the "min" function. The compositionality of these operations makes it challenging for our model to fully comprehend the code and generate accurate states. Additionally, as presented by the relatively low accuracy on "Lists, Tuples, etc." (77.27%) and "Strings" (52.63%) in Table 4, we observe that the model falls short of understanding data structures like lists and strings. The understanding of data structures requires the model to learn the behavior of objects after they are created, modified, added or deleted. These operations can be changeable and challenging for the model to grasp. This suggests that the model may struggle with complex programs that involve multiple operations and data structures. 
## 6.3 Downstream Tasks To verify the effectiveness of CodeExecutor in representing code semantics, we apply it to two code | Model | Pass@1 | Pass@10 | |----------------|----------|-----------| | Codex | 12.48 | 45.59 | | + CodeExecutor | 17.87 | 49.69 | intelligence tasks - the zero-shot code-to-codesearch task and text-to-code generation task. Zero-shot Code-to-code Search The task is introduced by Guo et al. (2022). To avoid duplication between the associate dataset and our pre-training corpus, we construct a new dataset by collecting 9,987 Python functions from CodeNet (Puri et al., 2021). Each function solves one of the 48 problems. Given one function, we retrieve all the functions that solve the same problem. We first use the mean vectors of last hidden states of a baseline model to calculate the similarity between two functions. To explore how code execution facilitates code-to-code-search, we execute each function by providing a test case. We then utilize the program outputs extracted from the execution trace generated by CodeExecutor, and sort the candidates according to the edit similarity compared with outputs of the query program. From table 5, we find that CodeExecutor boosts over 32.8 points compared with GraphCodeBERT (Guo et al., 2021), and provides about 7.2 points improvement compared with UniXcoder, showing that code execution can significantly enhance the comprehension of code semantics. Text-to-code Generation We use HumanEval benchmark (Chen et al., 2021) which includes 164 human-written programming problems. We first leverage Codex (code-cushman-001) to generate 200 solutions for each problem. Then we use CodeExecutor to predict the outputs of each solution by feeding example test cases in problem descriptions. We rank the 200 solutions by the edit similarity between their outputs and expected outputs. Finally, we evaluate the correctness of the first 50 solutions for each problem. Note that different from other filtering strategies, our method doesn't need a real-world code executor but only uses models to predict the execution results. Table 6 demonstrates that with CodeExecutor as a solution filter, the performance of text-to-code generation is improved, indicating CodeExecutor is beneficial to other code intelligence tasks. ## 7 Conclusion We propose a mutation-based data augmentation method to create a large and realistic Python code execution dataset and task, which pose a significant challenge for current models such as Codex. We develop CodeExecutor, a Transformer model that leverages code execution as a pre-training objective and adopts a curriculum learning strategy. CodeExecutor not only outperforms existing models on code execution, but also demonstrates its generalizability to downstream tasks such as codeto-code search and text-to-code generation. Our work offers a novel and effective solution for code execution and other code intelligence tasks. ## Limitations Several limitations of CodeExecutor, such as its application to only Python, the lack of faithfulness in the results produced, and the maximum length limit for trace generation, point toward interesting directions for future work. Programming Language One limitation of our current model is that it is currently only applied to Python, which limits its use and effectiveness in executing programs written in other programming languages. This highlights the need for future work to expand the model's applicability to other languages. 
Faithfulness The result may not be faithful enough when handling difficult examples, such as those with complex logic, long loops, or many branches. For example, we observe that in two complicated programs that both contain the assignment "alpha = list('abcdefg')", our model correctly predicts the value of *"alpha"* in one case but incorrectly in the other. The lack of faithfulness needs to be studied for further research on code execution. Generation Window Size We limit the length of generated trace to 1024 tokens. It can be a limitation for programs with long execution traces, particularly those with loops. Improving the ability of Transformers to handle longer sequences (Tay et al., 2021, 2022) would likely be beneficial for the code execution task. ## Ethical Statement The work is conducted in compliance with ethical principles. The datasets introduced in this paper only used publicly available data. The annotation in human evaluation was conducted by two authors of the paper, and thus there are no associated concerns, e.g. regarding compensation. Therefore, there are no potential risks associated with the research. ## References Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. Mathqa: Towards interpretable math word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2357–2367. Association for Computational Linguistics. Jacob Austin, Augustus Odena, Maxwell I. Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le, and Charles Sutton. 2021. Program synthesis with large language models. *CoRR*, abs/2108.07732. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML 2009, Montreal, Quebec, Canada, June 14-18, 2009, volume 382 of ACM International Conference Proceeding Series, pages 41–48. ACM. David Bieber, Charles Sutton, Hugo Larochelle, and Daniel Tarlow. 2020. Learning to execute programs with instruction pointer attention graph neural networks. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Casey Casalnuovo, Earl T. Barr, Santanu Kumar Dash, Prem Devanbu, and Emily Morgan. 2020. A theory of dual channel constraints. In *ICSE-NIER 2020:* 42nd International Conference on Software Engineering, New Ideas and Emerging Results, Seoul, South Korea, 27 June - 19 July, 2020, pages 25–28. ACM. Saikat Chakraborty, Toufique Ahmed, Yangruibo Ding, Premkumar T. Devanbu, and Baishakhi Ray. 2022. Natgen: generative pre-training by "naturalizing" source code. In *Proceedings of the 30th ACM Joint* European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2022, Singapore, Singapore, November 14-18, 2022, pages 18–30. ACM. 
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. *CoRR*, abs/2107.03374. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. *CoRR*, abs/2110.14168. Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. 2019. Universal transformers. In *7th International Conference on* Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Anna Derezinska and Konrad Hałas. 2014. Operators ´ for mutation testing of python programs. *Res. Rep*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 13042–13054. Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. Codebert: A pre-trained model for programming and natural languages. In *Findings of the Association for* Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of *Findings of ACL*, pages 1536–1547. Association for Computational Linguistics. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. *CoRR*, abs/1410.5401. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka GrabskaBarwinska, Sergio Gomez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John P. Agapiou, Adrià Puigdomènech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. 2016. Hybrid computing using a neural network with dynamic external memory. *Nat.*, 538(7626):471–476. Daya Guo, Shuai Lu, Nan Duan, Yanlin Wang, Ming Zhou, and Jian Yin. 2022. Unixcoder: Unified crossmodal pre-training for code representation. 
In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 7212–7225. Association for Computational Linguistics. Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin B. Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, and Ming Zhou. 2021. Graphcodebert: Pre-training code representations with data flow. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Richard G. Hamlet. 1977. Testing programs with the aid of a compiler. *IEEE Trans. Software Eng.*, 3(4):279– 290. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the MATH dataset. In *Proceedings* of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual. Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B. Brown, Prafulla Dhariwal, Scott Gray, Chris Hallacy, Benjamin Mann, Alec Radford, Aditya Ramesh, Nick Ryder, Daniel M. Ziegler, John Schulman, Dario Amodei, and Sam McCandlish. 2020. Scaling laws for autoregressive generative modeling. CoRR, abs/2010.14701. Abram Hindle, Earl T. Barr, Zhendong Su, Mark Gabel, and Premkumar T. Devanbu. 2012. On the naturalness of software. In 34th International Conference on Software Engineering, ICSE 2012, June 2-9, 2012, Zurich, Switzerland, pages 837–847. IEEE Computer Society. Yue Jia and Mark Harman. 2011. An analysis and survey of the development of mutation testing. *IEEE* Trans. Software Eng., 37(5):649–678. Lukasz Kaiser and Ilya Sutskever. 2016. Neural gpus learn algorithms. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, and Kensen Shi. 2020. Learning and evaluating contextual embedding of source code. In *Proceedings of* the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of *Proceedings of Machine Learning* Research, pages 5110–5121. PMLR. Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. 2016. Neural random-access machines. In *4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May* 2-4, 2016, Conference Track Proceedings. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In *Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics,* ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 158–167. Association for Computational Linguistics. Kevin Lu, Aditya Grover, Pieter Abbeel, and Igor Mordatch. 2022. Frozen pretrained transformers as universal computation engines. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022, pages 7628–7636. AAAI Press. Maxwell I. 
Nye, Anders Johan Andreassen, Guy GurAri, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. 2021. Show your work: Scratchpads for intermediate computation with language models. *CoRR*, abs/2112.00114. Ruchir Puri, David S. Kung, Geert Janssen, Wei Zhang, Giacomo Domeniconi, Vladimir Zolotov, Julian Dolby, Jie Chen, Mihir R. Choudhury, Lindsey Decker, Veronika Thost, Luca Buratti, Saurabh Pujar, Shyam Ramji, Ulrich Finkler, Susan Malaika, and Frederick Reiss. 2021. Codenet: A large-scale AI for code dataset for learning a diversity of coding tasks. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67. Scott E. Reed and Nando de Freitas. 2016. Neural programmer-interpreters. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. 2019. Analysing mathematical reasoning abilities of neural models. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Alexey Svyatkovskiy, Shao Kun Deng, Shengyu Fu, and Neel Sundaresan. 2020. Intellicode compose: code generation using transformer. In *ESEC/FSE* '20: 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Virtual Event, USA, November 8-13, 2020, pages 1433–1443. ACM. Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. 2021. Long range arena : A benchmark for efficient transformers. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. 2022. Efficient transformers: A survey. ACM Computing Surveys, 55(6):1–28. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Petar Velickovic, Lars Buesing, Matthew C. Overlan, Razvan Pascanu, Oriol Vinyals, and Charles Blundell. 2020a. Pointer graph networks. In *Advances* in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Petar Velickovic, Rex Ying, Matilde Padovano, Raia Hadsell, and Charles Blundell. 2020b. Neural execution of graph algorithms. In *8th International* Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Xin Wang, Yasheng Wang, Fei Mi, Pingyi Zhou, Yao Wan, Xiao Liu, Li Li, Hao Wu, Jin Liu, and Xin Jiang. 2021a. 
Syncobert: Syntax-guided multi-modal contrastive pre-training for code representation. *arXiv preprint arXiv:2108.04556*.

Yu Wang, Ke Wang, Fengjuan Gao, and Linzhang Wang. 2020. Learning semantic program embeddings with graph interval neural network. *Proc. ACM Program. Lang.*, 4(OOPSLA):137:1–137:27.

Yue Wang, Weishi Wang, Shafiq R. Joty, and Steven C. H. Hoi. 2021b. Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021*, pages 8696–8708. Association for Computational Linguistics.

Yujun Yan, Kevin Swersky, Danai Koutra, Parthasarathy Ranganathan, and Milad Hashemi. 2020. Neural execution engines: Learning to execute subroutines. In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual*.

Wojciech Zaremba and Ilya Sutskever. 2014. Learning to execute. *CoRR*, abs/1410.4615.

Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron C. Courville, Behnam Neyshabur, and Hanie Sedghi. 2022. Teaching algorithmic reasoning via in-context learning. *CoRR*, abs/2211.09066.

## A Dataset Detail

To obtain executable programs, we build the Python Code Execution dataset based on submissions to competitive programming problems from CodeNet (Puri et al., 2021). These human-written programs with real-world complexity are derived from the online judge websites AIZU (https://onlinejudge.u-aizu.ac.jp/) and AtCoder (https://atcoder.jp/). CodeNet contains 240k Python submissions, aiming to solve 800 distinct programming problems. Each submission is a single-file Python program that reads from stdin and writes to stdout. Each programming problem provides at least one sample input and at most four sample inputs. Since executing a program relies on an input, we replace the statements that read from input streams with assignment statements that assign input values to variables. We run each submission in a sandbox environment to get the execution trace for that program. Programs are restricted to one second of execution time and 1024 lines of execution trace, and will be filtered out if they exceed the limits. We also remove the programs that result in runtime errors during parsing or execution, by catching Python exceptions raised in programs. This results in a dataset of 387k executable programs, each paired with a trace.

To construct a large-scale dataset of executable programs, we propose a mutation-based data augmentation approach. We first present a set of mutation operators as shown in Table 1. Most of them correspond to selected operators used in strongly typed general-purpose languages and are adapted to the Python language. Operators designed for Python features are also included, such as Slice Index Removal (SIR) and Reverse Iteration Loop (RIL). Then we leverage tree-sitter (https://tree-sitter.github.io/tree-sitter/) to convert a program into an abstract syntax tree and then extract its node type information to get a candidate list of all mutable literals, operators and statements. For each mutable candidate, we apply the related mutation operators with 50% probability. Specifically, we change a numeric literal x into a random number from a Gaussian distribution with mean x and standard deviation 100. We either extend a string with one or two random characters or shorten a string. We randomly pick one of the three loop-related operators or keep the loop as it is when handling each loop. All operators can be applied before a mutated program is executed, and possible mutants with errors are detected and eliminated during execution. By mutating each program 20 times, we obtain 3.2M deduplicated programs, each paired with a trace.

We use the CodeNet Mutants (CodeNetMut) to build the pre-training dataset. To prevent data leakage, all submissions to the same problem become part of the same split. We use submissions of 710 problems with their mutants to build the pre-training dataset. Since mutation greatly enhances diversity, these programs embody rich semantics and complex operations. Other submissions (without mutations) are used to build the validation and test datasets. These human-authored programs ensure the quality of evaluation data.

| Model | Stage1 (S1) | Stage2 (S2) | Stage3 (S3) |
|--------------|------------|---------------------------|---------------------------------------|
| CEL | SingleLine | Tutorial | CodeNetMut |
| CodeExecutor | SingleLine | SingleLine (3M), Tutorial | SingleLine (3M), Tutorial, CodeNetMut |

Table 7: Datasets that CEL and CodeExecutor use for three-stage pre-training. "SingleLine (3M)" denotes 3 million instances within SingleLine that are most difficult for CodeExecutor to generate.

[Figure 4: An example program that covers all the categories of Python programming knowledge in Table 4, shown together with the trace generated by CodeExecutor, which matches the ground truth.]

## B Model Configurations

We build our model based on 12 layers of Transformer with 768-dimensional hidden states and 12 attention heads. We add 210 additional special tokens into the vocabulary to represent 200 line numbers, 3 pre-training dataset names, and the trace structure described in §4.2. During pre-training, we set the max length of the input sequence and the batch size to 1024 and 256, respectively. We use the Adam optimizer to update model parameters with a 4e-4 learning rate. We first employ the SingleLine dataset to pre-train the model with the code execution objective for 500k steps. We then reserve 3 million instances in SingleLine that are most difficult for our model to generate and add Tutorial data into the corpus, pre-training for 300k steps.
We add CodeNetMut into the corpus and further pre-train for 300k steps. We pre-train the model on a cluster of 16 NVIDIA Tesla V100 GPUs with 32GB memory, and the total training time is about a month. For inference, we set the beam search size to 10.

## C Three-Stage Pre-Training

In Table 7, we list the datasets that CodeExecutor-Limited (CEL) and CodeExecutor use for three-stage pre-training, respectively. The first stage of pre-training for CEL uses the SingleLine dataset, resulting in the model CEL-S1. In the second stage, CEL is initialized with CEL-S1 and pre-trained with the Tutorial dataset, resulting in the model CEL-S2. In the third stage, CEL is initialized with CEL-S2 and pre-trained with the CodeNetMut dataset, resulting in the model CEL-S3. On the other hand, CodeExecutor is first pre-trained with the SingleLine dataset; then the 3 million most challenging SingleLine instances are selected for later training stages based on the model's loss. In the second stage, CodeExecutor is pre-trained with the 3 million difficult SingleLine instances, along with the Tutorial dataset. In the third stage, CodeExecutor is pre-trained with the 3 million difficult SingleLine instances, the entire Tutorial dataset, and the CodeNetMut dataset.

## D Qualitative Examples

Additional examples are shown here. Figure 4 shows an example that covers all the categories of Python programming knowledge in Table 4. CodeExecutor generates the same trace as the ground truth. Figure 5 is an example of performing division calculations with decimals. CodeExecutor is able to produce the correct first fifteen digits and makes errors in the remaining two digits.

Code:
x = 1.2379400392853809e-46
x /= 5

Ground truth: x : 2.475880078570762e-47
Prediction: x : 2.4758800785707618e-47

Figure 5: An example of division calculations with decimals, where CodeExecutor correctly produces the first fifteen digits, with mistakes highlighted by an underline.

## ACL 2023 Responsible NLP Checklist

A For Every Submission:

✓ A1. Did you describe the limitations of your work? The "Limitations" section, which is after the conclusion.

✓ A2. Did you discuss any potential risks of your work? The "Ethical Statement" section, which is before the references.

✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and the first section.

✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3, 4 and 5.

✓ B1. Did you cite the creators of artifacts you used? Section 3, 4 and 5.

✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 3, 4 and 5.

✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 3, 4 and 5.

✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The "Ethical Statement" section, which is before the references.

✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 5.

✓ B6.
Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 5. ## C ✓ **Did You Run Computational Experiments?** Section 5 And 6. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix B. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5 and appendix B. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 6. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 3 and appendix A. D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 5. ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Section 5 and a handbook in the data package of supplemental material. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 5, the "Ethical Statement" section and a handbook in the data package of supplemental material. ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Section 5 and a handbook in the data package of supplemental material. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
hao-etal-2023-bertnet
{B}ert{N}et: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models
https://aclanthology.org/2023.findings-acl.309
It is crucial to automatically construct knowledge graphs (KGs) of diverse new relations to support knowledge discovery and broad applications. Previous KG construction methods, based on either crowdsourcing or text mining, are often limited to a small predefined set of relations due to manual cost or restrictions in text corpus. Recent research proposed to use pretrained language models (LMs) as implicit knowledge bases that accept knowledge queries with prompts. Yet, the implicit knowledge lacks many desirable properties of a full-scale symbolic KG, such as easy access, navigation, editing, and quality assurance. In this paper, we propose a new approach of harvesting massive KGs of arbitrary relations from pretrained LMs. With minimal input of a relation definition (a prompt and a few shot of example entity pairs), the approach efficiently searches in the vast entity pair space to extract diverse accurate knowledge of the desired relation. We develop an effective search-and-rescore mechanism for improved efficiency and accuracy. We deploy the approach to harvest KGs of over 400 new relations, from LMs of varying capacities such as RoBERTaNet. Extensive human and automatic evaluations show our approach manages to extract diverse accurate knowledge, including tuples of complex relations (e.g., {``}A is capable of but not good at B{''}). The resulting KGs as a symbolic interpretation of the source LMs also reveal new insights into the LMs{'} knowledge capacities.
# Bertnet: Harvesting Knowledge Graphs With Arbitrary Relations From Pretrained Language Models Shibo Hao1∗ , Bowen Tan2∗, Kaiwen Tang1∗, Bin Ni1**, Xiyan Shao**1, Hengzhe Zhang1, Eric P. Xing2,3**, Zhiting Hu**1 1UC San Diego, 2Carnegie Mellon University, 3Mohamed bin Zayed University of Artificial Intelligence {s5hao,zhh019}@ucsd.edu, {btan2}@cs.cmu.edu ## Abstract It is crucial to automatically construct knowledge graphs (KGs) of diverse new relations to support knowledge discovery and broad applications. Previous KG construction methods, based on either crowdsourcing or text mining, are often limited to a small predefined set of relations due to manual cost or restrictions in text corpus. Recent research proposed to use pretrained language models (LMs) as implicit knowledge bases that accept knowledge queries with prompts. Yet, the implicit knowledge lacks many desirable properties of a full-scale symbolic KG, such as easy access, navigation, editing, and quality assurance. In this paper, we propose a new approach of harvesting massive KGs of *arbitrary* relations from pretrained LMs. With minimal input of a relation definition (a prompt and a few shot of example entity pairs), the approach efficiently searches in the vast entity pair space to extract diverse accurate knowledge of the desired relation. We develop an effective search-and-rescore mechanism for improved efficiency and accuracy. We deploy the approach to harvest KGs of over 400 new relations from different LMs. Extensive human and automatic evaluations show our approach manages to extract diverse accurate knowledge, including tuples of complex relations (e.g., "A is capable of but not good at B"). The resulting KGs as a symbolic interpretation of the source LMs also reveal new insights into the LMs' knowledge capacities. ## 1 Introduction Symbolic knowledge graphs (KGs) are a powerful tool for indexing rich knowledge about entities and their relationships, and are useful for information access (Google, 2012), decision making (Yang et al., 2021; Santos et al., 2022), and improving machine learning in general (Li et al., 2019; Wang et al., 2019; Tan et al., 2020; Xiong et al., 2017). ∗Equal contribution. Code available at https://github. com/tanyuqian/knowledge-harvest-from-lms. Demo available at https://lmnet.io ![0_image_0.png](0_image_0.png) It has been a long-term desire to construct KGs of diverse *relations* to comprehensively characterize the structures between entities. The traditional crowdsourcing-based approach (Speer et al., 2017; Fellbaum, 2000; Sap et al., 2019) tends to cover only a restricted relation set, such as ConceptNet (Speer et al., 2017) that contains a small set of 34 relations. The popular method based on text mining (Luan et al., 2019; Zhong and Chen, 2020; Wang et al., 2021b) has a similar limitation, as the text understanding models can often recognize only a predefined set of relations included in training data. Some open-schema text mining approaches (e.g., based on syntactic patterns) exist (Tandon et al., 2014; Romero et al., 2019; Zhang et al., 2020b; Nguyen et al., 2021), yet the extracted relations are limited to those explicitly stated in the text, missing all others that are not mentioned or do not have exact match with the text in the corpus. 
Similarly, KG completion approaches (Bordes et al., 2013; Bosselut et al., 2019; Yao et al., 2019) are restricted to the preexisting relations (Figure 1).

| Method | Module(s) | Outcome | Arbitrary relation |
|--------|-----------|---------|--------------------|
| Text mining (Zhang et al., 2020a; Nguyen et al., 2021) | NER, CR, RE, etc.1 | KG | ✗ |
| LAMA (Petroni et al., 2019), LPAQA (Jiang et al., 2020) | LMs | tail entity | ✓ |
| COMET (Bosselut et al., 2019) | Finetuned GPT-2 | tail entity | ✗ |
| Symbolic Knowledge Distillation (West et al., 2022) | GPT-3 | KG | ✓2 |
| BertNet (ours) | LMs | KG | ✓ |

Table 1: Categorization of works on automatic knowledge extraction. Compared to other categories of approaches, our method extracts full *explicit* KGs of *arbitrary new relations* from any LMs.

On the other hand, large language models (LMs) pretrained on massive text corpora, such as BERT (Devlin et al., 2019) and GPT-3 (Brown et al., 2020), have been found to encode a significant amount of knowledge implicitly in their parameters. Recent research attempted to use LMs as flexible knowledge bases by querying the LMs with arbitrary prompts (e.g., "Obama was born in ___" for the answer "Hawaii") (Petroni et al., 2019). However, such implicit query-based knowledge falls short of many desirable properties of a full-scale KG such as ConceptNet (AlKhamissi et al., 2022), including easy access, browsing, or even editing (Zhu et al., 2020; Cao et al., 2021), as well as assurance of knowledge quality thanks to the symbolic nature (Anderson et al., 2020). Symbolic Knowledge Distillation (SKD, West et al., 2022) explicitly extracts a knowledge base from GPT-3. However, the approach exclusively relies on the strong in-context learning capability of GPT-3 and thus is not applicable to other rich LMs such as BERT (Devlin et al., 2019) and ROBERTA (Liu et al., 2019). Moreover, its use of a quality discriminator trained on existing KGs can limit its generalization to new relations not included in the training data.

In this paper, we propose a new approach of harvesting massive KGs of arbitrary new relations from any pretrained LMs. Given minimal user input of a relation definition, including a prompt and a few shot of example entity pairs, our approach automatically searches within the LM to extract an extensive set of high-quality knowledge about the desired relation. To ensure search efficiency in the vast space of entity pairs, we devise an effective search-and-rescore strategy. We also adapt the previous prompt paraphrasing mechanism (Jiang et al., 2020; Newman et al., 2021) and enhance it with our new rescore strategy for prompt weighting, leading to consistent and accurate outcome knowledge. We apply our approach to a range of LMs of varying capacities, such as ROBERTA, BERT, and DISTILBERT. In particular, we harvest knowledge of over 400 new relations (an order of magnitude more than ConceptNet relations) not available in preexisting KGs and previous extraction methods. Extensive human and automatic evaluations show our approach successfully extracts diverse accurate knowledge, including tuples for complex relations such as "A is capable of, but not good at, B" and 3-ary relations such as "A can do B at C". Interestingly, the resulting KGs also serve as a symbolic interpretation of the source LMs, revealing new insights into their knowledge capacities in terms of varying factors such as model size, pretraining strategies, and distillation.
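As a concrete illustration of how small the required input is, the following is a hypothetical sketch of a relation definition, i.e., one initial prompt plus a handful of seed entity pairs; the dictionary layout, field names, and the second seed pair are purely illustrative assumptions rather than the paper's actual data format, and the prompt and first pair are the running example used later in Section 3.

```python
# A hypothetical minimal relation definition (field names are assumptions).
relation_definition = {
    "relation": "potential_risk",
    "initial_prompt": "The potential risk of A is B.",
    "seed_pairs": [
        ("eating candy", "tooth decay"),   # running example from Section 3
        ("smoking", "lung cancer"),        # illustrative additional seed pair
    ],
}
# Given such a definition, the framework paraphrases and weights the prompt,
# then searches the LM for new (head, tail) pairs that consistently fit it,
# yielding tuples of the form <head entity, potential_risk, tail entity>.
```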
## 2 Related Work Knowledge graph construction Popular knowledge bases or KGs are usually constructed with heavy human labor. For example, WordNet (Fellbaum, 2000) is a lexical database that links words into semantic relations; ConceptNet (Speer et al., 2017) is a large commonsense knowledge graph presented as a set of knowledge triples; ATOMIC (Sap et al., 2019) is a crowd-sourced social commonsense KG of if-then statements. Recently, Automatic Knowledge Base Construction (AKBC) as a research focus has led to various approaches (summarized in Table 1). Text mining-based works aim for knowledge extraction from text. A typical information extraction system (Angeli et al., 2015) is composed of several sub-tasks like coreference resolution, named entity recognition, and relationship extraction. Some works on commonsense knowledge extraction include WebChild (Tandon et al., 2014), TransOMCS (Zhang et al., 2020a), DISCOS (Fang et al., 2021), Quasimodo (Romero et al., 2019), ASCENT (Nguyen et al., 2021). These extraction pipelines are based on linguistic pattern, and involve complex engineering such as corpus selection, term aggregation, filtering, etc. Recent attempts also utilize LMs for AKBC. Wang et al. 2021a finetuned LMs for link prediction. Feldman et al. 2019; Bouraoui et al. 2020 utilized LMs to score entity pairs collected from the Internet or missing edges in existing KGs. COMET (Bosselut et al., 2019) is a generative LM trained to predict tail entities given head entities and relations. West et al. 2021 distill the knowledge in GPT-3 to a generative LM. By prompting GPT-3 (Brown et al., 2020) with examples, they produced ATOMIC10x to teach the student model. Yet, this method requires the strong few-shot learning ability of GPT-3 and is not generally applicable to most LMs. To the best of our knowledge, our framework is the first to construct a KG by extracting purely from an LM (with the minimal definition of relations as input). The new paradigm can also be seen as optimizing a symbolic KG with (pretrained) neural models as supervision (Hu and Xing, 2022), which inverts the conventional problem of using symbolic knowledge to learn neural networks (Hu et al., 2016). LMs as knowledge bases Another line of works attempted to use LMs as knowledge bases (LAMA, Petroni et al. 2019). These works are also known as factual probing because they measured how much knowledge is encoded in LMs. This is usually implemented by prompting methods and leveraging the masked LM pretraining task. LPAQA (Jiang et al., 2020) proposes to use text mining and paraphrasing to find and select prompts to optimize the prediction of a single or a few correct tail entities, instead of extensively predicting all the valid entity pairs like in our framework. AutoPrompt (Shin et al., 2020), Qin and Eisner, 2021 and OPTIPrompt (Zhong et al., 2021) learn discrete or continuous prompts automatically with an additional training set. Though making prompts unreadable, these methods achieve higher accuracy on the knowledge probing tasks. Our framework differs from these works in that we aim to explicitly harvest knowledge graphs instead of measuring the knowledge in a simplified setting. Consistency of LMs Consistency is a significant challenge for LMs, which stresses that they should not produce conflicting predictions across inference sessions. For example, models should behave invariantly under inputs with different surface forms but the same meaning. Elazar et al. 
2021 analyzed the consistency of pretrained LMs with respect to factual knowledge. Jiang et al. 2020 used paraphrasing to improve factual probing. Newman et al. 2021 trains an additional layer on top of word embedding to improve consistency. Recently, consistency is also shown helpful to improve the reasoning ability of large LMs (Wang et al., 2022; Jung et al., 2022; Hao et al., 2023). In our framework, the extracted entity pairs for each relation are enforced to consistently satisfy a diverse set of prompts and regularized by several scoring terms. ## 3 Harvesting Kgs From Lms This section presents the proposed framework for extracting a relational KG from a given pretrained LM, where the LM can be arbitrary fillin-the-blank models such as BERT (Devlin et al., 2019), ROBERTA (Liu et al., 2019), BART (Lewis et al., 2020), or GPT-3 (with appropriate instructions) (Brown et al., 2020). The KG consists of a set of knowledge tuples in the form ⟨HEAD EN-TITY (h), RELATION (r), TAIL ENTITY (t)⟩. Our approach utilizes the LM to automatically harvest a large number of appropriate entity pairs (h1, t1),(h2, t2)*, . . .*, for every given relation r. This presents a more challenging problem than traditional LM probing tasks, which typically predict a single tail entity or a small number of valid tail entities given a head entity and relation. Our approach for extracting knowledge tuples of a specific relation of interest, such as "potential_risk" as depicted in Figure 2, only requires minimal input information that defines the relation. This includes an initial prompt, such as "The potential risk of A is B" and a small number of example entity pairs, such as ⟨EATING CANDY, TOOTH DECAY⟩. The prompt provides the overall semantics of the relation, while the example entity pairs clarify possible ambiguities. For new relations not included in existing KGs, it is impractical to require a large set (e.g., hundreds) of example entity pairs as in previous knowledge probing or prompt optimization methods (Petroni et al., 2019; Jiang et al., 2020; Shi et al., 2019; Zhong et al., 2021). In contrast, our approach necessitates only a small number of example entity pairs, for example, as few as 2 in our experiments, which can easily be collected or written by users. In the following sections, we describe the core ![3_image_0.png](3_image_0.png) components of our approach, namely the automatic creation of diverse prompts with confidence weights (§3.1) and the efficient search to discover consistent entity pairs (§3.2) that compose the desired KGs. Figure 2 illustrate the overall framework. ## 3.1 Creating Diverse Weighted Prompts Our automated approach utilizes input information, specifically the initial prompt and several example entity pairs, to generate a set of semantically consistent but linguistically diverse prompts for describing the relation of interest. The generated prompts are assigned confidence weights to accurately measure consistency of knowledge in the subsequent step (§3.2). To generate diverse prompts for a desired relation, we begin by randomly selecting an entity pair from a example set and inserting it into an initial prompt to form a complete sentence. This sentence is then passed through an off-the-shelf text paraphrase model, which produces multiple paraphrased sentences with the same meaning. By removing the entity names, each paraphrased sentence results in a new prompt that describes the desired relation. 
To ensure a wide range of expressions of the relation, we retain only those prompts that are distinct from one another in terms of edit distance. This process is repeated by continuously paraphrasing the newly created prompts until a minimum of 10 prompts for the relation have been collected.

The automatic generation of prompts can be imprecise, resulting in prompts that do not accurately convey the intended relation. To mitigate this, we propose a reweighting method that utilizes compatibility scores to calibrate the impact of each prompt in the subsequent knowledge search step. Specifically, we evaluate the compatibility of new prompts with example entity pairs by measuring the likelihood of the prompts under an LM, considering both the individual entities and the entity pair as a whole. This allows us to determine the appropriate weights for each prompt and improve the precision of the knowledge search process. Formally, the compatibility score between an entity pair (*h, t*) and a prompt p can be written as:

$$f_{\rm LM}(\langle h,t\rangle,p)=\alpha\log P_{\rm LM}(h,t\mid p)+(1-\alpha)\min\left\{\log P_{\rm LM}(h\mid p),\log P_{\rm LM}(t\mid p,h)\right\},\tag{1}$$

where the first term is the joint log-likelihood under the LM distribution $P_{\rm LM}$, the second term is the minimum individual log-likelihood given the prompt (and the other entity), and α is a balancing factor (α = 2/3 in our experiments). We compute the average compatibility score of each created prompt over all example entity pairs, and the weight of the prompt is then defined as the softmax-normalized score across all prompts.

## 3.2 **Efficient Search For Consistent Knowledge**

With the set of prompts and corresponding confidence weights obtained in the steps described in Section 3.1, we proceed to search for entity pairs that consistently align with all prompts. To guide the searching process and evaluate the compatibility of searched-out entity pairs $(h^{\rm new}, t^{\rm new})$, we reuse the previously defined prompt/entity-pair compatibility function (Eq. 1), and intuitively define consistency as the weighted average of its compatibility with the various prompts, i.e.,

$$\text{consistency}(\langle h^{\rm new},t^{\rm new}\rangle)=\sum_{p}w_{p}\cdot f_{\rm LM}(\langle h^{\rm new},t^{\rm new}\rangle,p),\tag{2}$$

where $w_p$ is the prompt weight and the sum is over all automatically created prompts as above, so that entity pairs compatible with all prompts are considered to be consistent.

Based on the consistency criterion, we develop an efficient strategy to search for consistent entity pairs. A straightforward approach involves enumerating all possible pairs of entities, calculating their respective consistency scores, and selecting the top-K entity pairs with the highest scores as the resulting knowledge. However, this approach can be computationally expensive due to the large vocabulary size V (e.g., V = 50,265 for ROBERTA) and the high time complexity of the enumeration process (i.e., $O(V^2)$ even when each entity consists of only one token). To overcome this limitation, we propose an approximation that leads to a more efficient search and re-scoring method. Specifically, we first use the minimum individual log-likelihoods (i.e., the second term in the compatibility score of Eq. 1), averaged across different prompts with the prompt weights (similarly to Eq. 2), to propose a large set of candidate entity pairs.
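For concreteness, the following is a minimal Python sketch of the scoring defined in Eq. (1) and Eq. (2). It assumes the log-probabilities log P_LM(h, t | p), log P_LM(h | p), and log P_LM(t | p, h) have already been computed from the LM for each prompt filled with the candidate pair; the function and variable names are illustrative and not taken from the released implementation.

```python
import math

ALPHA = 2 / 3  # balancing factor alpha used in the paper

def compatibility(log_p_joint, log_p_head, log_p_tail, alpha=ALPHA):
    # Eq. (1): alpha * log P(h,t|p) + (1-alpha) * min(log P(h|p), log P(t|p,h))
    return alpha * log_p_joint + (1 - alpha) * min(log_p_head, log_p_tail)

def prompt_weights(avg_compat_per_prompt):
    # Softmax-normalize each prompt's average compatibility over the seed pairs.
    m = max(avg_compat_per_prompt)
    exps = [math.exp(s - m) for s in avg_compat_per_prompt]
    z = sum(exps)
    return [e / z for e in exps]

def consistency(weights, compat_per_prompt):
    # Eq. (2): weighted average of compatibility across all prompts.
    return sum(w * c for w, c in zip(weights, compat_per_prompt))

# Toy usage with made-up log-probabilities for two prompts:
weights = prompt_weights([-2.1, -3.4])
score = consistency(weights, [compatibility(-4.0, -2.5, -3.0),
                              compatibility(-5.2, -3.1, -2.8)])
```

A full pipeline would additionally batch these LM queries over all proposed candidates and handle multi-token entities before the re-ranking step described next.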
The use of the minimum individual log-likelihoods allows us to apply pruning strategies, such as maintaining a heap and eliminating entities ranked outside the top-K in every single searching step. Once we have collected a large number of proposals, we re-rank them using the full consistency score in Eq. 2 and select the top-K instances as the output knowledge. We describe more nuanced handling in the search procedure (e.g., the processing of multi-token entities, detailed pruning strategies) in the appendix.

Generalization to complex relations Most existing KGs or knowledge bases include relations that are predicates connecting two entities, e.g., "A is capable of B". However, many real-life relations are more complex. Our approach is flexible and easily extensible to extract knowledge about these complex relations. We demonstrate this in our experiments by exploring two cases: (1) *highly* customized relations that have specific and sophisticated meanings, such as "A is capable of, but not good at, B". This type of sophisticated knowledge is often difficult for humans to write down on a large scale. Our automatic approach naturally supports harvesting this kind of knowledge given only an initial prompt and a few example entities that can be collected easily, e.g., ⟨DOG, SWIM⟩, ⟨CHICKEN, FLY⟩, etc.; (2) *N-ary* relations involving more than two entities, such as "A can do B at C". Our approach can straightforwardly be extended to handle n-ary relations by generalizing the compatibility score and search strategy accordingly to accommodate more than two entities.

Symbolic interpretation of neural LMs The harvested knowledge tuples, as consistently recognized across varying prompts by the LM, can be considered as the underlying "beliefs" of the LM about the world (Stich, 1979; Hase et al., 2021). These fully symbolic and interpretable tuples provide a means for easily browsing and analyzing the knowledge capabilities of the black-box LM. For example, via these outcome KGs, one can compare different LMs to understand the performance impact of diverse configurations, such as model sizes and pretraining strategies, as demonstrated in our experiments.

| Paradigm | Method (Size) | Relation Set | #Relations | Accuracy (%) | Novelty (%) |
|----------|---------------------|--------------|------------|--------------|-------------|
| Ours | RobertaNet (122.2k) | Auto | 487 | 65.3 | - |
| | RobertaNet (2.2K) | Human | 12 | 81.8 | - |
| | RobertaNet (7.3K) | Human | 12 | 68.6 | - |
| | RobertaNet (23.6k) | Human | 12 | 58.6 | - |
| | RobertaNet (6.7K) | ConceptNet | 20 | 88.0 | 64.4 |
| | RobertaNet (24.3K) | ConceptNet | 20 | 81.6 | 68.8 |
| | RobertaNet (230K) | ConceptNet | 20 | 55.0 | 87.0 |
| KG Completion | COMET (6.7K) | ConceptNet | 20 | 92.0 | 35.5 |
| | COMET (230K) | ConceptNet | 20 | 66.6 | 72.4 |
| Text Mining | WebChild (4.6M) | - | 20 | 82.0* | - |
| | ASCENT (8.6M) | - | - | 79.2* | - |
| | TransOMCS (18.4M) | ConceptNet | 20 | 56.0* | 98.3 |

Table 2: Statistics and human-evaluated accuracy/novelty of the KGs resulting from our approach, listed together with results from the KG completion and text mining paradigms for reference (the settings differ and are not directly comparable).

## 4 Experiments

To evaluate our framework, we extract knowledge of diverse new relations from various language models and conduct human evaluation. We then make a deeper analysis of the prompt creation and scoring function in our framework. Finally, by utilizing our framework as a tool to interpret the knowledge stored in language models, we have made noteworthy observations regarding the knowledge capacity of black-box models.
## 4.1 Setup

Relations We evaluate our framework with several relation sets: (1) **ConceptNet** (Speer et al., 2017): Following Li et al. 2016, we filter the KG and use a set of 20 common relations (e.g. HAS_SUBEVENT, MOTIVATED_BY_GOAL). The initial prompts for these relations are from the ConceptNet repository, and we randomly sample 5 example entity pairs from the ConceptNet KG for each relation. (2) **LAMA** (Petroni et al., 2019): Following previous works, we use the T-REx split (41 relations from Wikipedia, such as capital_of, member_of). For each relation, the human-written prompt provided in Petroni et al. 2019 is used as the initial prompt and we randomly sample 5 example entity pairs for each relation. (3) **Human**: We write 12 new relations of interest that can hardly be found in any existing KGs, and manually write an initial prompt and 5 example entity pairs for them. The resulting relations include complex relations as described in Section 3.2. (4) **Auto**: Besides relations from existing KGs and human-written ones, we automatically derive a large set of relations from E-KAR (Chen et al., 2022), a dataset for analogical reasoning. In the original dataset, given an entity pair, e.g. ⟨ID_CARD, IDENTITY⟩, the task is to select an analogous tuple from multiple choices, e.g. ⟨PRACTICE LICENSE, QUALIFICATION⟩. To turn a sample in E-KAR into a relation, we use the tuple in the question and the correct choice as 2 example entity pairs, and extract the initial prompt from the explanation provided in E-KAR (e.g., "Proof of A requires B."), resulting in 487 relations. Some of the relations are not straightforward, making this relation set more difficult than the other ones (for reference, finetuned ROBERTA-LARGE achieves about 50% accuracy on the original dataset).

| Methods | Acc | Rej |
|----------------------|-------|-------|
| AUTOPROMPT | 0.33 | 0.47 |
| HUMAN PROMPT | 0.60 | 0.27 |
| TOP-1 PROMPT (Ours) | 0.69 | 0.23 |
| MULTI PROMPTS (Ours) | 0.73 | 0.20 |

Table 3: Human evaluation of knowledge extracted with different prompting settings on the Human relations (portion of accepted and rejected tuples; see Section 4.3).

## 4.2 Extracting Knowledge Of Diverse New Relations

Our framework is applied to extract knowledge graphs from LMs with relations of ConceptNet, Auto, and Human. The accuracy of the extracted knowledge is then evaluated with human annotation using Amazon Mechanical Turk (MTurk). Each extracted knowledge tuple is labeled for correctness by three annotators using a True/False/Unjudgeable judge. A tuple is considered "accepted" if at least two annotators deem it to be true knowledge, and "rejected" if at least two annotators rate it as false. Here we refer to the portion of accepted tuples as accuracy. The statistics of our resulting KGs are listed in Table 2. We also list the results of other paradigms of methods, including COMET for KG completion and text mining-based methods (Figure 1). Note that the results across different paradigms are generally not directly comparable due to vastly different settings. Yet we still collect the results together for reference purposes. From our RobertaNet with the "Auto" relation set, we are able to extract a reasonably large set of knowledge tuples (122K) using the 487 easy-to-collect "Auto" relations. This set of relations is an order of magnitude larger than the predefined relation sets used by both KG completion and text mining based on ConceptNet, as shown in the table.
The accuracy of 65% is at a comparable level with that of COMET (230K) and TransOMCS (18.4M), which is reasonable especially considering our method solely uses an LM as the source of knowledge without any external training data, bringing the flexibility to dynamically incorporate new relations. Besides, for our RobertaNet on ConceptNet relations, although the numbers listed in the table are not directly comparable, we can still find that RobertaNet achieves similar accuracy and substantially higher novelty compared with the knowledge from COMET, which is finetuned on a large number of knowledge tuples under the same set of ConceptNet relations. Further, our results on the "Human" relation set demonstrate that our RobertaNet keeps working comfortably on highly realistic relations of user interest, including the complex ones described in §3.2. We showcase knowledge samples harvested from DISTILBERT in Figure 3.

| Source LMs | Acc | Rej |
|---------------|-------|-------|
| DISTILBERT | 0.67 | 0.24 |
| BERT-BASE | 0.63 | 0.26 |
| BERT-LARGE | 0.70 | 0.22 |
| ROBERTA-BASE | 0.70 | 0.22 |
| ROBERTA-LARGE | 0.73 | 0.20 |

Table 4: Human evaluation of knowledge harvested from different source LMs (portion of accepted and rejected tuples; see Section 4.4).

## 4.3 Analyzing Automatic Prompt Creation

To evaluate the effect of our automatic creation of prompts, we compare the generated KGs under several settings on the Human relations: (1) **Multi-Prompts** refers to the full framework described in §3, which uses the automatically created diverse prompts in knowledge search. (2) **Top-1 Prompt**: To ablate the effect of ensembling multiple prompts, we evaluate the variant that uses only the prompt with the largest weight (§3.1) for knowledge extraction. (3) **Human Prompt**: To further understand the effectiveness of the automatically created prompts, we assess the variant that uses the initial prompt of each relation. (4) **AutoPrompt** (Shin et al., 2020), which was proposed to learn prompts by optimizing the likelihood of tail entity prediction on the training set. To fit it in our setting, we adapt it to optimize the compatibility score (Eq. 1) on the example entity pairs. We omit other prompt tuning work (e.g., Zhong et al., 2021; Qin and Eisner, 2021) because it is either difficult to fit into our problem or requires more training data and fails with only the few example entity pairs available in our setting.

We harvest 1000 tuples for each Human relation, and evaluate them with human annotation. The annotation results are presented in Table 3 (we also list the detailed per-relation results in Table 5 for reference). Our TOP-1 PROMPT significantly improves accuracy, by up to 9% over the HUMAN PROMPT, demonstrating the effectiveness of our prompt searching algorithm in generating high-quality prompts. MULTI-PROMPTS further improves the accuracy by an additional 4%, indicating that the combination of diverse prompts better captures the semantics of a relation. However, the method utilizing the optimized prompt by AUTOPROMPT results in lower accuracy than the use of human or searched prompts. This can be attributed to the insufficient number of example knowledge tuples used to learn effective prompts for the desired relations.

![7_image_0.png](7_image_0.png)

Based on the results above, we move a step forward to see how the created prompts influence the subsequent scoring module in the framework.
Specifically, we study both the precision and recall of our scoring function parameterized by the prompts, to see if the automatically created prompts (§3.1) bring the consistency scoring (§3.2) a better balance of knowledge accuracy (precision) and coverage (recall). To compare with other scoring methods that are restricted to specific sets of relations, this experiment was conducted using existing terms from both the ConceptNet and LAMA datasets. Specifically, we use the knowledge tuples from ConceptNet and LAMA as positive samples (§4.1), and synthesize the same amount of negative samples with the same strategy as in Li et al. (2016), by randomly replacing entities or relations in a true knowledge tuple. Each scoring function ranks the samples based on the scores from high to low. We can then compute both the *precision* and *recall* of positive samples at different cut-off points along the ranking, and plot the precision-recall curves for each method.

The automatic evaluation setting on given knowledge terms enables us to adapt existing prevalent works, e.g., KG completion and factual probing (Table 1), for comparison with our approach: (1) COMET (Bosselut et al., 2019) is a transformer-based KG completion model trained to predict the tail entity t conditioned on the head entity and relation (*h, r*) on ConceptNet. We use its log-likelihood log P(t|*h, r*) as the score for each given knowledge tuple. (2) **LPAQA** (Jiang et al., 2020) collects a set of prompts on LAMA with text mining and paraphrasing, and optimizes their weights towards the objective of log P(t|*h, r*) on training samples.

The resulting precision-recall curves on ConceptNet and LAMA knowledge are shown in Figure 4 and Figure 5, respectively. ![7_image_1.png](7_image_1.png) Scoring with multiple prompts always achieves the best performance, followed by Top-1 prompts and then Human-written prompts. This finding is consistent with previous experiments, which verified the effectiveness of our scoring function design. Our framework also outperforms the other baselines, such as COMET on ConceptNet and LPAQA on LAMA. Though trained with labeled data, these methods are only optimized for completing a tail entity given a query, instead of scoring an entity pair, which is essential for extracting KGs from LMs.

## 4.4 Analysis Of Knowledge In Different LMs

As previously mentioned in §3, the resulting knowledge graphs can be viewed as a symbolic interpretation of LMs. We extract knowledge graphs from 5 distinct language models and submit them to human annotation evaluation. The findings are presented in Table 4 (the detailed per-relation results are listed in Table 5), which sheds some new light on several knowledge-related questions regarding the LMs' knowledge capacity.

Does a larger LM encode better knowledge? The large versions of BERT and RoBERTa have the same pretraining corpus and tasks as their base versions, but larger model architectures in terms of layers (24 vs. 12), attention heads (16 vs. 12), and the number of parameters (340M vs. 110M). We can see that the accuracies of BertNet-large and RoBERTaNet-large are around 7% and 3% higher than their base versions, respectively, indicating the larger models indeed encoded better knowledge than the base models.

Does better pretraining bring better knowledge? RoBERTa uses the same architecture as BERT but with better pretraining strategies, like dynamic masking, larger batch size, etc.
In their corresponding KGs from our framework, RoBERTaNet-large performs better than BertNet-large (0.73 vs. 0.70), and RoBERTaNet-base is also better than BertNet-base (0.70 vs. 0.63), showing that the better pretraining in RoBERTa leads to better knowledge learning and storage.

Is knowledge really kept in the knowledge distillation process? DistilBERT is trained by distilling BERT-base and has 40% fewer parameters than the latter. Interestingly, knowledge distillation instead improves accuracy in the resulting knowledge graph by around 4%. This may be attributed to the distillation process eliminating some noisy information from the teacher model.

## 5 Conclusion

We have developed an automatic framework that extracts a KG from a pretrained LM (e.g., BERT, ROBERTA) in an efficient and scalable way, resulting in a family of new KGs, which we refer to as BERTNET, ROBERTANET, etc. Our framework is capable of extracting knowledge of arbitrary new relation types and entities, without being restricted by pre-existing knowledge or corpora. The resulting KGs also serve as an interpretation of the source LMs.

## Limitations

Our current design and experimental studies are limited to LMs in the generic domain, and have not yet been studied in specific domains such as extracting healthcare knowledge from relevant neural models. We leave the exciting work of harvesting knowledge from various kinds of neural networks across applications and domains to future work.

## Ethical Considerations

In this work, the harvested knowledge is automatically generated by LMs. We would like to note that the language models could possibly generate unethical knowledge tuples, with risks similar to those of other applications using language models for generation. We hope that the knowledge extraction study can offer techniques to better interpret and understand language models, and in turn foster future research on language model ethics. Since the knowledge graph only consists of simple phrases, we think filtering sensitive words would be effective. No foreseeable negative societal impacts are caused by the method itself.

## References

Badr AlKhamissi, Millicent Li, Asli Celikyilmaz, Mona Diab, and Marjan Ghazvininejad. 2022. A review on language models as knowledge bases. *arXiv preprint arXiv:2204.06031*.

Greg Anderson, Abhinav Verma, Isil Dillig, and Swarat Chaudhuri. 2020. Neurosymbolic reinforcement learning with formally verified exploration. *Advances in neural information processing systems*, 33:6172–6183.

Gabor Angeli, Melvin Johnson, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In *ACL*.

Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. *Advances in neural information processing systems*, 26.

Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Çelikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for knowledge graph construction. In *ACL*.

Zied Bouraoui, José Camacho-Collados, and Steven Schockaert. 2020. Inducing relational knowledge from BERT. In *AAAI*.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M.
Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. *ArXiv*, abs/2005.14165. Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Editing factual knowledge in language models. Jiangjie Chen, Rui Xu, Ziquan Fu, Wei Shi, Zhongqiao Li, Xinbo Zhang, Changzhi Sun, Lei Li, Yanghua Xiao, and Hao Zhou. 2022. E-kar: A benchmark for rationalizing natural language analogical reasoning. arXiv preprint arXiv:2203.08480. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT (1)*. Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. 2021. Measuring and improving consistency in pretrained language models. *Transactions of the Association for Computational Linguistics*, 9:1012–1031. Tianqing Fang, Hongming Zhang, Weiqi Wang, Yangqiu Song, and Bin He. 2021. Discos: Bridging the gap between discourse knowledge and commonsense knowledge. In *Proceedings of the Web* Conference 2021, pages 2648–2659. Joshua Feldman, Joe Davison, and Alexander M. Rush. 2019. Commonsense knowledge mining from pretrained models. In *EMNLP*. Christiane D. Fellbaum. 2000. Wordnet : an electronic lexical database. *Language*, 76:706. Google. 2012. Introducing the knowledge graph: things, not strings. Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. 2023. Reasoning with language model is planning with world model. Peter Hase, Mona T. Diab, Asli Çelikyilmaz, Xian Li, Zornitsa Kozareva, Veselin Stoyanov, Mohit Bansal, and Srini Iyer. 2021. Do language models have beliefs? methods for detecting, updating, and visualizing model beliefs. *ArXiv*, abs/2111.13654. Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard H Hovy, and Eric P Xing. 2016. Harnessing deep neural networks with logic rules. In *ACL (1)*. Zhiting Hu and Eric P. Xing. 2022. Toward a 'Standard Model' of Machine Learning. *Harvard Data Science Review*, 4(4). Https://hdsr.mitpress.mit.edu/pub/zkib7xth. Zhengbao Jiang, Frank F. Xu, J. Araki, and Graham Neubig. 2020. How can we know what language models know? *TACL*. Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, and Yejin Choi. 2022. Maieutic prompting: Logically consistent reasoning with recursive explanations. arXiv preprint arXiv:2205.11822. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL. Christy Y. Li, Xiaodan Liang, Zhiting Hu, and Eric P. Xing. 2019. Knowledge-driven encode, retrieve, paraphrase for medical image report generation. In AAAI. Xiang Li, Aynaz Taheri, Lifu Tu, and Kevin Gimpel. 2016. Commonsense knowledge base completion. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1445–1455, Berlin, Germany. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *ArXiv*, abs/1907.11692. 
Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. *arXiv preprint arXiv:1904.03296*. Benjamin Newman, Prafulla Kumar Choubey, and Nazneen Rajani. 2021. P-adapters: Robustly extracting factual information from language models with diverse prompts. *ArXiv*, abs/2110.07280. Tuan-Phong Nguyen, Simon Razniewski, Julien Romero, and Gerhard Weikum. 2021. Refined commonsense knowledge from large-scale web contents. arXiv preprint arXiv:2112.04596. Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? *EMNLP*. Guanghui Qin and Jas' Eisner. 2021. Learning how to ask: Querying lms with mixtures of soft prompts. In NAACL. Julien Romero, Simon Razniewski, Koninika Pal, Jeff Z. Pan, Archit Sakhadeo, and Gerhard Weikum. 2019. Commonsense properties from query logs and question answering forums. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 1411–1420. Alberto Santos, Ana R Colaço, Annelaura B Nielsen, Lili Niu, Maximilian Strauss, Philipp E Geyer, Fabian Coscia, Nicolai J Wewer Albrechtsen, Filip Mundt, Lars Juhl Jensen, et al. 2022. A knowledge graph to interpret clinical proteomics data. Nature Biotechnology, 40(5):692–702. Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019. Atomic: An atlas of machine commonsense for ifthen reasoning. *ArXiv*, abs/1811.00146. Shaoyun Shi, Hanxiong Chen, Min Zhang, and Yongfeng Zhang. 2019. Neural logic networks. ArXiv, abs/1910.08629. Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. 2020. Eliciting knowledge from language models using automatically generated prompts. *EMNLP*. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In *AAAI*. Stephen P Stich. 1979. Do animals have beliefs? *Australasian Journal of Philosophy*, 57(1):15–28. Bowen Tan, Lianhui Qin, Eric Xing, and Zhiting Hu. 2020. Summarizing text on any aspects: A knowledge-informed weakly-supervised approach. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP), pages 6301–6309. Niket Tandon, Gerard De Melo, Fabian Suchanek, and Gerhard Weikum. 2014. Webchild: Harvesting and organizing commonsense knowledge from the web. In *Proceedings of the 7th ACM international conference on Web search and data mining*, pages 523–532. Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, and Yi Chang. 2021a. Structure-augmented text representation learning for efficient knowledge graph completion. *Proceedings of the Web Conference 2021*. Hongwei Wang, Fuzheng Zhang, Miao Zhao, Wenjie Li, Xing Xie, and Minyi Guo. 2019. Multi-task feature learning for knowledge graph enhanced recommendation. *The World Wide Web Conference*. Liming Wang, Siyuan Feng, Mark Hasegawa-Johnson, and Chang Yoo. 2022. Self-supervised semanticdriven phoneme discovery for zero-resource speech recognition. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 8027–8047, Dublin, Ireland. Association for Computational Linguistics. Qingyun Wang, Manling Li, Xuan Wang, Nikolaus Nova Parulian, Guangxing Han, Jiawei Ma, Jingxuan Tu, Ying Lin, H. 
Zhang, Weili Liu, Aabhas Chauhan, Yingjun Guan, Bangzheng Li, Ruisong Li, Xiangchen Song, Heng Ji, Jiawei Han, Shih-Fu Chang, James Pustejovsky, David Liem, Ahmed Elsayed, Martha Palmer, Jasmine Rah, Cynthia Schneider, and Boyan A. Onyshkevych. 2021b. Covid-19 literature knowledge graph construction and drug repurposing report generation. In *NAACL*. Peter West, Chandra Bhagavatula, Jack Hessel, Jena Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, and Yejin Choi. 2022. Symbolic knowledge distillation: from general language models to commonsense models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4602–4625, Seattle, United States. Association for Computational Linguistics. Peter West, Chandra Bhagavatula, Jack Hessel, Jena D Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, and Yejin Choi. 2021. Symbolic knowledge distillation: from general language models to commonsense models. *arXiv preprint* arXiv:2110.07178. Chenyan Xiong, Russell Power, and Jamie Callan. 2017. Explicit semantic ranking for academic search via knowledge graph embedding. *Proceedings of the* 26th International Conference on World Wide Web. Yunrong Yang, Zhidong Cao, Pengfei Zhao, Dajun Daniel Zeng, Qingpeng Zhang, and Yin Luo. 2021. Constructing public health evidence knowledge graph for decision-making support from COVID-19 literature of modelling study. *Journal* of Safety Science and Resilience, 2(3):146–156. Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Kgbert: Bert for knowledge graph completion. *ArXiv*, abs/1909.03193. Hongming Zhang, Daniel Khashabi, Yangqiu Song, and Dan Roth. 2020a. Transomcs: From linguistic graphs to commonsense knowledge. In *IJCAI*. Hongming Zhang, Xin Liu, Haojie Pan, Yangqiu Song, and Cane Wing-Ki Leung. 2020b. Aser: A largescale eventuality knowledge graph. In *Proceedings* of the web conference 2020, pages 201–211. Zexuan Zhong and Danqi Chen. 2020. A frustratingly easy approach for entity and relation extraction. arXiv preprint arXiv:2010.12812. Zexuan Zhong, Dan Friedman, and Danqi Chen. 2021. Factual probing is [mask]: Learning vs. learning to recall. *NAACL*. Chen Zhu, Ankit Singh Rawat, Manzil Zaheer, Srinadh Bhojanapalli, Daliang Li, Felix X. Yu, and Sanjiv Kumar. 2020. Modifying memories in transformer models. *ArXiv*, abs/2012.00363. ## A Detailed Results Of Harvested Knowledge In Table 3 and Table 4, we show the humanannotated results of harvested knowledge in different settings. Here we list the detailed results per relation in Table 5. ## B Preprocessing Of Conceptnet We filter out some linguistic relations (e.g. etymologically derived from) and some trivial relations (e.g. related to). We only consider the tuples with confidence higher than 1, and filter out relations comprising less than 1000 eligible tuples. We don't directly take the test set from (Li et al., 2016) because they reserve a lot of tuples for training, resulting in a small and unbalanced test set. ## C Efficient Knowledge Tuple Search In the candidate entity pairs proposal step, we use the minimum token log-likelihoods (shorted as MTL) instead of the full Equation 2, which allows us to apply a pruning strategy. The pseudo-code is shown in Algorithm 1. For simplicity of the pseudocode, we only include the case where each entity is composed of a single token. Appendix ?? illustrates the processing of multi-token entities. 
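To complement the description above, here is a small runnable Python rendering of the pruned depth-first search whose pseudo-code follows in Algorithm 1. The tiny vocabulary, the toy log-likelihood table standing in for the LM, the per-token pruning (rather than aborting the whole loop) and the guard that only prunes once the heap is full are illustrative assumptions, not the authors' implementation, which scores candidates with the actual LM over weighted prompts.

```python
# Illustrative rendering of the pruned depth-first search in Algorithm 1,
# with a toy log-likelihood table standing in for the LM; entities are single
# tokens from a tiny vocabulary.
import heapq

VOCAB = ["paris", "france", "tokyo", "japan", "berlin"]
N_R = 2          # entities per tuple for this relation
MAX_TUPLES = 3   # N: number of candidate tuples to keep

def toy_log_likelihood(token, partial_tuple):
    """Stand-in for log p_LM(token | partial tuple, prompts of relation r)."""
    preferred = {(): {"paris": -0.1, "tokyo": -0.2},
                 ("paris",): {"france": -0.1},
                 ("tokyo",): {"japan": -0.15}}
    return preferred.get(tuple(partial_tuple), {}).get(token, -5.0)

heap = []  # min-heap of (MTL, tuple); heap[0][0] is the pruning threshold

def dfs(cur_tuple, cur_mtl):
    if len(cur_tuple) == N_R:                       # a complete entity tuple
        heapq.heappush(heap, (cur_mtl, tuple(cur_tuple)))
        if len(heap) > MAX_TUPLES:
            heapq.heappop(heap)                     # drop the worst kept tuple
        return
    for token in VOCAB:
        mtl = min(cur_mtl, toy_log_likelihood(token, cur_tuple))
        # Prune: this branch can never beat the current N-th best MTL.
        if cur_tuple and len(heap) == MAX_TUPLES and mtl < heap[0][0]:
            continue
        dfs(cur_tuple + [token], mtl)

dfs([], 0.0)
print(sorted(heap, reverse=True))   # best candidate tuples with their MTL
```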
It is worth noting that our algorithm is an exact search algorithm rather than an approximate algorithm like beam search, which prevents the results from being biased towards more probable head entities. As a running example, when we are searching for 100 entity tuples, we maintain a minimum heap to keep track of the MTL of the entity tuples. The maximum size of this heap is 100, and the heap top can be used as a threshold for the subsequent search because it is the 100-th largest MTL: when we are searching for a new entity tuple, once we find that the log-likelihood at any time step is lower than the threshold, we can prune the search immediately, because the MTL of this tuple can never surpass that of the existing tuples in the heap. If a new entity tuple is searched out without being pruned, we pop the heap and push the MTL of the new tuple. Intuitively, the pruning process makes sure that the generated part of the tuple is reasonable for the given prompt.

    Algorithm 1  Efficient Entity Tuple Search

    Input:  LM: a language model; n_r: the number of entities in a tuple of relation r;
            N: the maximum number of candidate tuples; P_r: the set of prompts describing relation r
    Output: tuple_list: a list of N entity tuples

    heap <- MinHeap()
    function DFS(cur_tuple, cur_MTL)
        idx <- Count(cur_tuple)
        if idx = n_r then                        ▷ a complete tuple has been generated
            heap.push((cur_tuple, cur_MTL))
            if len(heap) > N then
                heap.pop()
            end if
            return
        end if
        for v in Vocab(LM) do
            cur_L <- log p_LM(v | cur_tuple, P_r)
            cur_MTL <- min(cur_L, cur_MTL)
            if Count(cur_tuple) > 0 and cur_MTL < heap.top() then
                return                           ▷ Pruning
            end if
            cur_tuple.append(v)
            DFS(cur_tuple, cur_MTL)
        end for
    end function
    DFS(EmptyList(), 0)
    tuple_list <- list(heap)

## D Detailed Experiment Setting

We use GPT-3 with the instruction "paraphrase:sentence" and a few examples as the off-the-shelf paraphraser. In entity pair searching, we restrict every entity to appear no more than 10 times to improve the diversity of generated knowledge, and search out at most 50,000 entity tuples for each relation. We finally use various score thresholds to obtain the resulting KGs at different scales, including (1) 50%: keeping, for each relation, the half of all searched-out entity pairs with higher consistency; (2) base-k: naturally, there are different numbers of valid tuples for different relations (e.g. tuples of ⟨ . . . , CAPITAL_OF, . . .⟩ should not exceed 200, as that is the number of all the countries in the world). We design a relation-specific thresholding method, that is, setting 10% of the k-th highest consistency as the threshold (i.e., 0.1 × consistency_k) and retaining all tuples with consistency above the threshold. We name the settings base-10 and base-100 when k is 10 and 100, respectively.
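As a small illustration of the two truncation rules just described, the sketch below filters a list of scored tuples under the 50% and base-k settings; the tuples and consistency scores are toy placeholders, not values from the paper.

```python
# Minimal sketch of the two KG truncation rules: "50%" keeps the more
# consistent half of a relation's tuples, while "base-k" keeps tuples whose
# consistency exceeds 10% of the k-th highest consistency score.
def truncate_half(scored_tuples):
    """scored_tuples: list of (tuple, consistency) pairs for one relation."""
    ranked = sorted(scored_tuples, key=lambda pair: pair[1], reverse=True)
    return ranked[: len(ranked) // 2]

def truncate_base_k(scored_tuples, k=10):
    ranked = sorted(scored_tuples, key=lambda pair: pair[1], reverse=True)
    if len(ranked) < k:
        return ranked
    threshold = 0.1 * ranked[k - 1][1]   # 10% of the k-th highest consistency
    return [pair for pair in ranked if pair[1] > threshold]

# Toy usage with made-up consistency scores.
scored = [(("paris", "capital_of", "france"), 0.92),
          (("lyon", "capital_of", "france"), 0.31),
          (("paris", "capital_of", "japan"), 0.05)]
print(truncate_half(scored))
print(truncate_base_k(scored, k=2))
```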
We list the truncation method applied to each variant of ROBERTANET listed in Table 2: - RobertaNet (122.2k) - Auto: base-10 - RobertaNet (6.7K) - ConceptNet: base-10 - RobertaNet (24.3K) ConceptNet: base-100 - RobertaNet (230K) ConceptNet: 50% | Model | Ro-l | Ro-l | Ro-l | Ro-l | DB | B-b | B-l | Ro-b | |------------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------| | Prompt | Human | Auto | Top-1 | Multi | Multi | Multi | Multi | Multi | | BUSINESS | 0.60/0.32 | 0.76/0.13 | 0.75/0.16 | 0.88/0.07 | 0.54/0.27 | 0.64/0.23 | 0.76/0.13 | 0.74/0.19 | | HELP | 0.77/0.12 | 0.52/0.34 | 0.92/0.03 | 0.87/0.05 | 0.91/0.04 | 0.81/0.04 | 0.88/0.06 | 0.88/0.06 | | INGREDIENT FOR | 0.59/0.33 | 0.33/0.59 | 0.73/0.20 | 0.71/0.24 | 0.70/0.26 | 0.55/0.40 | 0.72/0.23 | 0.51/0.40 | | PLACE FOR | 0.76/0.10 | 0.41/0.36 | 0.63/0.32 | 0.89/0.07 | 0.84/0.14 | 0.78/0.18 | 0.87/0.11 | 0.88/0.09 | | PREVENT | 0.42/0.42 | 0.18/0.67 | 0.60/0.25 | 0.40/0.45 | 0.60/0.32 | 0.44/0.39 | 0.62/0.25 | 0.68/0.25 | | SOURCE OF | 0.76/0.17 | 0.21/0.67 | 0.52/0.44 | 0.60/0.33 | 0.63/0.36 | 0.65/0.32 | 0.75/0.24 | 0.55/0.37 | | SEPARATED BY THE OCEAN | 0.48/0.38 | 0.16/0.48 | 0.56/0.35 | 0.55/0.40 | 0.51/0.24 | 0.57/0.26 | 0.44/0.46 | 0.44/0.49 | | ANTONYM | 0.50/0.41 | 0.10/0.83 | 0.50/0.48 | 0.55/0.44 | 0.38/0.56 | 0.41/0.56 | 0.52/0.42 | 0.75/0.22 | | FEATURED THING | 0.85/0.12 | 0.38/0.40 | 0.88/0.06 | 0.89/0.10 | 0.37/0.44 | 0.44/0.40 | 0.46/0.44 | 0.65/0.20 | | NEED A TO DO B | 0.71/0.18 | 0.62/0.21 | 0.66/0.22 | 0.79/0.10 | 0.83/0.12 | 0.62/0.25 | 0.65/0.18 | 0.72/0.17 | | CAN BUT NOT GOOD AT | 0.52/0.34 | 0.29/0.42 | 0.61/0.19 | 0.44/0.21 | 0.51/0.31 | 0.60/0.21 | 0.64/0.22 | 0.39/0.35 | | WORTH CELEBRATING | 0.47/0.29 | 0.23/0.51 | 0.81/0.05 | 0.85/0.08 | 0.79/0.12 | 0.74/0.14 | 0.84/0.10 | 0.83/0.10 | | POTENTIAL RISK | 0.40/0.23 | 0.31/0.45 | 0.70/0.21 | 0.76/0.19 | 0.87/0.05 | 0.66/0.22 | 0.72/0.16 | 0.79/0.08 | | A DO B AT | 0.56/0.33 | 0.14/0.55 | 0.79/0.14 | 0.97/0.03 | 0.93/0.07 | 0.93/0.05 | 0.94/0.06 | 0.94/0.06 | | AVERAGE | 0.60/0.27 | 0.33/0.47 | 0.69/0.22 | 0.73/0.20 | 0.67/0.24 | 0.63/0.26 | 0.70/0.22 | 0.70/0.22 | Table 5: Detailed result of human evaluation. The numbers indicate the portions of accepted and rejected tuples. Ro-l, DB, B-b, B-l, Ro-b are short for Roberta-large, DistilBert, Bert-large, Bert-base, Roberta-base. Human, Auto, Top-1, and Multi stand for methods that use Human Prompt, Autoprompt, Top-1 Prompt (Ours), and Multi Prompts (Ours). ![12_image_0.png](12_image_0.png) - RobertaNet (2.2K) Human: base-10 - RobertaNet (7.3K) Human: base-100 - RobertaNet (23.6k) Human: 50% ## E Human Evaluation We present the screenshot of the instruction in Figure 7 and question in Figure 8. The inter-annotator agreement (Krippendorff's Alpha) is 0.27, showing fair agreement. ## F Compute Resource All of our experiments are running on a single Nvidia GTX1080Ti GPU. Harvesting a knowledge graph of one relation with Roberta-large takes about one hour. ## G The License Of The Assets All the data we used in this paper, including datasets, relation definitions, seed entity pairs, etc., are officially public resources. ## H Potential Risks We identify that our system is minimal in risks. Our proposed system produce results only based on the source language models like BERT. The risks of language models are well studied and our methods do not perpetuate or add to the known risks. 
However, we acknowledge the methods could be applied to maliciously trained language models and discourage such uses.

Instructions shown to annotators (see Figure 7): You will be given a statement in the form of "A, relation, B", and you are expected to decide whether this statement is True/False/Unjudgeable. Examples:
- Tokyo, capital of, Japan -> True
- human, capable of, fly -> False
- Gandhi, representative figure of, India -> True (when it comes to a specific person/location etc., please google it if you are not sure about the statement)
- also see, representative figure, fits well -> False (sometimes the statement can be weird; just choose "False" for these)
- Bob, good at, football -> Unjudgeable (because we don't know who "Bob" is here)
- phone, at location, bag -> True (when the statement is general, it is true as long as it makes sense)
- phone, at location, fridge -> False (this doesn't usually happen)

Small grammatical errors should be ignored (e.g. a human name is not capitalized, a noun is in the plural form). If the relation name is not informative enough, annotators may check the definition and examples of the relation by clicking a nearby button. There are a total of 3 questions in one HIT, with a recommended time of approximately 20 seconds per question, and about 10 relations in total. If a statement requires specific knowledge the annotator does not have, they are asked to look it up before making a choice.

Figure 7: The instruction to annotators.

Figure 8: The annotation question template, in which each statement to evaluate ("Evaluate the statement: A, relation, B") and example tuples of the relation are filled into placeholders.

## ACL 2023 Responsible NLP Checklist

## A For Every Submission:

✓ A1. Did you describe the limitations of your work? Limitations section

✓ A2. Did you discuss any potential risks of your work? Appendix F

✓ A3. Do the abstract and introduction summarize the paper's main claims? Yes. Abstract and section 1

✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4

✓ B1. Did you cite the creators of artifacts you used? Section 4

✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 4.1

✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 4.1

B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank.
✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 4.1 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4.1 ## C ✓ **Did You Run Computational Experiments?** 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4 ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 4.2.2 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? appendix E ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? appendix E ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? appendix E ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? 4.2 D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
tseriotou-etal-2023-sequential
Sequential Path Signature Networks for Personalised Longitudinal Language Modeling
https://aclanthology.org/2023.findings-acl.310
Longitudinal user modeling can provide a strong signal for various downstream tasks. Despite the rapid progress in representation learning, dynamic aspects of modelling individuals' language have only been sparsely addressed. We present a novel extension of neural sequential models using the notion of path signatures from rough path theory, which constitute graduated summaries of continuous paths and have the ability to capture non-linearities in trajectories. By combining path signatures of users' history with contextual neural representations and recursive neural networks we can produce compact time-sensitive user representations. Given the magnitude of mental health conditions with symptoms manifesting in language, we show the applicability of our approach on the task of identifying changes in individuals' mood by analysing their online textual content. By directly integrating signature transforms of users' history in the model architecture we jointly address the two most important aspects of the task, namely sequentiality and temporality. Our approach achieves state-of-the-art performance on macro-average F1 score on the two available datasets for the task, outperforming or performing on-par with state-of-the-art models utilising only historical posts and even outperforming prior models which also have access to future posts of users.
# Sequential Path Signature Networks For Personalised Longitudinal Language Modeling Talia Tseriotou1, Adam Tsakalidis1,2, Peter Foster2, Terry Lyons2,3**, Maria Liakata**1,2,4 1Queen Mary University of London, 2The Alan Turing Institute, 3University of Oxford, 4University of Warwick {t.tseriotou;a.tsakalidis;m.liakata}@qmul.ac.uk ## Abstract Longitudinal user modeling can provide a strong signal for various downstream tasks. Despite the rapid progress in representation learning, dynamic aspects of modelling individuals' language have only been sparsely addressed. We present a novel extension of neural sequential models using the notion of path signatures from rough path theory, which constitute graduated summaries of continuous paths and have the ability to capture non-linearities in trajectories. By combining path signatures of users' history with contextual neural representations and recursive neural networks we can produce compact time-sensitive user representations. Given the magnitude of mental health conditions with symptoms manifesting in language, we show the applicability of our approach on the task of identifying changes in individuals' mood by analysing their online textual content. By directly integrating signature transforms of users' history in the model architecture we jointly address the two most important aspects of the task, namely sequentiality and temporality. Our approach1achieves state-of-the-art performance on macro-average F1 score on the two available datasets for the task, outperforming or performing on-par with state-of-the-art models utilising only historical posts and even outperforming prior models which also have access to future posts of users. ## 1 Introduction Representation learning has become a critical tool in Natural Language Processing (NLP) applications, especially for user-specific tasks (Pan and Ding, 2019). Despite its importance there is limited work on low-dimensional static user representations (Amir et al., 2016; Song and Lee, 2017; Amir et al., 2017) or more importantly on dynamic user representations (Liang et al., 2018; Cao et al., 1https://github.com/Maria-Liakata-NLP-Group/ seq-sig-net 2019; Sawhney et al., 2021). Dynamically representing users through their textual data can be of paramount importance especially for addressing user-specific changes in their language over time, potentially indicative of underlying mental health conditions. Current research on temporal user representations for mental health applications (Sinha et al., 2019; Sawhney et al., 2020, 2021; Tsakalidis et al., 2022b) highlights the importance of sequential modeling but either relies heavily on emotion and network based features (which limit the generalisability of the representations) or models a user's entire available content as a whole, limiting its use to off-line rather than real-time applications. To address these, we propose an architecture that combines sequential modelling with path signatures (Chevyrev and Kormilitzin, 2016). Path signatures provide a pathwise definition to the solution of differential equations driven by rough signals and therefore a non-parametric way for sequential encoding. They are graduated summaries of continuous paths and have the ability to capture nonlinearities in trajectories. 
They have been proven effective in compressing sequential/temporal content (Fermanian, 2021) for a range of applications including Chinese character recognition (Yang et al., 2016; Xie et al., 2017), medical information extraction (Biyong et al., 2020) and emotion recognition through audio streams (Wang et al., 2019). We combine signature paths with contextual representations from a pre-trained BERT (Devlin et al., 2018) and recurrent neural networks to obtain a novel sequential, temporally sensitive architecture. We apply this to a longitudinal task in mental health, that of identifying Moments of Change (MoC) in individuals' mood (Tsakalidis et al., 2022b). We make the following contributions: - We propose the first architecture to combine path signatures with neural networks for Longitudinal Language Modeling, addressing temporality and sequentiality within the model (§3.5). - Our model provides compact and efficient dynamic user representations by combining path signatures with LSTMs to represent a user's history, capturing both long- and short-term dependencies in user's historical linguistic content. By operating only on historical data, our model's output representations are generalisable to longitudinal user tasks in real-time. - We show state-of-the-art performance in one dataset for the task of MoC prediction and outperform or perform on-par with all competing models for both datasets that use historical user data only. We perform very competitively against those that utilise additional future user data (§5.1). ## 2 Related Work Temporal Representations. Recent work has focused on expanding static representations in order to construct temporal user embeddings through user activity data (Pavlovski et al., 2020; Hansen et al., 2020; Zhang et al., 2020). Despite the importance of longitudinal online linguistic content, little work addresses dynamic temporally-sensitive user representations. For the task of semantic change detection, temporally sensitive word representations are obtained either over discrete time bins (Hamilton et al., 2016; Tsakalidis and Liakata, 2020) or jointly over time (Frermann and Lapata, 2016; Yao et al., 2018; Rudolph and Blei, 2018; Bamler and Mandt, 2017). Such work addresses the change in words over long periods rather than changes in users, which may cover much shorter spans. Liang et al. (2018) tackled the problem of temporal user representations through the extension of dynamic word representations (Bamler and Mandt, 2017), through joint word and user temporal modeling in a probabilistic fashion, adopting a skip-gram model. This work precedes the advent of powerful pretrained language models (PLMs). Dynamic topic models have been employed in social media for modeling the evolution of emotions and topics in subject-specific reviews and news corpora (He et al., 2014; Zhu et al., 2016). Although such work forms a strong foundation for dynamic representation modeling, temporal individual linguistic content spans across multiple unique topics unlike reviews and news documents that are heavily governed by aggregate topics. Additionally, in longitudinal user modeling individuals' mood changes occur uniquely and at different speeds, rather than presenting a mass change of sentiment in topic-specific documents. Lastly, since work on dynamic topic-emotion models precedes the PLM era, there is need to further explore the effect of contextual word representations in capturing the dynamics of words governed by the post topics. 
Longitudinal Modeling for Mental Health. Users' linguistic footprint on social media is a rich resource for the detection of mental health conditions (Sinha et al., 2019; Jiang et al., 2020; Shing et al., 2020) and related linguistic shifts (De Choudhury et al., 2016; Guntuku et al., 2020; Tsakalidis et al., 2022b). Shared tasks such as CLPsych (Zirikly et al., 2019; Tsakalidis et al., 2022a) and CLEF eRISK (Losada et al., 2020) have recently highlighted the importance of temporal, sequential and longitudinal user modeling for downstream mental health applications. Our approach furthers work in sequential and longitudinal modeling from individuals' language data on social media by providing a novel architecture that combines summaries of user history through path signature transforms with RNNs. While we show the effectiveness of our architecture on the task of identifying MoCs in individuals' mood, our model can be applied in real-time and extended to a variety of temporally sensitive tasks and multi-modal sources of data.

Path Signatures. A path is defined as a continuous mapping from an interval to a real multidimensional space. The path's signature can be seen as a collection of the statistics of the path, summarising uniquely important information about the path. Additionally, the signature provides a linear approximation of every continuous function of the path (Bonnier et al., 2019). In rough path theory (Chen, 1958; Lyons, 1998), path signatures give a path-wise definition to the solution of differential equations driven by irregular signals. Path signatures recently gained attention in machine learning due to their ability to represent a trajectory in the un-parameterised path space and therefore non-parametrically encode sequential data. They have been used to embed sequential data into a continuous path and, from there, to form compressed features of different granularity for different downstream tasks. Path signatures have shown strong performance as feature extractors in various tasks such as online Chinese character recognition (Yang et al., 2016; Xie et al., 2017), distinguishing psychiatric disorders (Arribas et al., 2018), video action recognition (Yang et al., 2017), mood prediction with missing longitudinal data (Wu et al., 2020), healthcare (Morrill et al., 2020) and financial time series (Levin et al., 2013). Recent work has integrated signatures directly in neural models (Bonnier et al., 2019), allowing their operation as a layer of sequential pooling in neural networks. Path signatures are still under-explored in NLP, with limited applications in speech emotion recognition (Wang et al., 2019) and psychiatric disorder detection from interviews (Wang et al., 2021). Biyong et al. (2020) integrated path signatures with attention between the BERT embedding and prediction step for information extraction. Although this demonstrates the ability of signatures to enhance the sequential ordering capabilities of the Transformer (Vaswani et al., 2017), the work in question lacked temporal and sequential (beyond word ordering) elements. Our work presents an architecture combining path signatures with RNNs that addresses both temporal and sequential aspects, using path signatures as an integral part of sequential networks.

## 3 Methodology

## 3.1 Problem Definition

We define a user timeline $T_u^{[s,e]}$ as a series of consecutive posts $\{p_1, \ldots, p_m\}$ shared by user $u$ at times $T = \{t_1, t_2, \ldots, t_m\}$ between two dates $s$ and $e$, where $m$ can be any length.
For each post $p_i$ we assume we need to classify it according to some multi-class sequential classification task, where it is important to consider historical context spanning different ranges. For each post $p_i$ we assume $n$ history windows, each of length $w$ posts, shifted by $k$ posts; in practice $n$, $k$ and $w$ are fixed, and the number of posts in a timeline is $m = kn + (w - k)$, where $m = 29$ in our model ($k = 3$, $n = 9$, $w = 5$). We define the first history window of $p_i$, of fixed length $w$, as $h_{i_1} = \{p_{i-(n-1)k-(w-1)}, p_{i-(n-1)k-(w-2)}, \ldots, p_{i-(n-1)k}\}$, and the $q$-th history window as $h_{i_q} = \{p_{i-(n-q)k-(w-1)}, p_{i-(n-q)k-(w-2)}, \ldots, p_{i-(n-q)k}\}$. The historical context for post $p_i$ is therefore $d_i = \{h_{i_1}, \ldots, h_{i_{n-1}}, h_{i_n}, p_i\}$.

Method Overview. Fig. 1 shows the historical context for a post-level classification task. Each historical sequential window is used as the input to the path signature compression (see Signature Window Network Unit, SWNU, in §3.3) in order to capture local sequential patterns (Fig. 2). The output of each SWNU is fed as the input to a BiLSTM (see §3.5) in order to produce the final single compressed history representation, as shown in Fig. 3, learning the temporal long-term linguistic evolution of the user. By employing an architecture that incorporates multiple SWNUs in a BiLSTM, we enhance short-term dependencies in user linguistic content compared to a vanilla BiLSTM through the efficient representation of local sequential trajectories. At the same time, we harness the BiLSTM to model long-term sequential dependencies between the local windows. We finally combine the compressed historical information from the BiLSTM with the PLM (BERT) representation of the post $p_i$ to be classified and its normalised timestamp.

## 3.2 Path Signature Preliminaries

A sequence of user posts can be viewed as a sequence of linguistic signals. The stream-like nature of the task allows us to consider the sequence of $c$-dimensional posts in a timeline (encoded through PLM embeddings) as a continuous path $P$ over an interval $[t_1, t_m]$, where $t_1$ is the timestamp of the first post and $t_m$ the last timestamp in the timeline. The signature $S(P)$ of this path $P$ over $[t_1, t_m]$ is the collection of $r$-fold iterated integrals of $P$ along the (integer) indices $i_1, i_2, \cdots, i_r \in \{1, 2, \cdots, c\}$, with $r$ denoting the number of involved dimensions:

$$S(P)^{i_1,i_2,\dots,i_r}_{t_1,t_m}=\int_{g_r}\dots\int_{g_1}dP^{i_1}_{g_1}\otimes\dots\otimes dP^{i_r}_{g_r}, \tag{1}$$

for $g_i \in [t_1, t_m]$ and $t_1 < g_1 < g_2 < \dots < t_m$. The signature is the collection of all such iterated integrals:

$$S(P)_{t_1,t_m}=\big(1,\, S(P)^{1}_{t_1,t_m},\ldots,S(P)^{c}_{t_1,t_m},\, S(P)^{1,1}_{t_1,t_m}, S(P)^{1,2}_{t_1,t_m},\ldots,S(P)^{c,c}_{t_1,t_m},\, \ldots,\, S(P)^{i_1,i_2,\cdots,i_r}_{t_1,t_m},\ldots\big) \tag{2}$$
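As a concrete, self-contained illustration of the iterated integrals in Eq. (1)–(2), the sketch below computes a truncated signature of a discrete (piecewise-linear) path of post embeddings with numpy, stopping at depth 2 for brevity rather than the degree-3 truncation used later; dedicated signature libraries would normally be used for higher depths.

```python
# Minimal numpy sketch of a depth-2 truncated path signature for a discrete
# path (one row per post embedding), concatenating piecewise-linear segments
# with Chen's identity. The paper truncates at degree 3; the pattern is the
# same with one more tensor level.
import numpy as np

def truncated_signature_depth2(path: np.ndarray):
    """path: array of shape (num_points, c). Returns the level-1 and level-2 terms."""
    c = path.shape[1]
    level1 = np.zeros(c)          # S(P)^i terms        (c of them)
    level2 = np.zeros((c, c))     # S(P)^{i,j} terms    (c*c of them)
    for step in np.diff(path, axis=0):              # increment of each segment
        seg2 = np.outer(step, step) / 2.0           # level-2 signature of one linear segment
        level2 += np.outer(level1, step) + seg2     # Chen's identity at depth 2
        level1 += step                              # level-1 is the total increment
    return level1, level2

# Toy "path": 5 posts embedded in c = 3 dimensions.
rng = np.random.default_rng(0)
posts = rng.normal(size=(5, 3))
s1, s2 = truncated_signature_depth2(posts)
print(s1.shape, s2.shape)   # (3,) and (3, 3): c + c^2 = 12 features at depth 2
```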
We are working with the truncated signatures, more specifically of degree 3: $$\begin{array}{c}{{T S(P)_{t_{1},t_{m}}^{3}=(1,S(P)_{t_{1},t_{m}}^{1},...,S(P)_{t_{1},t_{m}}^{c},}}\\ {{S(P)_{t_{1},t_{m}}^{1,1},S(P)_{t_{1},t_{m}}^{1,2}...,S(P)_{t_{1},t_{m}}^{c,c},}}\\ {{S(P)_{t_{1},t_{m}}^{1,1,1},...,S(P)_{t_{1},t_{m}}^{c,c,c})}}\end{array}$$ A higher degree of truncated signature adds more granularity in the path but it also leads to exponentially increasing number of output dimensions used as features, as the latter is calculated by the equation (c N+1 − c)(c − 1)−1, where c are the feature dimensions and N is the degree of truncation. While Eq. 3 provides the signature compressed feature set, the constant 1 is excluded from the features for simplicity as a common practise. Since signatures provide a way to uniformly linearly approximate a continuous function (Fermanian, 2021), their dimensions explode in size in proportion to the dimensions of the input (Kidger and Lyons, 2020). In our work we use log-signatures since their dimensions increase more modestly and therefore allow us to incorporate higher interactions between inputs in a more compressed representation. This resulted in better performance of our model. For simplicity we will be referring to the application of log-signatures as signatures.4 ## 3.3 **Signature Window Network Unit (Swnu)** ![3_Image_0.Png](3_Image_0.Png) Path signatures have been used as feature extractors in the past (see §2). This comes with the risk that valuable compressed signature information in higher order terms may be lost when truncated at degree N. Bonnier et al. (2019) proposed integrating signature transforms in neural networks which allows for backpropagation in the whole network and therefore for a learnable augmentation of the data Φ(x) that can preserve the important higher order information in lower degrees of the truncated signature in S N (Φ(x)) rather than applying the signature directly on the data. Since signatures transform a stream of data into a mathematical non-streamlike representation, the signature transform can in theory only be applied once. Bonnier et al. (2019) further suggest the use of a signature multiple times by lifting it from a stream to a stream of streams. For temporally ordered post data P={p1, p2*, . . . , p*m} with Pj={p1, p2, · · · , pj} one can obtain a stream of truncated signatures through expanding windows: $$(S^{N}({\mathcal{P}}_{2}),S^{N}({\mathcal{P}}_{3}),\cdots,S^{N}({\mathcal{P}}_{m})).$$ We present the building block of our architecture called the Signature Window Network Unit (SWNU), which produces a compressed history representation for a window in time. Given a series of posts, we slide a convolution 1D layer with a Tanh activation function to allow learnable dimensionality reduction. The selection of Convolution 1D is based on its ability to reduce the embedding dimensions while preserving the sequential nature of the data and avoiding interactions between time points (posts) given a small kernel size. While more obvious choices such as an LSTM or a Transformer would preserve sequentiality, they would introduce post interactions which are undesirable, while also being more expensive. The choice of Convolution 1D, which involves only 552 parameters, allows for the efficient, simple and cheap formation of our building block. The signature is applied as described in Eq. 4, therefore producing compressed representations of local expanding windows. These are fed into an LSTM to model (see Fig. 
These expanding-window signature representations are fed into an LSTM (see Fig. 2) to model the entire sequence and progression of the linguistic content within the specified timeframe. The output of the LSTM provides a learnable stream of this more granular progression, which a final signature layer compresses to obtain a low-dimensional single representation, $h'_{i_j}$, for the whole specified posting window. This unit is depicted in Fig. 2.

## 3.4 Post Encoding

Pre-trained contextualised word representations such as those from BERT have proved important in different NLP tasks (Peters et al., 2018). While the [CLS] token from BERT has been widely used to represent a given sequence, sentenceBERT (SBERT) embeddings (Reimers and Gurevych, 2019) are better suited for capturing the sentence semantics in a more compressed fashion, which is important when utilising path signatures. We encode each post $p_i$ in a timeline using sentenceBERT (384-dimensional representation). Since the dimension of the truncated signature explodes exponentially with the input path dimension, a common practice in the literature is to reduce the input dimensionality. We used UMAP (McInnes et al., 2018) due to its ability to preserve global structure and produce effective low-dimensional representations for machine learning (Sainburg et al., 2021). Lastly, we order the posts in a timeline in ascending order of their timestamps and create data points for each post and its history windows, as described in §3.1.

## 3.5 Sequential Path Signature Network

The Signature Window Network Unit provides a way to compactly model the user's historical linguistic content over a specified time window. However, the kind of longitudinal tasks over user posts we are considering (such as changes in the mood of a user, see §4.1) may progress non-linearly. Our architecture (Fig. 3) employs a BiLSTM of 9 units that utilises information from both directions of the posting history, up to the current post $p_i$. Each unit of the BiLSTM takes as input the compressed signature representation of the corresponding Signature Window Network Unit (see §3.3), formed over short sliding windows within the timeline up to that point. Thus, through the BiLSTM's hidden state we obtain a single compressed history representation. Our architecture preserves the local sequential information through signatures, while also capturing the dependencies between them in a sequential manner through the BiLSTM, in order to preserve information from the significant parts of a user's history.

## 3.6 Network Optimisation

For a post-level classification task (see §3.1, §4.1), we concatenate the SBERT representation of the current post with the history representation obtained from the BiLSTM and the normalised timestamp, as shown in Fig. 3. By including the timestamp, the model can capture signals directly associated with specific periods in time, e.g. the Covid-19 period. We obtain the final output from:

$$\mathbf{R_i} = \mathrm{FFN}(\mathbf{H_i} \oplus p_i \oplus t_{i,norm}) \in \mathbb{R}^{D_{BiLSTM}+384+1} \tag{5}$$

We form a single integrated task-informed representation of the user's overall linguistic content by finally passing the representation through a feed-forward network (FFN) with 2 hidden layers. We add a ReLU activation function and a Dropout layer between the layers, and employ an output linear layer for 3-class classification. Sequential tasks from user data like the one we tackle here (see §4.1) are often heavily imbalanced.
To target this problem, we use the alpha-weighted focal loss (Lin et al., 2017) on the log-softmax of the output, assigning more importance to minority classes, with γ controlling the down-weighting of well-classified samples and α being a class-level loss weight: $\mathcal{L} = \mathrm{Focal}(\hat{y}_i, y; \gamma, \alpha)$. The loss propagates through the whole network (see Fig. 3), so that the building block of Signature Window Network Units, as well as the BiLSTM and FFN, are trained together in a single network.

## 4 Experiments

## 4.1 Task Definition And Datasets

Task Definition. We apply our model to the longitudinal task of capturing 'Moments of Change' (MoC): the identification of changes in a user's mood given a series of sequential posts between two dates (timeline). Following Tsakalidis et al. (2022b), we approach this as a supervised 3-class, post-level sequential classification task distinguishing between: *Switches* (IS) (post(s) revealing a sudden mood shift from positive to negative, or vice versa); *Escalations* (IE) (gradual user mood progression from neutral or positive to more positive, or from neutral or negative to more negative); *None* (O) (no change in mood) - see Fig. 4 for an example of a user's timeline and the associated post-level labels. For each post to be classified we make use of the current post, its timestamp and historical posts. We report results on post-level evaluation metrics (Precision, Recall, F1).

Datasets. We make use of the two available datasets for the task in the English language, in the same way as intended by their authors: (a) TalkLife (a peer-to-peer network for mental health support) (Tsakalidis et al., 2022b) consists of 500 15-day long user timelines (18,702 posts), each spanning [10-124] posts; (b) Reddit, from the CLPsych 2022 Shared Task (Tsakalidis et al., 2022a), consists of 256 2-month long user timelines (6,205 posts). Both datasets were annotated at the post level by annotators who had access to each entire timeline. Due to the nature of the task, classes are highly imbalanced, with 4.7%/6.6% IS, 10.8%/15.8% IE and 84.5%/77.6% O for TalkLife/Reddit, respectively. We perform 5-fold cross-validation on TalkLife as in (Tsakalidis et al., 2022b) and keep the train/test split that was used in the CLPsych 2022 Shared Task for Reddit (Tsakalidis et al., 2022a).

## 4.2 Baseline Models

Reported performance on baselines is based on the same splits and random seeds for consistency. (a) For TalkLife, we compare against the following baselines introduced by Tsakalidis et al. (2022b): - BERT(f), a post-level (timeline-agnostic) BERT classifier (Devlin et al., 2018) trained using the alpha-weighted focal loss (Lin et al., 2017); - EM-DM, a BiLSTM operating on the timeline level, using as inputs post-level emotion features derived from DeepMoji (Felbo et al., 2017); - BiLSTM-bert, a timeline-level model consisting of two stacked BiLSTM networks (trained using the Cross Entropy loss) taking as its post-level inputs the [CLS] tokens from BERT(f). We further adjust this model to operate on the post level and its recent history (29 recent posts, for direct comparison with our work) instead of the whole timeline at once (**BiLSTM-bert(hist)**).
(b) For Reddit, we considered the following models from the CLPsych 2022 Shared Task: - IIITH (Boinepelli et al., 2022), an LSTM-based model operating on the current post and a window of its history, trained using a weighted Cross Entropy loss function; - LAMA (AlHamed et al., 2022), an LSTM utilising the sequence of the previous posts for a given target post. Under-sampling was performed on majority class posts to address class imbalance; - WResearch (Bayram and Benhiba, 2022), an XGBoost model (Chen and Guestrin, 2016) using emotion-based features concatenated with the emotional difference between the current and previous post, and look-back window abnormality vectors obtained by a seq2seq model (Provotar et al., 2019); - UoS (Azim et al., 2022), a multi-task attention based BiLSTM looking at the whole timeline, where each steps is a user's post. The input is a concatenation of emotion-based representations. To examine the effect of the signature transforms, for both datasets, we include a simplified version of our model **SBERT(avg hist)**, a feed-forward network of 2 hidden layers, using alpha-weighted focal loss which takes as input a 384-dimensional SBERT representation (Reimers and Gurevych, 2019) for the current post concatenated with the mean of SBERT representations of historical user posts and the normalised post timestamp. Additionally, we produce a new fairer baseline model, called **BiLSTM-sbert(hist)**, for comparison with both datasets by adjusting BiLSTM-bert (Tsakalidis et al., 2022b) to operate on the post-level and its recent history (29 recent posts) using SBERT pre-trained embeddings and focal loss. Lastly, we include two **Naïve** classifiers: *Majority* (always assigning majority class) and *Random* (classifying a post based on the label distributions). ## 5 Results And Discussion 5.1 Comparison Against Baselines Results on both datasets are presented in Table 1. Since the MoC task presents a high class imbalance with the minority classes (IS/IE) being particularly important, we choose macro-avg F1 as our core performance metric. Our model ranks second best on both datasets, while it achieves the highest macroaveraged recall on TalkLife with very competitive recall on the minority classes, which is particularly important for anomaly detection tasks such as that of capturing MoC in mental health. Our model shows state-of-the-art performance on TalkLife among baselines that only use historical information. It achieves the second best performance among all baselines and even surpasses some baselines (EM-DM) that have access to the entire user's timeline. The best performing BiLSTM-bert baseline on TalkLife has access to the entire user's timeline, while Seq-Sig-Net only has access up to the current post, enabling realtime predictions. For a fairer comparison against our model, we provide a new baseline BiLSTMbert(hist) which uses the same architecture and hyperparameters for tuning as the original BiLSTMbert but with access only to the current post and its historical data in the timeline. Our model outperforms BiLSTM-bert(hist) and importantly does so by a large margin in F1 of the minority classes, even though it uses dimensionally reduced linguistic representations that are associated with some information loss. On Reddit, our model outperforms or performs on-par with all baselines, including those that have access to the entire user's timeline. 
While BiLSTM-sbert(hist) scores similarly to Seq-Sig-Net on Reddit with respect to macro-avg F1, we show that our model outperforms BiLSTM-sbert(hist) on TalkLife by a clear margin (macro-avg F1: .563 vs .541), demonstrating its ability to capture historical information with respect to sudden changes. Since TalkLife is a platform specifically focused on mental health discussions, it is much more challenging to spot mood changes compared to Reddit, where even the mention of a mental health related topic signals a mood change. This is also quantitatively shown in the literature, where on Reddit a post-level logistic regression on tf-idf representations achieves .492 macro-avg F1 (Tsakalidis et al., 2022a), while on TalkLife a post-level random forest on tf-idf representations achieves a much lower performance of .360 macro-avg F1 (Tsakalidis et al., 2022b).

Beyond its competitive performance and its ability to model local trajectories of user history, our architecture provides an end-to-end solution, important for real-time application. Strong baselines such as BiLSTM-bert, BiLSTM-bert(hist) and WResearch train separate models for feature extraction, on which they then train a separate classification model. Apart from being an end-to-end solution, our model is task agnostic. It is based on encoding multi-stage language embeddings by addressing the sequential and temporal aspects of longitudinal language tasks, and it does so without task-specific features such as emotion representations (contrary to EM-DM, WResearch and UoS).

Table 1: Results on TalkLife (top) and Reddit (bottom): Precision (P), Recall (R) and F1 per class (IS, IE, O) and macro-averaged. The Emotion and Future columns mark models that use emotion-based features or that have access to future posts in the timeline.

| Model Type | Model | IS P | IS R | IS F1 | IE P | IE R | IE F1 | O P | O R | O F1 | macro P | macro R | macro F1 | Emotion | Future |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Naïve | Majority | - | - | - | - | - | - | .845 | 1 | .916 | .282 | .333 | .305 | | |
| Naïve | Random | .047 | .047 | .047 | .108 | .108 | .108 | .845 | .845 | .845 | .333 | .333 | .333 | | |
| Post-level | BERT(f) (Tsakalidis et al., 2022b) | .260 | **.321** | .287 | .401 | .478 | .436 | .898 | .864 | .881 | .520 | .554 | .534 | | |
| Timeline-level (Tsakalidis et al., 2022b) | EM-DM | **.553** | .118 | .193 | .479 | .351 | .405 | .880 | **.948** | .913 | **.631** | .472 | .504 | ✓ | ✓ |
| Timeline-level (Tsakalidis et al., 2022b) | BiLSTM-bert | .397 | .264 | .316 | **.568** | .461 | **.508** | .898 | .936 | **.917** | .621 | .553 | **.580** | | ✓ |
| Timeline-level (-signature) | SBERT(avg hist) | .283 | .244 | .262 | .424 | .486 | .452 | .896 | .885 | .890 | .534 | .539 | .535 | | |
| Timeline-level (-signature) | BiLSTM-sbert(hist) | .258 | .272 | .264 | .442 | .506 | .468 | .901 | .879 | .890 | .534 | .553 | .541 | | |
| Timeline-level (-signature) | BiLSTM-bert(hist) | .405 | .241 | .302 | .536 | .415 | .468 | .892 | .938 | .914 | .611 | .531 | .561 | | |
| Timeline-level (+signature) | Seq-Sig-Net (our work) | .331 | .290 | .309 | .435 | **.555** | .487 | **.907** | .881 | .894 | .558 | **.576** | .563 | | |

| Model Type | Model | IS P | IS R | IS F1 | IE P | IE R | IE F1 | O P | O R | O F1 | macro P | macro R | macro F1 | Emotion | Future |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Naïve | Majority | - | .000 | .000 | - | .000 | .000 | .724 | 1.000 | .840 | - | .333 | .280 | | |
| Naïve | Random | .066 | .066 | .066 | .158 | .158 | .158 | .776 | .776 | .776 | .333 | .333 | .333 | | |
| Timeline-level (CLPsych) | IIITH (Boinepelli et al., 2022) | .206 | **.524** | .296 | .402 | .630 | .491 | **.954** | .647 | .771 | .520 | .600 | .519 | | |
| Timeline-level (CLPsych) | LAMA (AlHamed et al., 2022) | .166 | .354 | .226 | .609 | .389 | .475 | .882 | .861 | .871 | .552 | .535 | .524 | | |
| Timeline-level (CLPsych) | WResearch (Bayram and Benhiba, 2022) | .362 | .256 | .300 | .646 | .553 | .596 | .868 | .929 | .897 | .625 | .579 | .598 | ✓ | |
| Timeline-level (CLPsych) | UoS (Azim et al., 2022) | **.490** | .305 | .376 | **.697** | .630 | **.662** | .881 | .940 | .909 | **.689** | .625 | .649 | ✓ | ✓ |
| Timeline-level (-signature) | SBERT(avg hist) | .340 | .329 | .330 | .605 | .563 | .582 | .893 | .912 | .902 | .613 | .601 | .605 | | |
| Timeline-level (-signature) | BiLSTM-sbert(hist) | .463 | .407 | **.430** | .629 | **.637** | .630 | .895 | .901 | .898 | .663 | **.648** | **.653** | | |
| Timeline-level (+signature) | Seq-Sig-Net (our work) | .454 | .405 | .425 | .643 | .607 | .624 | .896 | .919 | .908 | .664 | .644 | .652 | | |

## 5.2 Ablation Study

We examine the effect of incorporating historical posts (Table 2). When we simply average SBERT
## 5.2 Ablation Study

We examine the effect of incorporating historical posts (Table 2). When we simply average SBERT historical representations and concatenate this to the current post representation with normalised time (the SBERT(avg hist) model), we achieve better performance in IS, IE and macro-average F1 for both TalkLife and Reddit. This demonstrates the added value of having historical information for our task. The version of the model that uses a single SWNU to encode the recent history of a post presents improved performance in all metrics and classes on TalkLife and most metrics on Reddit (4.3%/11.6% relative improvement on macro-avg F1 over SBERT post on TalkLife/Reddit, respectively), showcasing the ability of SWNU to efficiently model time windows of user posts. Finally, Seq-Sig-Net yields the best macro-avg F1 score (5.6%/18.5% relative improvement over SBERT post on TalkLife/Reddit) and the best F1 scores for IS & IE, showing the ability of our model to produce historical user representations that memorise influential local parts of a user's timeline.

Table 2: Ablation results (IS, IE, O and macro-avg) on TalkLife and Reddit.

| Model name | Explanation of ablation | TalkLife IS | TalkLife IE | TalkLife O | TalkLife avg | Reddit IS | Reddit IE | Reddit O | Reddit avg |
|---|---|---|---|---|---|---|---|---|---|
| SBERT post | (*) | .281 | .431 | .887 | .533 | .200 | .541 | .909 | .550 |
| SBERT(avg hist) | (*) + mean hist. + t | .262 | .452 | .890 | .535 | .330 | .582 | .902 | .605 |
| SWNU Network | (*) + 1 SWNU + t | .296 | .477 | .894 | .556 | .308 | .623 | .911 | .614 |
| Seq-Sig-Net | (*) + BiLSTM on SWNU + t | .309 | .487 | .894 | .563 | .425 | .624 | .908 | .652 |

## 5.3 Computational Resources

We assess the resource requirements of Seq-Sig-Net compared to LSTM-based models by gathering both the computational cost and time requirements of Seq-Sig-Net and of the most competitive baseline based on TalkLife experiments, BiLSTM-bert(hist), which we present in Table 3. Seq-Sig-Net requires 12.9 MB of memory (1.7M parameters) while BiLSTM-bert(hist) requires 18.9 MB (2.5M parameters), making the latter 46.5% more expensive to train, without accounting for the additional significant memory requirements for fine-tuning the BERT representations in the first place. We also performed runtime experiments on one seed and all five folds for both models and obtained the average over five experiments: BiLSTM-bert(hist) requires 8.3% more time (again without considering the initial BERT fine-tuning step). Since the remaining competitive baselines are also LSTM-based with multiple units (e.g., UoS consists of a larger BiLSTM with 100 units compared to the 29 used by BiLSTM-bert(hist), plus an additional multi-head attention layer), we expect them to be even more expensive. Therefore, apart from its competitive performance, Seq-Sig-Net is much greener, operating on fewer parameters and compressed information.

Table 3: Resource requirements of Seq-Sig-Net and the most competitive LSTM-based baseline.

| Model name | Memory (MB) | Parameters (million) | Avg Training time (minutes) |
|---|---|---|---|
| BiLSTM-bert(hist) | 18.9 | 2.5 | 36.7 |
| Seq-Sig-Net | 12.9 | 1.7 | 33.9 |
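To make the comparison in Table 3 concrete, the sketch below shows one way the parameter count and an approximate weight memory footprint can be obtained for a PyTorch module; the toy module and the 32-bit float assumption are ours, not the authors' measurement script.

```python
# Minimal sketch (ours, not the authors' code): counting trainable parameters
# and estimating their memory footprint for a PyTorch model, in the spirit of
# the Table 3 comparison above. The toy module is only a placeholder.
import torch.nn as nn

model = nn.LSTM(input_size=384, hidden_size=300, bidirectional=True)  # placeholder

n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
mem_mb = n_params * 4 / 1024 ** 2  # assuming 32-bit floats (4 bytes each)

print(f"{n_params / 1e6:.1f}M parameters, ~{mem_mb:.1f} MB of weights")
```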
## 5.4 Quantitative Analysis

Peaks of Escalations (IEP) and Beginning of Switches (ISB), marked during the annotation of Escalations (IE) and Switches (IS) in mood, constitute critical points (Tsakalidis et al., 2022b). In Fig. 5, we compare BiLSTM-bert(hist) against our model (Seq-Sig-Net) in capturing these points with respect to the distance (in number of posts) since the last IE or IS in a user's timeline on TalkLife data. For clarity we bin performance in 3-post steps and label cases without any prior IS or IE in the first bin. Our model clearly outperforms BiLSTM-bert(hist) in identifying peaks even when the last signal of a moment of change appears more than 4 posts in the past (which is the history length visible to the SWNU, see §3.3). This demonstrates the ability of Seq-Sig-Net to efficiently compress local information sequentially and model long-range effects. Our model's performance starts deteriorating on the overall Macro-F1 (all) metric when the last IS/IE is more than 12 posts in the past. We assume there is a trade-off between capturing detail in posts within short range vs capturing coarser but longer-range information. This could potentially be remedied by changing the range of posts accessible to the SWNU.

## 5.5 Qualitative Analysis

We evaluate the effectiveness of the learnable dynamic user representations from different models for the task by clustering the resulting embeddings. More specifically, we extracted the representations on TalkLife data before the output layer in Seq-Sig-Net and BiLSTM-bert(hist), as well as the fine-tuned BERT representations, and we used UMAP to reduce them to 2 dimensions. In Fig. 6 we plot a randomly selected subset of representations from each model per class, to study how well the representations can distinguish the different classes. The reduced representations from both BiLSTM-bert(hist) and Seq-Sig-Net achieve better separation than the fine-tuned BERT representations: there is less mixing of clusters in the middle area of BiLSTM-bert(hist) and Seq-Sig-Net compared to the middle area of the fine-tuned BERT representations. This highlights the importance of sequential modeling. Table 4 shows three popular clustering metrics for each representation type in Fig. 6 in order to better quantify class separation. When extracting different clustering scores based on the reduced representations, Seq-Sig-Net performs best across all three metrics, showcasing the strong clustering ability of our model representations and the advantage offered by signatures in pooling features indicative of local trajectories.
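The qualitative analysis above relies on reducing the learned representations with UMAP and scoring class separation with clustering metrics. A minimal sketch is given below; the three metrics chosen here (silhouette, Calinski-Harabasz, Davies-Bouldin) and the random placeholder features are our own assumptions, since the specific metrics are not restated in this section.

```python
# Illustrative sketch (not the authors' exact pipeline): reduce representations
# with UMAP and quantify class separation with three common clustering metrics.
# The random arrays below stand in for the embeddings extracted before the
# output layer of each model.
import numpy as np
import umap
from sklearn.metrics import (silhouette_score,
                             calinski_harabasz_score,
                             davies_bouldin_score)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))      # placeholder user representations
y = rng.integers(0, 3, size=300)    # placeholder class labels (IS / IE / O)

X_2d = umap.UMAP(n_components=2, random_state=0).fit_transform(X)

print("silhouette:        ", silhouette_score(X_2d, y))
print("calinski-harabasz: ", calinski_harabasz_score(X_2d, y))
print("davies-bouldin:    ", davies_bouldin_score(X_2d, y))
```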
## 6 Conclusion and Future Work

We present a novel sequential model architecture combining RNNs with path signatures, applicable to longitudinal tasks which consider timelines of social media posts. Our model achieves effective compression of a user's history through both signature transforms and sequential modeling via a BiLSTM. It does so by encoding the local progression of textual information in history through signatures in an integrated, robust and computationally efficient way. The use of signatures within our network allows for the incorporation of non-parametric higher-order information in a learnable way and combines this benefit with the sequential modeling of local and long-term information through LSTMs. We evaluate our model on personalised longitudinal language modelling, on the task of identifying changes in a user's mood. Our model outperforms or performs on par with all baselines operating on historical data for this task, on both existing datasets, from the TalkLife and Reddit platforms. In the future we plan to investigate direct injection of signature transforms into Transformer networks for time-sensitive modelling, as well as to explore other time-sensitive NLP tasks, such as rumour verification using social media threads (Zubiaga et al., 2016).

## Limitations

Our work addresses the sequential task of modeling temporal user data through the use of path signatures as a tool for providing low-dimensional trajectories. Although in our work we inject a post-level timestamp into the final representations, the path signature element is agnostic of time and only makes use of the sequence order. It therefore potentially hinders the model's ability to efficiently model long timelines (unlike the ones used here) with significant and highly irregular lags between posts. We plan to address this in future work. Additionally, we understand that by employing truncated path signatures in the model, we lose information that could potentially provide additional signal, due to the compression that happens both in dimensionality reduction and in the signature itself. We have evaluated our model on a longitudinal mental health task. While the proposed architecture is in principle task agnostic, we have not yet evaluated it on other longitudinal tasks on social media.

## Ethics Statement

Prior to engaging in this research work, ethics approval was received from the corresponding Institutional Review Board (IRB) of the University of Warwick. Ethical considerations around the nature of user-generated content (Mao et al., 2011; Keküllüoglu et al., 2020) from online platforms were addressed through thorough data analysis, data sharing policies to protect sensitive information and anonymisation of the data. Access to TalkLife's sensitive user data was obtained through the submission of a project proposal and the approval of the corresponding license by TalkLife. Potential risks from the application of NLP models in being able to identify moments of change in individuals' timelines are akin to those in earlier work on personal event identification from social media and the detection of suicidal ideation. Potential mitigation strategies include restricting access to the code base and annotation labels used for evaluation.

## Acknowledgements

This work was supported by a UKRI/EPSRC Turing AI Fellowship to Maria Liakata (grant EP/V030302/1), the Alan Turing Institute (grant EP/N510129/1), a DeepMind PhD Scholarship, an EPSRC grant (EP/S026347/1), the Data Centric Engineering Programme (under the Lloyd's Register Foundation grant G0095), the Defence and Security Programme (funded by the UK Government), the Office for National Statistics & The Alan Turing Institute (strategic partnership) and by the Hong Kong Innovation and Technology Commission (InnoHK Project CIMDA). The authors would like to thank Yue Wu, Anthony Hills and the anonymous reviewers for their valuable feedback.

## References

Falwah AlHamed, Julia Ive, and Lucia Specia. 2022. Predicting moments of mood changes overtime from imbalanced social media data. In Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology, pages 239–244. Silvio Amir, Glen Coppersmith, Paula Carvalho, Mário J Silva, and Bryon C Wallace. 2017. Quantifying mental health from social media with neural user embeddings. In Machine Learning for Healthcare Conference, pages 306–321. PMLR. Silvio Amir, Byron C Wallace, Hao Lyu, and Paula Carvalho Mário J Silva. 2016.
Modelling context with user embeddings for sarcasm detection in social media. *arXiv preprint arXiv:1607.00976*. Imanol Perez Arribas, Guy M Goodwin, John R Geddes, Terry Lyons, and Kate EA Saunders. 2018. A signature-based machine learning model for distinguishing bipolar disorder and borderline personality disorder. *Translational psychiatry*, 8(1):1–7. Tayyaba Azim, Loitongbam Singh, and Stuart Middleton. 2022. Detecting moments of change and suicidal risks in longitudinal user texts using multi-task learning. In *Proceedings of the Eighth Workshop on* Computational Linguistics and Clinical Psychology, pages 213–218. Robert Bamler and Stephan Mandt. 2017. Dynamic word embeddings. In International conference on Machine learning, pages 380–389. PMLR. Ulya Bayram and Lamia Benhiba. 2022. Emotionallyinformed models for detecting moments of change and suicide risk levels in longitudinal social media data. In Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology, pages 219–225. John Pougué Biyong, Bo Wang, Terry Lyons, and Alejo J Nevado-Holgado. 2020. Information extraction from swedish medical prescriptions with sig-transformer encoder. *arXiv preprint* arXiv:2010.04897. Sravani Boinepelli, Shivansh Subramanian, Abhijeeth Singam, Tathagata Raha, and Vasudeva Varma. 2022. Towards capturing changes in mood and identifying suicidality risk. In Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology, pages 245–250. Patric Bonnier, Patrick Kidger, Imanol Perez Arribas, Cristopher Salvi, and Terry Lyons. 2019. Deep signature transforms. *arXiv preprint arXiv:1905.08494*. Lei Cao, Huijun Zhang, Ling Feng, Zihan Wei, Xin Wang, Ningyun Li, and Xiaohao He. 2019. Latent suicide risk detection on microblog via suicideoriented word embeddings and layered attention. arXiv preprint arXiv:1910.12038. Kuo-Tsai Chen. 1958. Integration of paths–a faithful representation of paths by noncommutative formal power series. *Transactions of the American Mathematical Society*, 89(2):395–407. Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining, pages 785– 794. Ilya Chevyrev and Andrey Kormilitzin. 2016. A primer on the signature method in machine learning. *arXiv* preprint arXiv:1603.03788. Munmun De Choudhury, Emre Kiciman, Mark Dredze, Glen Coppersmith, and Mrinal Kumar. 2016. Discovering shifts to suicidal ideation from mental health content in social media. In Proceedings of the 2016 CHI conference on human factors in computing systems, pages 2098–2110. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. arXiv preprint arXiv:1708.00524. Adeline Fermanian. 2021. Embedding and learning with signatures. Computational Statistics & Data Analysis, 157:107148. Lea Frermann and Mirella Lapata. 2016. A bayesian model of diachronic meaning change. Transactions of the Association for Computational Linguistics, 4:31–45. Sharath Chandra Guntuku, H Andrew Schwartz, Adarsh Kashyap, Jessica S Gaulton, Daniel C Stokes, David A Asch, Lyle H Ungar, and Raina M Merchant. 2020. 
Variability in language used on social media prior to hospital visits. *Scientific reports*, 10(1):1–9. William L Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic word embeddings reveal statistical laws of semantic change. arXiv preprint arXiv:1605.09096. Casper Hansen, Christian Hansen, Lucas Maystre, Rishabh Mehrotra, Brian Brost, Federico Tomasi, and Mounia Lalmas. 2020. Contextual and sequential user embeddings for large-scale music recommendation. In *Fourteenth ACM Conference on Recommender Systems*, pages 53–62. Yulan He, Chenghua Lin, Wei Gao, and Kam-Fai Wong. 2014. Dynamic joint sentiment-topic model. ACM Transactions on Intelligent Systems and Technology (TIST), 5(1):1–21. Zheng Ping Jiang, Sarah Ita Levitan, Jonathan Zomick, and Julia Hirschberg. 2020. Detection of mental health from reddit via deep contextualized representations. In *Proceedings of the 11th International* Workshop on Health Text Mining and Information Analysis, pages 147–156. Dilara Keküllüoglu, Walid Magdy, and Kami Vaniea. 2020. Analysing privacy leakage of life events on twitter. In *12th ACM conference on web science*, pages 287–294. Patrick Kidger and Terry Lyons. 2020. Signatory: differentiable computations of the signature and logsignature transforms, on both cpu and gpu. *arXiv preprint* arXiv:2001.00706. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Daniel Levin, Terry Lyons, and Hao Ni. 2013. Learning from the past, predicting the statistics for the future, learning an evolving system. arXiv preprint arXiv:1309.0260. Shangsong Liang, Xiangliang Zhang, Zhaochun Ren, and Evangelos Kanoulas. 2018. Dynamic embeddings for user profiling in twitter. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1764– 1773. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. 2017. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980–2988. David E Losada, Fabio Crestani, and Javier Parapar. 2020. Overview of erisk at clef 2020: Early risk prediction on the internet (extended overview). In CLEF (Working Notes). Terry J Lyons. 1998. Differential equations driven by rough signals. *Revista Matemática Iberoamericana*, 14(2):215–310. Huina Mao, Xin Shuai, and Apu Kapadia. 2011. Loose tweets: an analysis of privacy leaks on twitter. In Proceedings of the 10th annual ACM workshop on Privacy in the electronic society, pages 1–12. Leland McInnes, John Healy, and James Melville. 2018. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426. James H Morrill, Andrey Kormilitzin, Alejo J NevadoHolgado, Sumanth Swaminathan, Samuel D Howison, and Terry J Lyons. 2020. Utilization of the signature method to identify the early onset of sepsis from multivariate physiological time series in critical care monitoring. *Critical Care Medicine*, 48(10):e976– e981. Jiaqi Mu, Suma Bhat, and Pramod Viswanath. 2017. All-but-the-top: Simple and effective postprocessing for word representations. *arXiv preprint* arXiv:1702.01417. Shimei Pan and Tao Ding. 2019. Social media-based user embedding: A literature review. arXiv preprint arXiv:1907.00725. Martin Pavlovski, Jelena Gligorijevic, Ivan Stojkovic, Shubham Agrawal, Shabhareesh Komirishetty, Djordje Gligorijevic, Narayan Bhamidipati, and Zoran Obradovic. 2020. Time-aware user embeddings as a service. 
In *Proceedings of the 26th ACM* SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3194–3202. Matthew E Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018. Dissecting contextual word embeddings: Architecture and representation. arXiv preprint arXiv:1808.08949. Oleksandr I Provotar, Yaroslav M Linder, and Maksym M Veres. 2019. Unsupervised anomaly detection in time series using lstm-based autoencoders. In *2019 IEEE International Conference on Advanced* Trends in Information Theory (ATIT), pages 513–517. IEEE. Vikas Raunak, Vivek Gupta, and Florian Metze. 2019. Effective dimensionality reduction for word embeddings. In *Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)*, pages 235–243. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084. Maja Rudolph and David Blei. 2018. Dynamic embeddings for language evolution. In *Proceedings of the* 2018 World Wide Web Conference, pages 1003–1011. Tim Sainburg, Leland McInnes, and Timothy Q Gentner. 2021. Parametric umap embeddings for representation and semisupervised learning. *Neural Computation*, 33(11):2881–2907. Ramit Sawhney, Harshit Joshi, Lucie Flek, and Rajiv Shah. 2021. Phase: Learning emotional phase-aware representations for suicide ideation detection on social media. In *Proceedings of the 16th conference of* the european chapter of the association for computational linguistics: main volume, pages 2415–2428. Ramit Sawhney, Harshit Joshi, Saumya Gandhi, and Rajiv Shah. 2020. A time-aware transformer based model for suicide ideation detection on social media. In Proceedings of the 2020 conference on empirical methods in natural language processing (EMNLP), pages 7685–7697. Han-Chin Shing, Philip Resnik, and Douglas W Oard. 2020. A prioritization model for suicidality risk assessment. In *Proceedings of the 58th annual meeting of the association for computational linguistics*, pages 8124–8137. Pradyumna Prakhar Sinha, Rohan Mishra, Ramit Sawhney, Debanjan Mahata, Rajiv Ratn Shah, and Huan Liu. 2019. \# suicidal-a multipronged approach to identify and explore suicidal ideation in twitter. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 941–950. Yan Song and Chia-Jung Lee. 2017. Learning user embeddings from emails. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 733–738. Adam Tsakalidis, Jenny Chim, Iman Munire Bilal, Ayah Zirikly, Dana Atzil-Slonim, Federico Nanni, Philip Resnik, Manas Gaur, Kaushik Roy, Becky Inkster, et al. 2022a. Overview of the clpsych 2022 shared task: Capturing moments of change in longitudinal user posts. Adam Tsakalidis and Maria Liakata. 2020. Sequential modelling of the evolution of word representations for semantic change detection. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8485–8497. Adam Tsakalidis, Federico Nanni, Anthony Hills, Jenny Chim, Jiayu Song, and Maria Liakata. 2022b. Identifying moments of change from longitudinal user text. arXiv preprint arXiv:2205.05593. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *arXiv preprint arXiv:1706.03762*. Bo Wang, Maria Liakata, Hao Ni, Terry Lyons, Alejo J Nevado-Holgado, and Kate Saunders. 2019. 
A path signature approach for speech emotion recognition. In *Interspeech 2019*, pages 1661–1665. ISCA. Bo Wang, Yue Wu, Nemanja Vaci, Maria Liakata, Terry Lyons, and Kate EA Saunders. 2021. Modelling paralinguistic properties in conversational speech to detect bipolar disorder and borderline personality disorder. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7243–7247. IEEE. Yue Wu, Terry J Lyons, and Kate EA Saunders. 2020. Deriving information from missing data: implications for mood prediction. arXiv preprint arXiv:2006.15030. Zecheng Xie, Zenghui Sun, Lianwen Jin, Hao Ni, and Terry Lyons. 2017. Learning spatial-semantic context with fully convolutional recurrent network for online handwritten chinese text recognition. IEEE transactions on pattern analysis and machine intelligence, 40(8):1903–1917. Weixin Yang, Lianwen Jin, Hao Ni, and Terry Lyons. 2016. Rotation-free online handwritten character recognition using dyadic path signature features, hanging normalization, and deep neural network. In *2016 23rd International Conference on Pattern* Recognition (ICPR), pages 4083–4088. IEEE. Weixin Yang, Terry Lyons, Hao Ni, Cordelia Schmid, and Lianwen Jin. 2017. Developing the path signature methodology and its application to landmarkbased human action recognition. arXiv preprint arXiv:1707.03993. Zijun Yao, Yifan Sun, Weicong Ding, Nikhil Rao, and Hui Xiong. 2018. Dynamic word embeddings for evolving semantic discovery. In *Proceedings of the* eleventh acm international conference on web search and data mining, pages 673–681. Junqi Zhang, Bing Bai, Ye Lin, Jian Liang, Kun Bai, and Fei Wang. 2020. General-purpose user embeddings based on mobile app usage. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2831– 2840. Chen Zhu, Hengshu Zhu, Yong Ge, Enhong Chen, Qi Liu, Tong Xu, and Hui Xiong. 2016. Tracking the evolution of social emotions with topic models. Knowledge and Information Systems, 47:517–544. Ayah Zirikly, Philip Resnik, Ozlem Uzuner, and Kristy Hollingshead. 2019. Clpsych 2019 shared task: Predicting the degree of suicide risk in reddit posts. In Proceedings of the sixth workshop on computational linguistics and clinical psychology, pages 24–33. Arkaitz Zubiaga, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Peter Tolmie. 2016. Analysing how people orient to and spread rumours in social media by looking at conversational threads. *PloS* one, 11(3):e0150989. ## A Hyperparameters Model Experimental Settings: We select the best model for each of the 5 folds (for TalkLife)/ 1 fold (for Reddit) using the best validation F1 macroaverage score on 70 epochs with early stopping (patience of 2 for TalkLife and 3 for Reddit). We used Adam optimiser (Kingma and Ba, 2014) with a weight decay of 0.0001. Following Tsakalidis et al. (2022a,b), we use the same train/test splits on both TalkLife and Reddit for direct comparison. For reported results we also used the same five random seeds of (0, 1, 12, 123, 1234), averaging them out at the end for both TalkLife and Reddit. Dev set was formed on 33% of the train set. 
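The model selection procedure described above (keeping the checkpoint with the best validation macro-avg F1 and stopping early after a fixed patience) can be sketched as follows; the training and validation callables are placeholders rather than the authors' code.

```python
# Minimal sketch (ours) of the model selection described above: train for up
# to 70 epochs, keep the checkpoint with the best validation macro-avg F1,
# and stop early once it has not improved for `patience` epochs.
# `train_one_epoch` and `validation_macro_f1` are placeholder callables.
import copy

def fit(model, train_one_epoch, validation_macro_f1, max_epochs=70, patience=2):
    best_f1, best_state, epochs_without_improvement = -1.0, None, 0
    for epoch in range(max_epochs):
        train_one_epoch(model)
        f1 = validation_macro_f1(model)
        if f1 > best_f1:
            best_f1, best_state = f1, copy.deepcopy(model.state_dict())
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break
    model.load_state_dict(best_state)
    return model, best_f1
```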
Hyperparameter selection is based on the validation set, through grid search with the following parameters: learning rate ∈ [0.0001, 0.0003], batch size of 64, reduced UMAP dimensions of 15, Convolution 1D reduced dimensions ∈ [10, 12], LSTM hidden dimensions of SWNU ∈ [10, 12], BiLSTM hidden dimensions ∈ [200, 300], dimensions of feed-forward layers ∈ [32, 64], dropout rate of 0.1, γ of focal loss ∈ [2, 3] and α of √(1/pt), with pt being the probability of class t in the training data. The best hyperparameters on TalkLife data are: learning rate = 0.0003, feed-forward layer dimensions = 32, γ = 2, Convolution 1D reduced dimensions = 12, LSTM hidden dimensions of SWNU = 10 and BiLSTM hidden dimensions = 300. For Reddit the best hyperparameters are: learning rate = 0.0001, feed-forward layer dimensions = 64, γ = 2, Convolution 1D reduced dimensions = 10, LSTM hidden dimensions of SWNU = 10 and BiLSTM hidden dimensions = 200.

BiLSTM-bert(hist): For consistency we reproduced the history version of the BiLSTM-bert model as reported by Tsakalidis et al. (2022b). We used fine-tuned BERT representations trained on BERT-base (uncased) with a dropout rate of 0.25 and a linear layer on the [CLS] output, trained for 3 epochs using the Adam optimiser and a batch size of 8. These were based on focal loss with γ = 2 and α of √(1/pt), with pt being the probability of class t in the training data. We used the BERT model fine-tuned with focal loss above to obtain the representation inputs of the BiLSTM-bert(hist) model, for classification on the post level. BiLSTM-bert(hist) models each current post and its recent history using the 29 most recent posts in total. Following the exact same hyperparameters as Tsakalidis et al. (2022a), we explored BiLSTM units ∈ [64, 128, 256] for the first and 124 units for the second BiLSTM, dropout rate ∈ [0.25, 0.50, 0.75] and an output layer. Similar to the authors we used cross-entropy loss with batch size ∈ [16, 32, 64] and learning rate ∈ [0.001, 0.0001]. We employed early stopping (with patience 2) on 100 epochs and ran the final model on the same five random seeds of (0, 1, 12, 123, 1234).

BiLSTM-sbert(hist): We reproduced the history version of the BiLSTM-bert model as per Tsakalidis et al. (2022b). We used pre-trained sentence-BERT representations (Reimers and Gurevych, 2019) of 384 dimensions to obtain the representation inputs of the BiLSTM-sbert(hist) model, for post-level classification. BiLSTM-sbert(hist) models each current post and its recent history using the 29 most recent posts in total. Following the exact same hyperparameters as Tsakalidis et al. (2022a), we explored BiLSTM units ∈ [64, 128, 256] for the first and 124 units for the second BiLSTM, dropout rate ∈ [0.25, 0.50, 0.75] and an output layer. Similar to the authors we explored batch size ∈ [16, 32, 64] and learning rate ∈ [0.001, 0.0001]. For the loss function we employed focal loss for direct comparison with Seq-Sig-Net, which also uses focal loss with γ ∈ [2, 3] and α of √(1/pt) (with pt being the probability of class t in the training data). We employed early stopping (with patience 2 for TalkLife and 3 for Reddit) on 100 epochs and ran the final model on the same five random seeds of (0, 1, 12, 123, 1234).

Ablation Study (including SBERT(avg hist)): We performed hyper-parameter tuning for all the models of the study using the Adam optimiser (Kingma and Ba, 2014) with a weight decay of 0.0001 and focal loss (Lin et al., 2017).
We used the exact same train/test splits for direct comparison as well as the same five random seeds of (0, 1, 12, 123, 1234). For hyperparameter tuning of ablation models, including SBERT(avg hist) we followed a similar regime with our main experimental setting, using a learning rate ∈ [0.0001, 0.0003], batch size of 64, dimensions of feed-forward layers ∈ [32, 64], dropout rate of 0.1, γ of focal loss ∈ [2, 3] and alpha of p1/pt with pt being the probability of class t in the training data. For the ablation of SWNU Network we also used reduced UMAP dimensions of 15, Convolution 1D reduced dimensions ∈ [10, 12] and LSTM hidden dimensions ∈ [10, 12]. ## B Libraries The experiments ran in a Python 3.8.13 environment with the following libraries: torch (1.8.1), signatory (1.2.6), numpy (1.19.5), pandas (1.4.2), sentence_transformers (2.0.0), scikitlearn (1.0.1), umap (0.5.3). ## C Infrastructure The runs were performed on a Standard F16s_v2, with 16 CPUs and 32 GiB of RAM. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 'Limitations' paragraph after the Conclusion and Future Work section ✓ A2. Did you discuss any potential risks of your work? 'Ethics Statement' paragraph, after the 'Limitations' ✓ A3. Do the abstract and introduction summarize the paper's main claims? Both the Abstract and the Introduction summarise our main claims. Our contributions are listed at the end of the Introduction. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4.1 ✓ B1. Did you cite the creators of artifacts you used? Section 4.1 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Provided in the 'Ethics Statement' ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 'Ethics statement' and references in Section 4.1 in using the artefacts in the same way as the authors intended, while also using the same settings as referenced in Appendix A ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 'Ethics statement' ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? We mention the language of the data at Section 4.1 and the platforms from which they were obtained. We mention throughout this Section and in the rest of the paper that the artifacts are around the mental health domain. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 
Sections 4.1 and Appendix A The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ## C ✓ **Did You Run Computational Experiments?** Our experimental setup is explained in Section 4. Further ablation studies and analyses are reported in sections 5.2-5.5. Hyperparameters are reported in the Appendix A ('Hyperparameters'). The libraries we have used are provided in Appendix B ('Libraries'). ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? We provide the number of parameters and memory requirements for our model and the most competitive baseline as well as the computational budget in Section 5.3 ('Computational Resources'). We also provide a detailed list of hyperparameters and random seeds we have used in our experiments (Appendix A). The infrastructure we have used is provided in Appendix C. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? All of the information is provided in Appendix A. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? We report average statistics from our runs with random seeds (clarified in Appendix A). We do not report all results per fold/run/dataset, as we thought this would be overwhelming (5 runs with 5 seeds on two datasets). ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Yes. We have detailed our packages and their corresponding versions in Appendix B ('Libraries') and elaborated around specifics of a package in Section 3.3 ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** We Work With Data Shared In Online Platforms By Real Users. We Did Not Use Any Annotators. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? In the 'Ethics Statement'. We have IRB ethics for the work as mentioned in the 'Ethics Statement', while we only work with existing datasets. ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? 'Ethics Statement' D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
li-etal-2023-multi-modal-debiasing
A Multi-modal Debiasing Model with Dynamical Constraint for Robust Visual Question Answering
https://aclanthology.org/2023.findings-acl.311
Recent studies have pointed out that many well-developed Visual Question Answering (VQA) systems suffer from the bias problem. Despite the remarkable performance gained on In-Distribution (ID) datasets, a VQA model might merely capture the superficial correlation from question to answer rather than showing real reasoning abilities. Therefore, when switching to an Out-of-Distribution (OOD) dataset, whose test distribution is unknown or even reversed with respect to the training set, significant drops might be demonstrated. Although efforts have been devoted to easing the negative bias effect brought by language priors and analysing its inherent cause, they are still limited in the following two aspects. First, most current debiasing methods achieve promising OOD generalization ability with a major sacrifice of the ID performance. Second, existing research is restricted in exploiting comprehensive biases, since it mainly focuses on weakening the language bias, while only a few works consider vision bias. In this paper, we investigate a straightforward way to mitigate the bias problem for the VQA task. Specifically, we reduce the bias effect by subtracting a bias score from the standard VQA base score. Based on such a direct strategy, we design two bias learning branches to detect more bias information, which are combined with a dynamical constraint loss to alleviate the problems of over-correction and insufficient debiasing. We evaluate our method on the challenging VQA v2.0 and VQA-CP v2.0 datasets and the proposed method achieves significant improvement.
# A Multi-Modal Debiasing Model with Dynamical Constraint for Robust Visual Question Answering

Yu Li1, Bojie Hu1,2, Fengshuo Zhang1, Yahan Yu2, Jian Liu1, Yufeng Chen1 and Jinan Xu1∗

1 Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, China
2 Tencent Minority-Mandarin Translation, Beijing, China
{yuyuli, fengshuozhang, jianliu, jaxu, chenyf}@bjtu.edu.cn, [email protected], [email protected]

∗ Jinan Xu is the corresponding author.

## Abstract

Recent studies have pointed out that many well-developed Visual Question Answering (VQA) systems suffer from the bias problem. Despite the remarkable performance gained on In-Distribution (ID) datasets, a VQA model might capture the superficial correlation from question to answer rather than showing real reasoning abilities. Therefore, when switching to an Out-of-Distribution (OOD) dataset, whose test distribution is unknown or even reversed with respect to the training set, significant drops appear. Efforts have been devoted to the negative bias brought by language priors, but they are still limited in two aspects. First, most current debiasing methods achieve promising OOD generalization ability with a sacrifice of the ID performance. Second, they are restricted in exploiting comprehensive biases, since they mainly focus on weakening the language bias and few works consider vision bias. In this paper, we investigate a straightforward way to mitigate the bias problem for the VQA task by subtracting a bias score from the VQA base score. We then design two bias learning branches to detect more bias, combined with a dynamical constraint loss to alleviate the problems of over-correction and insufficient debiasing. We evaluate our method on the challenging VQA v2.0 and VQA-CP v2.0 datasets and achieve significant improvement.

## 1 Introduction

Visual Question Answering (VQA) (Antol et al., 2015) is a challenging task spanning both computer vision and natural language processing. The goal of VQA is to infer the answer based on a given image and a textual question, which is generally cast as a **classification problem**. Results on test sets whose distribution is analogous to the training set, such as VQA v2.0 (Goyal et al., 2017), are generally favorable. However, recent studies (Agrawal et al., 2016; Goyal et al., 2017) have pointed out that many well-developed VQA models merely over-exploit the language prior from the training set to provide correct answers without reasoning. That is, the answer prediction might rely more on the correlation with the question and less on the image. For instance, in the VQA-CP v2.0 (Agrawal et al., 2018) training set, the answers to questions starting with "*how many ...*" are usually "2", and the answers to the specific question "*what's the color of the bananas?*" are almost all "*yellow*". Consequently, significant drops (Agrawal et al., 2018) are observed when handling the out-of-distribution test set.

Figure 1: A straightforward way to mitigate the bias problem for the VQA task is by subtracting the QA score from the VQA base score.
Recently, solutions to this problem can be categorized into two classes, namely non-augmentation-based methods (Cadene et al., 2019; Ramakrishnan et al., 2018; Wu and Mooney, 2019; Selvaraju et al., 2019; Clark et al., 2019; Jing et al., 2020; Niu et al., 2021) and augmentation-based methods (Chen et al., 2020; Gokhale et al., 2020; Liang et al., 2020; Teney et al., 2020). The former seeks to weaken language bias or leverage visual grounding to increase the image dependency, while the latter aims to balance the distribution of training data. Moreover, most of the advanced debiasing methods still suffer from two issues, namely comprehensive bias detection (Wen et al., 2021) and In-Distribution (ID) generalizability problems (Niu and Zhang, 2021).

In this work, firstly we explore a very straightforward solution to VQA bias, which is shown in Figure 1. Generally, we design different strategies for learning and inference via a VQA base model and a question answering (QA) model. In the training procedure, these two models are separately optimized; let $S_{VQA}$ and $S_{QA}$ denote the VQA base score and the QA score, respectively. In the inference procedure, we calculate the debiased result by subtracting $S_{QA}$ from $S_{VQA}$. We apply such a simple strategy to the popular VQA model Updn (Anderson et al., 2018) on the VQA-CP v2.0 dataset, and we find that the overall accuracy improves from 39.80% to 51.49%, as shown in Table 1.

Table 1: Accuracy of Updn on VQA-CP v2.0 before and after subtracting the QA score from the VQA base score.

| Model | ALL | Y/N | NUM. | Other |
|---|---|---|---|---|
| Updn ($S_{VQA}$) | 39.80 | 41.39 | 12.10 | 46.56 |
| Updn ($S_{VQA} - S_{QA}$) | 51.49 | 76.38 | 13.93 | 48.74 |

Despite its remarkable performance, we still identify two major limitations in this strategy, which reflect the following two aspects:

- **First, the model answers questions without comprehensively exploiting vision bias.** Figure 2 (a) & (b) on the left indicate the impact of bias related to the visual side, where the salient "*elephant*" object leads to wrong answers. According to our statistics, most of the irrelevant wrong answers "*elephant*" appear in "*what*"-type questions, while the biased answers might be different for questions belonging to the "*how many*" type, such as Figure 2 (c). Therefore, both the language and visual modalities might jointly bring about bias.
- **Second, strong uncertainty exists in the final score, since the base model and the bias model are optimized separately.** That means the model cannot guarantee that the correct answer has the highest score after subtracting the bias effect. For this reason, the model still suffers from inadequate debiasing and over-correction problems, as shown in the right part of Figure 2.

To solve the above problems, we propose a Multi-modal Debiasing model with Dynamical Constraint (MDDC). For the first limitation, we construct two bias learning branches. Inspired by the way of using a single modality to identify unimodal-specific bias, we adopt a question-only branch for language bias. Unfortunately, such a strategy is unsuitable for the bias issue related to visual information. The reason is that the same image is usually used to answer various types of questions, so the model cannot obtain the specific vision bias that involves the information necessary to answer the question, but only an image-to-answer distribution bias. We assume that a more effective way is to provide some question clues for images to generate question-specific vision bias. Following this assumption, we design a special bias learning branch by incorporating prompts extracted from questions into an image-only answering model.
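A minimal sketch of the simple subtraction strategy above, assuming the VQA base model and the question-only model have already been trained separately; the model interfaces and tensor shapes are placeholders, not the authors' implementation.

```python
# Minimal sketch (ours, not the authors' code) of the simple strategy:
# a VQA base model and a question-only (QA) model are trained separately,
# and at inference the QA score is subtracted from the VQA base score.
import torch

def debiased_scores(vqa_model, qa_model, image_feats, question_feats):
    """Return S_VQA - S_QA for every candidate answer (both are sigmoid scores)."""
    with torch.no_grad():
        s_vqa = torch.sigmoid(vqa_model(image_feats, question_feats))  # [batch, n_answers]
        s_qa = torch.sigmoid(qa_model(question_feats))                 # [batch, n_answers]
    return s_vqa - s_qa

# prediction = debiased_scores(...).argmax(dim=-1)
```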
For the second limitation, we propose a dynamical constraint loss to reduce the difference in the amount of information (Ramakrishnan et al., 2018) between the VQA base module and the bias module. In this way, we dynamically subtract the bias score according to the degree of bias. Therefore, we mitigate the problem of uncertain inference caused by the separate optimization of these two modules. We evaluate the proposed MDDC on the VQA v2.0 and VQA-CP v2.0 benchmarks. Experimental results on both datasets demonstrate that our debiasing strategy is competitive compared with mainstream baselines.

## 2 Related Work

Visual question answering has witnessed great progress, while a growing body of work (Agrawal et al., 2016; Goyal et al., 2017) has pointed out its drawbacks in reasoning ability and the bias effect. In this section, we review recently proposed VQA debiasing approaches, which generally fall into non-augmentation-based methods and augmentation-based methods.

## 2.1 Non-Augmentation-Based Methods

One strategy is to introduce prior knowledge (i.e., human visual and textual explanations) to strengthen the visual grounding of the VQA model. HINT (Selvaraju et al., 2019) and SCR (Wu and Mooney, 2019) are proposed with a self-critical training objective that encourages the correct answers to match important image regions with the help of human explanations. Another common solution (Ramakrishnan et al., 2018; Cadene et al., 2019) is to design ensemble-based models, which add an auxiliary QA model to identify bias. Ramakrishnan et al. (2018) propose an adversarial regularization method between the VQA base model and the question-only branch to overcome language bias. RUBi (Cadene et al., 2019) also leverages the QA model to capture language bias when unwanted regularities are identified. Wen et al. (2021) use both question-to-answer and vision-to-answer models to generate bias representations of the two modalities. Niu et al. (2021) design a novel counterfactual inference framework to reduce language bias by subtracting the direct language effect from the total causal effect of VQA. Guo et al. (2022) propose a loss re-scaling approach to assign different weights to each answer according to the training data statistics.

## 2.2 Augmentation-Based Methods

Recent studies automatically generate additional question-image pairs to balance the distribution of training data. Chen et al. (2020) propose a method, CSS, to produce massive counterfactual samples by masking the critical objects and words. Mutant (Gokhale et al., 2020) generates samples by semantic transformations of the original images or questions. Teney et al. (2020) and Zhu et al. (2020) obtain negative samples to balance the dataset without external annotations. Chen et al. (2022) design a knowledge distillation-based answer assignment to generate pseudo answers for each image-question pair. However, it is important to note that VQA-CP is proposed to evaluate whether a VQA model can distinguish between visual knowledge and language priors. Therefore, we expect the model to be robust enough to make debiased inferences under biased training.

## 3 Our Approach

In this section, we first describe the general architecture of our proposed MDDC model and then give the details of each component.
Figure 3 depicts the overview of our approach, which consists of three major modules: (1) the standard VQA base module, which aims to indicate the probability belonging to each answer candidate; (2) the bias module, which aims to capture biases combining both questions and images simultaneously; and (3) the dynamical constraint module, which aims to dynamically control the final prediction distribution.

## 3.1 Standard VQA Base Module

Given a dataset $D = \{(v_i, q_i, a_i)\}_{i=1}^{N}$ which contains $N$ samples, we define the $i$-th image $v_i \in V$, the $i$-th question $q_i \in Q$, and the $i$-th answer $a_i \in A$. A standard VQA module is defined as:

$$p(a|v_{i},q_{i})=\sigma(f_{VQA}(e_{v}(v_{i}),e_{q}(q_{i})))\tag{1}$$

where $e_v(\cdot)$ and $e_q(\cdot)$ denote the image and question encoders, respectively, $f_{VQA}(\cdot)$ represents the mapping function which is learned to project the multi-modal feature to the answer space, and $\sigma(\cdot)$ is the sigmoid function.

## 3.2 Bias Module

At the heart of our system is the design to obtain bias distributions. To make use of this intuition, we capture the language bias by using a QA model, and the vision bias by incorporating question clues into a vision-to-answer-only model.

## 3.2.1 Language Bias Learning

Language bias stands for the prior that produces the answer only according to the given question. For example, given a question $q_i$, we denote the language bias answer probability as:

$$p(a|q_{i})=\sigma(f_{Q}(e_{q}(q_{i})))\tag{2}$$

where $f_Q(\cdot)$ is a linear function that maps the question representation to the answer space.

## 3.2.2 Question-Guided Vision Bias Learning

We introduce a question-guided vision bias learning module for VQA debiasing, which is shown in Figure 4. Since it is hard to obtain targeted bias merely from visual information, a more flexible way is to guide images to generate answers with the intent and concepts of questions. The intent-level clues provide semantic enhancement for individual images, manifesting the goal of the question in a global view. Additionally, the concept-level clues can supplement more semantics to images, where a concept refers to a set of entities mentioned in the question. Here, we compute the answer probability predicted by the question-guided vision bias module as follows:

$$p(a|v_{i},a_{t},q_{t},c)=\sigma(f_{F}(e_{v}(v_{i}),a_{t},q_{t},c))\tag{3}$$

where $a_t$, $q_t$ and $c$ stand for the answer type, question type and concepts of the question, respectively, and $f_F(\cdot)$ is the function that combines these components and maps the fused representation to the answer space. Concretely, we fuse $a_t$ and $q_t$ via a gate mechanism to obtain the question intent vector, which is then added to each image region embedding. Then, multi-layer self-attention (Vaswani et al., 2017) is adopted to enable interactive learning over the image features incorporated with intent clues. Finally, we obtain the vision bias output via a concept attention or average pooling operation. Note that the concept attention is a standard attention mechanism using $c$ as the query to weight the image regions. However, we assume that not all questions are suitable for using concepts. For example, for the *other* type question "*What color is the apple?*", it might be easy to answer "red" if the concept "*apple*" and the intent "*what color*" are provided. But *number* type questions are still hard to answer even when the intent and concept are given. Thus we only apply concept attention to *number* type questions, and employ an average pooling operation for *yes/no* and *other* type questions.
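The following is a rough sketch of our reading of this question-guided vision bias branch; the hidden size, answer vocabulary size and the exact gate formulation are assumptions on our part, while the overall flow (gate fusion, adding the intent vector to each region, self-attention, then concept attention or mean pooling) follows the description above.

```python
# Rough sketch of Section 3.2.2 (our interpretation, not the authors' code).
# Assumed: d_model=512, n_answers=3129, and a convex gate between a_t and q_t.
import torch
import torch.nn as nn

class QuestionGuidedVisionBias(nn.Module):
    def __init__(self, d_model=512, n_layers=3, n_answers=3129):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)           # fuses a_t and q_t
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.self_attn = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, n_answers)

    def forward(self, regions, a_type, q_type, concept=None):
        # regions: [B, R, d]; a_type, q_type, concept: [B, d]
        gate = torch.sigmoid(self.gate(torch.cat([a_type, q_type], dim=-1)))
        intent = gate * a_type + (1 - gate) * q_type          # question intent vector
        h = self.self_attn(regions + intent.unsqueeze(1))     # add intent to every region
        if concept is not None:                               # concept attention (number questions)
            attn = torch.softmax((h @ concept.unsqueeze(-1)).squeeze(-1), dim=-1)
            pooled = (attn.unsqueeze(-1) * h).sum(dim=1)
        else:                                                 # average pooling (yes/no, other)
            pooled = h.mean(dim=1)
        return self.classifier(pooled)                        # bias logits over answers
```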
## 3.3 Dynamical Constraint

As mentioned above, it is necessary to build a connection between the standard VQA base module and the bias module. In this subsection, we introduce a dynamical constraint loss $\mathcal{L}_D$ to control the final distribution subtracted by the bias probability. Denote $B = \{b_1, \ldots, b_M\}$ as the set of features extracted from the $M$ bias modules. We define $s$ as the feature output by the VQA base module. $\mathcal{L}_D$ is then computed as:

$$\mathcal{L}_{D}=\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{A}\sum_{k=1}^{M}\beta_{ij}(I(a_{j}|s)_{i}-I(a_{j}|b_{k})_{i})\tag{4}$$

$$\beta_{ij}=p(a_{j}|s)_{i}\tag{5}$$

where $N$ and $A$ are the number of samples and the number of candidate answers, respectively, $\beta$ is a dynamic control coefficient, and $I(X|Y)$ represents the amount of information of $X$ under the condition of $Y$. The goal of $\mathcal{L}_D$ is to decrease the uncertainty of the standard VQA module prediction and increase the probability uncertainty of the bias module according to the degree of bias. As for the former, it helps the VQA base module learn adequate knowledge disrupted by bias. As for the latter, it prompts the bias module to compute an appropriate bias score, since different samples are affected by bias to varying degrees. Note that both the VQA probability score $p(a_j|s)$ and the bias probability score $p(a_j|b)$ satisfy the Bernoulli distribution, since the sigmoid function is applied to the final output layer. Therefore, $\mathcal{L}_D$ is different from the Kullback-Leibler divergence (Doersch, 2016). More details are explained in Appendix A.

## 3.4 Training and Inference

Training. In the model training phase, we separately optimize the standard VQA module and the bias module via the binary cross-entropy loss $bce(\cdot)$, which is defined as:

$$\mathcal{L}_{B}=bce(p(a|s),y)+w\sum_{k=1}^{M}bce(p(a|b_{k}),y)\tag{6}$$

where $w$ is a hyper-parameter to balance the base and bias components, and $y$ is the target label. Then, the final loss function is computed as $\mathcal{L} = \mathcal{L}_B + \lambda\mathcal{L}_D$, where $\lambda$ is the discount coefficient. Additionally, we stop the gradient backpropagation from the bias module to the language encoder and vision encoder in order to prevent the VQA base module from updating in a biased direction.

Inference. At the inference stage, the final score for the $j$-th answer, $\Delta p(a_j)$, is distinct according to the answer type and is defined as:

$$\Delta p(a_{j})^{t}=p(a_{j}|s)-\sum_{k=1}^{M}\alpha_{k}^{t}\,p(a_{j}|b_{k})\tag{7}$$

where $t$ is the answer type (e.g., *yes/no*, *number*, and *other*), and $\alpha_{k}^{t}$ stands for the weight of the $k$-th bias branch for type $t$, which satisfies the condition $\sum_{k=1}^{M}\alpha_{k}^{t}=1$.
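A minimal sketch of the combined training objective in Eqs. (4)-(6) is given below. The paper describes I(a_j|·) only as the "amount of information"; interpreting it as the self-information −log p and treating β as a constant weight are our assumptions.

```python
# Minimal sketch (ours, not the authors' code) of the training objective in
# Eqs. (4)-(6). We interpret I(a_j|.) as the self-information -log p and keep
# beta = p(a_j|s) detached (treated as a constant weight); both are assumptions.
import torch
import torch.nn.functional as F

def mddc_loss(base_logits, bias_logits_list, targets, w=1.0, lam=1.0, eps=1e-8):
    """base_logits: [B, A]; bias_logits_list: list of [B, A]; targets: [B, A]."""
    # L_B (Eq. 6): binary cross-entropy for the base module and each bias branch.
    l_b = F.binary_cross_entropy_with_logits(base_logits, targets)
    l_b = l_b + w * sum(F.binary_cross_entropy_with_logits(b, targets)
                        for b in bias_logits_list)

    # L_D (Eqs. 4-5): push down the information (uncertainty) of the base
    # prediction and push up that of the bias prediction, weighted by p(a_j|s).
    p_base = torch.sigmoid(base_logits)
    beta = p_base.detach()
    l_d = 0.0
    for b in bias_logits_list:
        p_bias = torch.sigmoid(b)
        info_gap = -torch.log(p_base + eps) + torch.log(p_bias + eps)
        l_d = l_d + (beta * info_gap).sum(dim=1).mean()

    return l_b + lam * l_d   # L = L_B + lambda * L_D
```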
## 4 Experiment

## 4.1 Experimental Settings

Dataset. We conduct experiments on the VQA-CP v2.0 dataset (Agrawal et al., 2018), which is proposed to evaluate debiasing ability. Besides, we also validate the performance on VQA v2.0 (Goyal et al., 2017) to assess the generalization ability on an ID dataset. For both datasets, the questions are divided into three categories: *yes/no*, *number* and *other*.

Metric. Following previous work (Antol et al., 2015), the standard evaluation metric of the VQA challenge is adopted, which is computed as:

$$Acc(ans)=\min\left(1,\frac{\#\,\text{humans provided}\ ans}{3}\right)$$

where *#humans provided ans* is the number of human annotators who provided the answer *ans* for the question.

Hyper-Parameters and Environment. Optimal hyper-parameters are chosen via grid search. All the embeddings of question clues are randomly initialized. The intent extraction model is trained by fine-tuning BERT (Devlin et al., 2019), and the concepts are extracted by an entity recognition tool. We use the PyTorch 1.4.0 framework to implement our model. All computations are done on NVIDIA Tesla V100 GPUs. Other important hyper-parameters are listed in Table 2.

Table 2: Main hyper-parameters for each backbone on VQA-CP v2.0 and VQA v2.0.

| Base | Parameter | VQA-CP v2.0 | VQA v2.0 |
|---|---|---|---|
| Updn | lr | 5e-4 | 5e-4 |
| Updn | batch size | 256 | 256 |
| Updn | epoch | 25 | 25 |
| Updn | α_1^t = {α^y, α^n, α^o} | {0.99, 0.01, 0.5} | {0.5, 0.5, 0.5} |
| Updn | α_2^t = {α^y, α^n, α^o} | {0.01, 0.99, 0.5} | {0.5, 0.5, 0.5} |
| LXMERT | lr | 5e-5 | 5e-5 |
| LXMERT | batch size | 32 | 32 |
| LXMERT | epoch | 10 | 10 |
| LXMERT | α_1^t = {α^y, α^n, α^o} | {0.99, 0.01, 0.5} | {0.5, 0.5, 0.5} |
| LXMERT | α_2^t = {α^y, α^n, α^o} | {0.01, 0.99, 0.5} | {0.5, 0.5, 0.5} |

## 4.2 Tested Backbones

We mainly implement our approach on two VQA backbones, namely Updn (Anderson et al., 2018) and LXMERT (Tan and Bansal, 2019).

Updn: the most popular VQA baseline, which first employs a pre-trained object detection model (Ren et al., 2015) to obtain features of salient image regions.

LXMERT: a multi-modal pre-training framework based on a cross-modality Transformer encoder. In our experiments, we separate this backbone into two groups depending on whether pre-trained weights are loaded or not.

## 4.3 Baselines

We compare our model with existing mainstream bias reduction techniques, which can be grouped as follows: (1) Methods incorporating human visual or textual explanations, including SCR (Wu and Mooney, 2019), AttAlign (Selvaraju et al., 2019) and HINT (Selvaraju et al., 2019). (2) Adversarial regularization-based methods, including AReg (Ramakrishnan et al., 2018) and GRL (Grand and Belinkov, 2019). (3) Ensemble-based methods, including RUBi (Cadene et al., 2019), LM (Clark et al., 2019), LMH (Clark et al., 2019) and Re-scaling (Guo et al., 2022). (4) The question encoding-based method DLR (Jing et al., 2020). (5) Counterfactual-based methods, including CF-VQA (Niu et al., 2021) and CSS (Chen et al., 2020). In what follows, all experimental results for the compared baselines are taken from their original papers.
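Before turning to the results, the accuracy metric defined in Section 4.1 can be computed as in the short sketch below; the example annotation lists are illustrative.

```python
# Minimal sketch of the standard VQA accuracy from Section 4.1: a predicted
# answer counts as fully correct if at least 3 human annotators gave it.
def vqa_accuracy(predicted_answer, human_answers):
    """human_answers: list of annotator answers for one question (typically 10)."""
    n_matching = sum(a == predicted_answer for a in human_answers)
    return min(1.0, n_matching / 3.0)

print(vqa_accuracy("yellow", ["yellow"] * 7 + ["green"] * 3))  # 1.0
print(vqa_accuracy("green", ["yellow"] * 8 + ["green"] * 2))   # 0.67
```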
Table 3: Summary of results on VQA-CP v2.0 and VQA v2.0 datasets. † denotes our implementation, and ∗ stands for using the LXMERT model structure without loading multi-modal pre-trained weights. The best score is in **bold** and the second best is underlined.

| Models | Base | VQA-CP All | VQA-CP Y/N | VQA-CP Num. | VQA-CP Other | VQA v2.0 All | VQA v2.0 Y/N | VQA v2.0 Num. | VQA v2.0 Other |
|---|---|---|---|---|---|---|---|---|---|
| SAN (Yang et al., 2016) | - | 26.88 | 38.35 | 11.96 | 42.98 | 52.41 | 70.06 | 39.28 | 47.84 |
| GVQA (Agrawal et al., 2018) | - | 39.23 | 57.99 | 13.68 | 22.14 | 48.24 | 72.03 | 31.17 | 34.65 |
| S-MRL (Cadene et al., 2019) | - | 38.46 | 42.85 | 12.81 | 43.20 | 63.10 | - | - | - |
| Updn (Anderson et al., 2018) | - | 39.74 | 42.27 | 11.93 | 46.05 | 63.48 | 81.18 | 42.14 | 55.66 |
| Updn† (Anderson et al., 2018) | - | 39.80 | 41.39 | 12.10 | 46.56 | **64.36** | 82.02 | 43.31 | 56.49 |
| AReg (Ramakrishnan et al., 2018) | Updn | 41.17 | 65.49 | 15.48 | 35.48 | 62.75 | 79.84 | 42.35 | 55.16 |
| GRL (Grand and Belinkov, 2019) | Updn | 42.33 | 59.74 | 14.78 | 40.76 | 63.27 | - | - | - |
| SCR (Wu and Mooney, 2019) | Updn | 48.47 | 70.41 | 10.42 | 47.29 | 62.30 | 77.40 | 40.90 | **56.50** |
| AttAlign (Selvaraju et al., 2019) | Updn | 39.37 | 43.02 | 11.89 | 45.00 | 63.24 | 80.99 | 42.55 | 55.22 |
| HINT (Selvaraju et al., 2019) | Updn | 46.73 | 70.04 | 10.68 | 46.31 | 63.38 | 81.18 | 42.99 | 55.56 |
| DLR (Jing et al., 2020) | Updn | 48.87 | 70.99 | 18.72 | 45.57 | 57.96 | 76.82 | 39.33 | 48.54 |
| RUBi (Cadene et al., 2019) | Updn | 44.23 | 67.05 | 17.48 | 39.61 | - | - | - | - |
| LM (Clark et al., 2019) | Updn | 48.78 | 72.78 | 14.61 | 45.58 | 63.26 | 81.16 | 42.22 | 55.22 |
| LMH (Clark et al., 2019) | Updn | 52.73 | 72.95 | **31.90** | 47.79 | 56.35 | 65.06 | 37.63 | 54.69 |
| CSS (Chen et al., 2020) | Updn | 41.16 | 43.96 | 12.78 | 47.48 | 59.21 | 72.97 | 40.00 | 55.13 |
| CF-VQA (HM) (Niu et al., 2021) | Updn | 49.74 | 74.81 | 18.46 | 45.19 | 63.73 | 82.15 | **44.29** | 54.86 |
| CF-VQA (SUM) (Niu et al., 2021) | Updn | 53.55 | **91.15** | 13.03 | 44.97 | 63.54 | **82.51** | 43.96 | 54.30 |
| Re-scaling (Guo et al., 2022) | Updn | 47.09 | 68.42 | 21.71 | 42.88 | 55.50 | 64.22 | 39.61 | 53.09 |
| MDDC (Ours) | Updn | **54.70** | 83.58 | 19.93 | **49.10** | 63.33 | 81.64 | 42.56 | 54.88 |
| LXMERT∗† (Tan and Bansal, 2019) | - | 40.91 | 41.91 | 13.71 | 47.85 | **65.32** | **83.13** | **46.51** | **56.75** |
| MDDC (Ours) | LXMERT | **53.83** | **76.73** | **26.07** | **49.44** | 64.03 | 82.15 | 45.64 | 55.10 |
| LXMERT† (Tan and Bansal, 2019) | - | 57.11 | 54.97 | 38.34 | 63.38 | **75.87** | **91.46** | **59.61** | **68.31** |
| MDDC (Ours) | LXMERT | **69.77** | **87.88** | **52.80** | **64.93** | 74.51 | 90.14 | 58.81 | 66.76 |

Table 4: Ablation study on VQA-CP v2.0 and VQA v2.0 datasets. † denotes our implementation, and ∗ stands for using the LXMERT model structure without loading multi-modal pre-trained weights. The best score is in **bold** and the second best is underlined.

| Models | VQA-CP All | VQA-CP Y/N | VQA-CP Num. | VQA-CP Other | VQA-CP ∆Gap | VQA v2.0 All | VQA v2.0 Y/N | VQA v2.0 Num. | VQA v2.0 Other | VQA v2.0 ∆Gap |
|---|---|---|---|---|---|---|---|---|---|---|
| Updn† | 39.80 | 41.39 | 12.10 | 46.56 | - | **64.36** | **82.02** | **43.31** | **56.49** | - |
| + bl | 51.49 | 76.38 | 13.93 | 48.74 | +11.69 | 62.41 | 81.17 | 42.47 | 53.40 | -1.95 |
| + bl + LD | 54.10 | **84.27** | 14.22 | **49.23** | +14.30 | 62.65 | 81.40 | 42.92 | 53.61 | -1.71 |
| + bl + bv | 52.15 | 77.00 | 16.61 | 48.87 | +12.35 | 63.04 | 81.60 | 41.24 | 54.69 | -1.32 |
| + bl + bv + LD | **54.70** | 83.58 | **19.93** | 49.10 | **+14.90** | 63.33 | 81.64 | 42.56 | 54.88 | **-1.03** |
| LXMERT∗ | 40.91 | 41.91 | 13.71 | 47.85 | - | **65.32** | **83.13** | **46.51** | **56.75** | - |
| + bl | 50.80 | 69.93 | 19.49 | 49.35 | +9.89 | 62.98 | 81.86 | 45.53 | 53.22 | -2.34 |
| + bl + LD | 51.48 | 71.82 | 19.57 | **49.56** | +10.57 | 63.17 | 81.69 | 45.60 | 53.73 | -2.15 |
| + bl + bv | 52.41 | 72.94 | 25.63 | 48.98 | +11.50 | 64.08 | 82.24 | 45.51 | 55.17 | **-1.24** |
| + bl + bv + LD | **53.83** | **76.73** | **26.07** | 49.44 | **+12.92** | 64.03 | 82.15 | 45.64 | 55.10 | -1.29 |
| LXMERT | 57.11 | 54.97 | 38.34 | 63.38 | - | **75.87** | **91.46** | **59.61** | **68.31** | - |
| + bl | 69.09 | 87.42 | 52.73 | 63.96 | +11.98 | 73.85 | 89.66 | 58.65 | 65.85 | -2.02 |
| + bl + LD | 69.30 | **87.99** | 50.85 | 64.55 | +12.19 | 73.84 | 89.84 | 58.53 | 65.72 | -2.03 |
| + bl + bv | 69.54 | 87.41 | 51.34 | **65.15** | +12.43 | 74.62 | 90.34 | 59.04 | 66.77 | **-1.25** |
| + bl + bv + LD | **69.77** | 87.88 | **52.80** | 64.93 | **+12.66** | 74.51 | 90.14 | 58.81 | 66.76 | -1.36 |
The best score is in **bold** and the second best is underlined. follows: (1) Methods incorporating human visual or textual explanation, including SCR (Agrawal et al., 2018), AttAlign (Selvaraju et al., 2019) and HINT (Selvaraju et al., 2019). (2) Adversarial regularization-based methods, including AReg (Ramakrishnan et al., 2018) and GRL (Grand and Belinkov, 2019). (3) Ensemble-based methods, including RUBi (Cadene et al., 2019), LM (Clark et al., 2019), LMH (Clark et al., 2019), Re-scaling (Guo et al., 2022). (4) Question encoding-based method DLR (Jing et al., 2020). (5) Counterfactualbased methods, including CF-VQA (Niu et al., 2021), CSS (Chen et al., 2020). In subsequent part, all the experimental results for the compared baselines are taken from their original papers. ## 4.4 Results The results on VQA-CP v2.0 and VQA v2.0 are reported in Table 3. Results on VQA-CP v2.0. Overall, our method achieves the best performance on VQA-CP v2.0 dataset compared with non-augmentation approaches. Drilling down to the question type, our method also gets competitive results. Specifically, we achieve the second-best results on *yes/no* type question, and the best results on *other* type question. It is worth noting that our strategy obtains improvements of 6.90% and 4.13% across *number* type and *other* type question compared with CFVQA (Niu et al., 2021) which also employs a subtracting way to reduce bias effect. We infer the reason is that our model detects more comprehensive biases from both language and vision aspects, and our dynamical constraint loss also plays a role in adjusting the final distribution. Results on VQA v2.0. In consistent with what metioned in (Agrawal et al., 2018; Selvaraju et al., 2019; Ramakrishnan et al., 2018; Cadene et al., 2019; Chen et al., 2020; Niu et al., 2021) , we usually observe a drop after debiasing on VQA v2.0 because of the almost consistent distribution followed by training and test datasets. As a comparison, our debiasing strategy demonstrates strong robustness and achieves competitive results. To sum up, these results not only show the effectiveness of our approach for reducing bias problem but also the value of the performance on ID dataset. ## 5 Analysis 5.1 Ablation Study An ablation experiment would be informative to analyze the effects of the dynamical constraint loss (denoted as + LD), and the bias learning strategy, which can be taken apart as language bias learning (denoted as + bl), and question-guided vision bias learning (denoted as + bv). For fairness, all the models are trained under the same settings. Table 4 lists the results on two datasets. It can be seen that coupling with all the components does really helpful on VQA-CP v2.0 (Agrawal et al., 2018), and can narrow the drop gap on VQA v2.0 (Goyal et al., 2017). When there is no multi-modal pre-trained knowledge (e.g., Updn (Anderson et al., 2018) and LXMERT∗(Tan and Bansal, 2019)), on OOD dataset (i.e., VQA-CP v2.0), we find + bv | Image | Intent | Concept | All | Y/N | Num. | Other | |---------|----------|-----------|-------|-------|--------|---------| | ✓ | 52.85 | 85.13 | 12.44 | 47.02 | | | | ✓ | ✓ | 53.65 | 86.24 | 12.27 | 47.91 | | | ✓ | ✓ | 53.10 | 86.01 | 12.46 | 47.00 | | | ✓ | 53.53 | 84.94 | 12.83 | 48.23 | | | | ✓ | ✓ | ✓ | 54.70 | 83.58 | 19.93 | 49.10 | brings significant improvement on *number* type question. 
A possible reason might be that the bias in *number* type questions is severely affected by images with language information (e.g., intent and concept) on VQA-CP v2.0. When leveraging pre-trained weights in LXMERT, there are still slight improvements brought by all the components. On the whole, our bias learning strategy (+ bl + bv) can detect more comprehensive biases than + bl alone, and it narrows the performance gap on the ID dataset (i.e., VQA v2.0). Fortunately, integrating the dynamical constraint improves the result across all base models on the OOD dataset, and LD does not have a significant negative impact on the ID dataset. To conclude, it is always preferable to use all the components (+ bl + bv + LD), due to the superior performance. This proves the effectiveness of our bias learning strategy and dynamical constraint.

## 5.2 Vision Bias Learning Analysis

## 5.2.1 Impact Of Question Clues

In the following set of experiments, we demonstrate the effectiveness of the question clues mentioned in our question-guided vision bias learning module on VQA-CP v2.0. Note that we only change three components (i.e., image, intent, concept) based on the overall Updn + MDDC model. As depicted in Table 5, we conclude that both the image and the question clues are necessary for debiasing. Concretely, leveraging the intent feature is helpful for *other* type questions, based on which incorporating concept information via a concept-attention mechanism boosts the performance on *number* type questions from 12.27% to 19.93%. Such a phenomenon indicates that the base model Updn might easily overfit the training set of the VQA-CP v2.0 dataset and learn less valid knowledge for number recognition.

## 5.2.2 Impact Of Layer Number

We further investigate the layer number of self-attention (Vaswani et al., 2017) in the question-guided vision bias learning module on the VQA-CP v2.0 test split. Figure 5 shows the change of accuracy on the test set as the layer number increases, which is based on the Updn model. A proper number of layers can make the model perform well on *number* type questions, which again verifies the effect of our vision bias learning strategy, while accuracies on the remaining items are more stable. We find that the best results are obtained when the layer number is equal to 3, and larger numbers do not provide a significant performance improvement.

## 5.3 Qualitative Analysis

Debiasing qualitative examples on VQA-CP v2.0 are shown in Figure 6. By inspecting the results, we can further verify that our debiasing approach can address more comprehensive biases and dynamically adjust the final score. As illustrated in Figure 6, MDDC can successfully mitigate the biased inference on all kinds of question types. For the example in the first row, MDDC overcomes the bias related to both the question and image sides (i.e., "white", "*elephant*"). The two examples in the second row manifest the effects of MDDC on *number* and *yes/no* type questions. Another example based on the Updn model in Figure 7 further illustrates the benefit brought by the dynamical constraint loss LD. Specifically, LD helps to increase the difference between the VQA score and the QA score corresponding to the answer "no", and it narrows the score gap of the wrong answer (i.e., "yes"), which promotes the final score of the correct answer to be the highest.
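To make the role of the two branches and of LD more concrete, the snippet below is a minimal PyTorch-style sketch of the per-answer objective that is formalized later in Eq. (9) of Appendix A, with β_i = p(a_i|s). It is an illustration rather than the released implementation: the function name, the default values of `lam` and `w`, and the choice to detach β are our own assumptions.

```python
import torch

def per_answer_loss(p_vqa, p_bias, target, lam=0.1, w=1.0, eps=1e-8):
    """Illustrative per-answer objective in the spirit of Eq. (9) in Appendix A.

    p_vqa : tensor of p(a_i | s), the base VQA branch scores in (0, 1)
    p_bias: tensor of p(a_i | b), the combined bias branch scores in (0, 1)
    target: tensor of (soft) labels y_i in [0, 1]
    lam, w: weights of the dynamical constraint and of the bias branch
            (placeholder values, not the settings used in the paper)
    """
    # beta_i = p(a_i | s); detaching it is our assumption, so the constraint
    # only re-weights the two branches without opening a second gradient path.
    beta = p_vqa.detach()

    log_p  = torch.log(p_vqa + eps)
    log_np = torch.log(1.0 - p_vqa + eps)
    log_b  = torch.log(p_bias + eps)
    log_nb = torch.log(1.0 - p_bias + eps)

    loss = -(target + lam * beta) * log_p \
           - (1.0 - target) * log_np \
           - (w * target - lam * beta) * log_b \
           - w * (1.0 - target) * log_nb
    return loss.sum(dim=-1).mean()
```

In this reading, a confident base score β_i up-weights the VQA branch and down-weights the bias branch for ground-truth answers, which matches the analysis given in Appendix A.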
## 6 Conclusion A robust visual question answering model with dynamical constraint is proposed for reducing as much multi-modal bias as possible. Compared with previous researches, we investigate a very straightforward way to obtain debiasing effect by subtracting bias score from VQA base score. On one hand, we design a language bias learning branch and a question-guided vision bias learning branch to detect comprehensive biases. On the other hand, a dynamical constraint loss is proposed related to the two bias branches to alleviate the over-correction and insufficient debiasing problems to some extent. Experimental results on VQA-CP v2.0 and VQA v2.0 datasets demonstrate the effectiveness of our proposed approach from both quantitative and qualitative perspectives. ## Limitations Our model introduces additional parameters in the question-guided vision bias module, compared with other methods. Moreover it is also worth exploring whether the question-guided vision bias module can improve *number* type questions in other OOD data sets. ## Acknowledgements The research work descried in this paper has been supported by the National Key R&D Program of China (2020AAA0108001) and the National Nature Science Foundation of China (No. 61976015, 61976016, 61876198 and 61370130). The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve this paper. ## References Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. 2016. Analyzing the behavior of visual question answering models. *arXiv preprint arXiv:1606.07356*. Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Aniruddha Kembhavi. 2018. Don't just assume; look and answer: Overcoming priors for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4971–4980. Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In *Proceedings of the IEEE Conference on Computer Vision* and Pattern Recognition, pages 6077–6086. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425–2433. Remi Cadene, Corentin Dancette, Matthieu Cord, Devi Parikh, et al. 2019. Rubi: Reducing unimodal biases for visual question answering. *Advances in Neural* Information Processing Systems, 32:841–852. Long Chen, Xin Yan, Jun Xiao, Hanwang Zhang, Shiliang Pu, and Yueting Zhuang. 2020. Counterfactual samples synthesizing for robust visual question answering. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 10800–10809. Long Chen, Yuhang Zheng, and Jun Xiao. 2022. Rethinking data augmentation for robust visual question answering. In *Proceedings of the European Conference on Computer Vision*, pages 95–112. Springer. Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2019. Don't take the easy way out: Ensemble based methods for avoiding known dataset biases. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language* Processing (EMNLP-IJCNLP), pages 4069–4082, Hong Kong, China. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. 
Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171– 4186. Association for Computational Linguistics. Carl Doersch. 2016. Tutorial on variational autoencoders. *arXiv preprint arXiv:1606.05908*. Tejas Gokhale, Pratyay Banerjee, Chitta Baral, and Yezhou Yang. 2020. Mutant: A training paradigm for out-of-distribution generalization in visual question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 878–892, Online. Association for Computational Linguistics. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In *Proceedings of the* IEEE Conference on Computer Vision and Pattern Recognition, pages 6904–6913. Gabriel Grand and Yonatan Belinkov. 2019. Adversarial regularization for visual question answering: Strengths, shortcomings, and side effects. In *Proceedings of the Second Workshop on Shortcomings* in Vision and Language, pages 1–13. Association for Computational Linguistics. Yangyang Guo, Liqiang Nie, Zhiyong Cheng, Qi Tian, and Min Zhang. 2022. Loss re-scaling VQA: revisiting the language prior problem from a classimbalance view. *IEEE Transactions on Image Processing*, 31:227–238. Chenchen Jing, Yuwei Wu, Xiaoxun Zhang, Yunde Jia, and Qi Wu. 2020. Overcoming language priors in vqa via decomposed linguistic representations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 11181–11188. Zujie Liang, Weitao Jiang, Haifeng Hu, and Jiaying Zhu. 2020. Learning to contrast the counterfactual samples for robust visual question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3285–3292, Online. Association for Computational Linguistics. Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xian-Sheng Hua, and Ji-Rong Wen. 2021. Counterfactual vqa: A cause-effect look at language bias. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12700– 12710. Yulei Niu and Hanwang Zhang. 2021. Introspective distillation for robust question answering. *Advances* in Neural Information Processing Systems, 34:16292– 16304. Sainandan Ramakrishnan, Aishwarya Agrawal, and Stefan Lee. 2018. Overcoming language priors in visual question answering with adversarial regularization. Advances in Neural Information Processing Systems, 31:1541–1551. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems, 28. Ramprasaath R Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry Heck, Dhruv Batra, and Devi Parikh. 2019. Taking a hint: Leveraging explanations to make vision and language models more grounded. In *Proceedings of the IEEE/CVF* international conference on computer vision, pages 2591–2600. Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5100–5111. 
Association for Computational Linguistics. Damien Teney, Ehsan Abbasnejad, Kushal Kafle, Robik Shrestha, Christopher Kanan, and Anton Van Den Hengel. 2020. On the value of out-ofdistribution testing: An example of goodhart's law. Advances in Neural Information Processing Systems, 33:407–417. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in Neural Information Processing Systems*, 30. Zhiquan Wen, Guanghui Xu, Mingkui Tan, Qingyao Wu, and Qi Wu. 2021. Debiased visual question answering from feature and sample perspectives. *Advances in Neural Information Processing Systems*, 34:3784–3796. Jialin Wu and Raymond Mooney. 2019. Self-critical reasoning for robust visual question answering. *Advances in Neural Information Processing Systems*, 32. ![9_image_0.png](9_image_0.png) Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 21–29. Xi Zhu, Zhendong Mao, Chunxiao Liu, Peng Zhang, Bin Wang, and Yongdong Zhang. 2020. Overcoming language priors with self-supervised learning for visual question answering. In *Proceedings of the* Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pages 1083–1089. International Joint Conferences on Artificial Intelligence Organization. ## A A Theoretical Explanation This section presents an approximately feasible theoretical explanation for our dynamical constraint LD. According to the definition, the total loss L (L = LB + λLD) can be combined like items and further simplified. For easier explanation, we extract the loss item of answer ai ∈ A from a single sample, namely L(ai) to illustrate, which is computed as: $$\begin{array}{l}{{{\mathcal L}(a_{i})=-\left(y_{i}+\lambda\beta_{i}\right)\log p(a_{i}|s)}}\\ {{\qquad-\left(1-y_{i}\right)\log\left(1-p(a_{i}|s)\right)}}\\ {{\qquad-\left(w y_{i}-\lambda\beta_{i}\right)\log p(a_{i}|b)}}\\ {{\qquad-w(1-y_{i})\log\left(1-p(a_{i}|b)\right)}}\end{array}\tag{9}$$ We assume the bias can be reflected as: the score of a common answer is extremely high or too low in the training phase, which affects the selection of the correct answer when evaluating on test set. Here, we consider two boundary cases, depending on whether the target label of the current answer is 1 or 0 (i.e., y = 1 or y = 0). If yi = 1, both the VQA score p(ai|s) and the bias score p(ai|b) are optimized to 1. On this condition, L(ai) can be transformed to: ![10_image_0.png](10_image_0.png) Since βi = p(ai|s), when βi → 1, the learning procedure of VQA base model is further boosted while the bias learning is inhibited. Due to the extremely long-tailed answer distribution, the training objective can be unbalanced across different answers. It indicates that the unbalanced bias knowledge might be overlearned from more samples during training. Thus, if the sample is severely biased, the bias score tends not to decrease much after suppression, and the VQA base score will be relatively less reserved after subtracting (as shown in Figure 8 A). 
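For reference, the display for this $y_i = 1$ case was lost to an image placeholder in the source; substituting $y_i = 1$ into Eq. (9), so that the $(1 - y_i)$ terms vanish, recovers its core form as follows. This is our reconstruction; any brace annotations carried by the original figure are omitted.

```latex
% y_i = 1 specialization of Eq. (9): the (1 - y_i) terms drop out.
\mathcal{L}(a_i) = -\left(1 + \lambda\beta_i\right)\log p(a_i\,|\,s)
                   -\left(w - \lambda\beta_i\right)\log p(a_i\,|\,b)
```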
If yi = 0, both p(ai|s) and p(ai|b) are optimized to 0, and the L(ai) can be reduced to: $$\mathcal{L}(a_{i})=\underbrace{-\lambda\beta_{i}\log p(a_{i}|s)}_{\text{inhibition}}-\log\left(1-p(a_{i}|s)\right)$$ $$\underbrace{\overbrace{+\lambda\beta_{i}\log p(a_{i}|b)}^{\text{bootstrap}}-w\log(1-p(a_{i}|b))}_{\text{bias learning}}$$ Intuitively, the term −λβilog p(ai|s) inhibits p(ai|s) → 0, thus it can prevent the model from overfitting the training set to some extent. In addition, the item +λβilog p(ai|b) boosts p(ai|b) → 0. Therefore, when the VQA base score of a wrong answer is high (i.e., B), the process of adjusting the prediction scores is similar to the condition of y = 1 (as shown in Figure 8 B). In this way, during inference procedure, the final score of the biased answer might be more likely to decrease, while the unbiased answer tends to retain a relatively higher final score. In summary, such a strategy can help to prevent the model from overfitting the training set, and dynamically obtain a more appropriate final score. ## B Illustrative Examples In order to fully demonstrate the specific role of the dynamic constraint loss, we deliver more illustrative examples to show the probability predictions for each branch, as shown in Figure 9. We choose Updn as the backbone, and all the results are obtained under the same experimental settings. For the cases in Figure 9, we find that when LD is not added (w/o LD), strong uncertainty exists in the prediction results. The reason is that the VQA branch and the bias branches are trained separately, causing debiasing effect to be less significant in certain cases. By contrast, we explicitly introduce LD to the model (w/ LD) and thus obtain satisfactory results. ![11_image_0.png](11_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? the section: Limitation ✓ A2. Did you discuss any potential risks of your work? the section: Limitation ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract; Section 1: Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. ✗ B1. Did you cite the creators of artifacts you used? Left blank. ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Left blank. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. ✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? 
Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C ✓ **Did You Run Computational Experiments?** Section 4: Experiment ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Limited by number of pages The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4: Experiment ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Limited by number of pages ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4: Experiment D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
guan-etal-2023-trigger
Trigger-Argument based Explanation for Event Detection
https://aclanthology.org/2023.findings-acl.312
Event Detection (ED) is a critical task that aims to identify events of certain types in plain text. Neural models have achieved great success on ED, thus coming with a desire for higher interpretability. Existing works mainly exploit words or phrases of the input text to explain models' inner mechanisms. However, for ED, the event structure, comprising an event trigger and a set of arguments, provides more enlightening clues to explain model behaviors. To this end, we propose a Trigger-Argument based Explanation method (TAE), which can utilize event structure knowledge to uncover a faithful interpretation for existing ED models at the neuron level. Specifically, we design group, sparsity, and support mechanisms to construct the event structure from the structuralization, compactness, and faithfulness perspectives. We evaluate our model on the large-scale MAVEN and the widely-used ACE 2005 datasets, and observe that TAE is able to reveal the process by which the model predicts. Experimental results also demonstrate that TAE can not only improve interpretability on standard evaluation metrics, but also effectively facilitate human understanding.
## Trigger-Argument Based Explanation For Event Detection Yong Guan1, Jiaoyan Chen2, Freddy Lecue3, Jeff Z. Pan4,∗ **Juanzi Li**1∗ , Ru Li5 1Department of Computer Science and Technology, Tsinghua University, Beijing, China 2Department of Computer Science, The University of Manchester, UK 3INRIA, France 4School of Informatics, The University of Edinburgh, UK 5School of Computer and Information Technology, Shanxi University, Taiyuan, China [email protected], [email protected], [email protected] ## Abstract A critical task for constructing event knowledge graphs is event detection (ED), which aims to identify events of certain types in plain text. Neural models have achieved great success on ED, thus coming with a desire for higher interpretability. Existing works mainly exploit words or phrases of the input text to explain models' inner mechanisms. However, for ED, the event structure, comprising of an event trigger and a set of arguments, provides more enlightening clues to explain model behaviors. To this end, we propose a Trigger-Argument based Explanation method (TAE), which can utilize event structure knowledge to uncover a faithful interpretation for the existing ED models at neuron level. Specifically, we design group, sparsity, *support* mechanisms to construct the event structure from structuralization, compactness, and faithfulness perspectives. We evaluate our model on the large-scale MAVEN and the widely-used ACE 2005 datasets, and observe that TAE is able to reveal the process by which the model predicts. Experimental results also demonstrate that TAE can not only improve the interpretability on standard evaluation metrics, but also effectively facilitate the human understanding. ## 1 Introduction Event Detection (ED) aims at identifying event triggers with specific event types, which is the first and fundamental step for extracting semantic and structural knowledge from plain text (Ahn, 2006; Nguyen and Grishman, 2015). For instance, event mention "*The train driver was beaten over the head* by a thug." in Figure 1 comprises an event trigger "*beaten*" and a set of arguments such as "the train driver", "*the head*" and "*a thug*". An ideal ED system is expected to detect "*beaten*" as an event trigger of the type Bodily_harm. Recently, with the growth of open source annotated datasets ∗Corresponding authors. ![0_image_0.png](0_image_0.png) Figure 1: Different explanations. For (a) and (b), features with deeper colors are considered more important by previous work. The usefulness of event triggers and arguments are illustrated in (c) and (d). "arg" refers to "*argument*". Bodily_harm and Competition are two event types in MAVEN. (Walker et al., 2006; Wang et al., 2020) and the development of deep learning technologies, deep approaches have become popular for tackling the ED problem (Nguyen et al., 2016; Wang et al., 2021). Despite their great performance, they are still opaque for people to comprehend the inner mechanisms. Although there exist many works that focus on explaining the model behavior on natural language processing (NLP) problems, such as text classification (Lei et al., 2016), text matching (Jiang et al., 2021) and machine reading comprehension (Ju et al., 2021), very little progress has been made to interpret ED models. We identify two major limitations that prevent the existing explanation methods from being applied to ED models. Neglecting event structured knowledge. 
Existing methods mainly focus on assessing the contributions of individual input unit (e.g., word or phrase) to generate explanations for neural networks (Li et al., 2016; Jiang et al., 2021). As shown in Figure 1(a) and (b), both explanations provide insights of which words (e.g., "*beaten*") or phrases (e.g., "*beaten over the head*") contribute to the prediction. However, neither of them is suitable to explain ED models as an event is represented as a structure comprising an event trigger and a set of arguments. Thus, the trigger-argument structures are more sensible clues to explain ED systems. In Figure 1(c), "*beaten*" is an ambiguous word that may evoke completely dissimilar events such as Bodily_harm and Competition. In this case, trigger word "*beaten*" and its arguments (e.g., "*the head*" and "*a thug*" which refer to Body_part and Agent) work together for the prediction Bodily_harm. Thus, how to take advantage of the event structure knowledge for ED model explanation is a non-trivial task. Explanations cannot reflect the decisionmaking process. Models usually provide important features which are words or phrases selected from an input text as explanations, but they do not further elaborate the function of these features, i.e., why models produce the prediction according to these features. It poses challenges to interpret an explanation and connect it to model prediction. For example, in Figure 1(a) and (b), models may assign high relevance score to "*train driver*" or "*thug*", but it is still confused why these features can lead to the prediction Bodily_harm. In fact, "*train driver*" and "*thug*" serve as Victim and Agent, which compose together for the Bodily_harm event in which "An Agent *injures a* Victim" in Figure 1(c). Furthermore, Figure 1(d) provides an example that wrongly classifies Competition as Bodily_harm, because models take "*Harley*" and "*John*" as Agent and Victim rather than Participant_1 and Participant_2. Thus, exploring explanations that can not only identify important features but also reveal how these features contribute to the prediction are urgently needed. To address the aforementioned challenges, we propose TAE, a Trigger-Argument based Explanation method, to generate structure level explanations for ED models. Specifically, TAE focuses on utilizing neuron features of ED models to construct explanations based on trigger-argument knowledge. It has three core sub-modules: *Group Modeling* aims to divide neurons into different groups, where each group is regarded as an event structure, in such a way that each neuron corresponds to one argument and works together with other neurons that belong to the same event structure to explain the prediction of ED models; *Sparsity Modeling* aims to compact explanations by designing differentiable masking mechanisms to automatically filter out useless features generated by the group mechanism, and the intuition behind this module is that a good explanation should be short for understanding or reading (Miller, 2019); *Support Modeling* aims to ensure that the explanations generated by the group and sparsity mechanisms are faithful to the predictive model. Note we utilize FrameNet, a well-defined linguistic knowledge base by experts, to assist TAE identify event structures and help humans understand the decision-making process. 
The contributions of this paper are as follows: - We propose a model-agnostic method, called TAE ( Trigger-Argument based Explanation), to construct structure-level explanations for Event Detection (ED) systems. To the best of our knowledge, this is the first exploration to explain ED with structure knowledge. - TAE adopts three strategies, namely, Group Modeling, *Sparsity Modeling* and *Support* Modeling to characterize the trigger-argument based explanations from structuralization, compactness, and faithfulness perspectives. - We utilize FrameNet (Baker et al., 2006), a well-defined knowledge base, to help complete the event structure in MAVEN. The annotated data is released for further research1. - Experimental results on the large-scale MAVEN and widely-used ACE 2005 datasets show that TAE can generate more faithful and human-understandable explanations. ## 2 Related Work In this section, we review the related works on Event Detection and *Interpretation Methods*. Event Detection. Event detection is a key task for Event Knowledge Graph (Pan et al., 2017) construction. Traditional methods for ED have employed feature based techniques (Ji and Grishman, 2008; Li et al., 2013). These approaches mainly rely on elaborately designed features and NLP tools. Later, advanced deep learning methods have been applied for ED, such as convolutional neural networks (Chen et al., 2015), bidirectional 1https://github.com/neuroninterpretation/TAE recurrent neural networks (Nguyen et al., 2016), which can take advantage of neural networks to learn features automatically. Since pre-trained language models (PLMs) are capable of capturing the meaning of words dynamically by considering their context, they have proven successful on a range of NLP tasks including ED (Tong et al., 2020; Wang et al., 2021, 2020). Although neural networks and PLMs bring incredible performance gains on ED task, they offer little transparency concerning the inner workings. Interpretation Methods. There has been growing interests in producing explanations for deep learning systems in recent years, enabling humans to understand the intrinsic mechanism. In general, the explanations from these methods can typically be categorized as post-hoc explanations that aim to explain a trained model and reveal how model arrives at prediction (Lipton, 2016). Among them, gradient-based, attention-based and erasure-based methods are three typical methods. Gradient-based methods are model-aware interpretation methods using gradients to measure feature importance (Shrikumar et al., 2017). Since a token index is not ordinal, methods simply sum up the relevance scores of each representation dimension. Because the score can have a negative or positive sign, the score may become zero even if it does contribute to prediction (Arras et al., 2017). Attention-based methods attempt to use attention weights as feature importance scores (Vashishth et al., 2019). However, attention is argued to not be an optimal method to identify the attribution for an output as its validity is still controversial (Bastings and Filippova, 2020). Erasure-based methods are widely-used approaches where a subset of features is considered irrelevant if it can be removed without affecting the model prediction (Feng et al., 2018). A straightforward approach is to erase each token by replacing it with a predefined value such as zero (Li et al., 2016). 
However, these erasure methods usually generate explanations by calculating the contribution of individual unit to the predictions, which are not suitable for ED as an event is often correctly identified with event structure. In this paper, we attempt to generate explanations for ED models by considering semantic structured knowledge (Chen et al., 2018) entailed in the input at neuron level, which is complementary to the aforementioned approaches. ## 3 Preliminaries 3.1 Event Detection An event refers to "a specific occurrence involving one or more participants" in automatic content extractions.To facilitate the understanding of the ED task, we introduce related terminologies as follows: Event Trigger: the main word which most clearly expresses an event that happens. Event Arguments: the entities that are involved in an event that serves as a participant. Event Mention: a phrase or sentence within which an event is described. For event mention "*The train driver was beaten* over the head by a thug", an event extractor is expected to identify an Bodily_harm event triggered by "*beaten*" and extract corresponding arguments with different roles such as "*the train* driver" (Victim) and "*a thug*" (Agent). In this paper, instead of explaining the overall standard event extraction models, we concentrate only on the ED task. That is, for this example, our goal is to explain why ED models can classify the event as Bodily_harm or not. ## 3.2 Problem Formulation ED explanation aims to explain a trained ED model and reveal how the model arrives at the prediction. For an event mention x = {x1, x2, ..., xi*, ..., x*n} with n words, a given pre-trained neural network (NN) f maps x to the corresponding label yj , where yj ∈ {y1, y2, ..., yj *, ..., y*m} is corresponding event type which has unique trigger-arguments Fx ∈ F. Assume that the NN model f = g(h(x)) can be decomposed into two-stages: (1) utilizes h(·) to map the input x to the intermediate layer h(x) = {h1(x), h2(x), ..., hk(x)*, ..., h*N (x)} with N neurons, such that h(·) ∈ R (N·d), and hk(x) is the k-th neuron in h(x); and (2) uses g(·) to map the intermediate layer h(x) to the output g(h(x)), which is the probability of input x being predicted as label yj by NN model, as shown in top part of Figure 2. To better understand neurons, recent work attempts to identify the closest features to explain its behavior. The correlations between neuron and feature are obtained as follows: Neu(hk(x)) = arg max ρ(hk(x), F) (1) where F are features of the input x, such as key words, POS, and trigger-arguments. Neu(hk(x)) is the most related feature selected in x to represent hk(x). ρ is an arbitrary correlation calculation ![3_image_0.png](3_image_0.png) function, and we use IoU (intersection over union) in this paper. Following the existing work (Ghorbani et al., 2019; Mu and Andreas, 2020), we select the closest layer to the classifier that already learns more abstract information for prediction, to detect the neuron behavior2. ## 4 Method In this paper, we propose TAE, a trigger-argument based explanation method for event detection, which attempts to utilize event structure knowledge to explain model predictions at neuron level. 
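For illustration, the sketch below shows one way Eq. (1) can be instantiated with IoU as ρ, by matching a neuron's activation mask over token positions against the spans of the trigger and arguments. The data structures, the binarization of activations, and the helper names are illustrative assumptions rather than the implementation used in this paper.

```python
from typing import Dict, Set

def neuron_to_feature(neuron_mask: Set[int],
                      features: Dict[str, Set[int]]) -> str:
    """Illustrative reading of Eq. (1): align a neuron with the feature whose
    token span overlaps its activation mask the most, using IoU as rho.

    neuron_mask : token positions where the neuron fires above some threshold
                  (how activations are binarized is an assumption of this sketch)
    features    : feature name -> token positions it covers, e.g. the span of
                  the trigger or of an argument such as Victim or Agent
    """
    def iou(a: Set[int], b: Set[int]) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0

    # Neu(h_k(x)) = argmax_F rho(h_k(x), F)
    return max(features, key=lambda name: iou(neuron_mask, features[name]))
```

For the event mention of Figure 1, for instance, the feature map could associate "trigger:beaten" and "Agent:a thug" with their token positions, and the returned name indicates which trigger-argument the neuron most closely tracks.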
The overview of our method is shown in Figure 2, which contains three modules: (1) The *Group* module captures structured knowledge of events; (2) The *Sparsity* module encourages models to select few but key features in the event structure; (3) The *Support* module is a fundamental module that guarantees that the explanations generated by *Group* and *Sparsity* are consistent with the original prediction. The loss function of a structured explanation for an event is obtained by an optimization problem:

$${\mathcal{L}}=\arg\min\,\lambda_{g}{\mathcal{L}}_{g}+\lambda_{s}{\mathcal{L}}_{s}+\lambda_{sd}{\mathcal{L}}_{sd}\quad(2)$$

where $\mathcal{L}_g$, $\mathcal{L}_s$ and $\mathcal{L}_{sd}$ are from the group, sparsity and support modules, while $\lambda_g$, $\lambda_s$ and $\lambda_{sd}$ are hyper-parameters controlling the weights of the different losses.

## 4.1 Group Modeling

The Group module aims to divide neurons into different groups, and each group corresponds to a trigger-argument structure. Some existing works try to aggregate related features according to distance information, such as encouraging the highly overlapping regions of the image (Varshneya et al., 2021), or gathering the neighbor words to enhance the current word (De Cao et al., 2020; Jiang et al., 2021). However, these methods might not work here, as arguments of event types can be scattered in different positions and are usually not adjacent to each other in input texts. To solve this problem, we propose a group loss objective that constructs event structures by aggregating neurons corresponding to the related arguments. We first use the clustering algorithm k-means (Hartigan and Wong, 1979; Ghorbani et al., 2019) to automatically cluster neurons with the nearest mean into the same group:

$$G=\mathrm{K-means}\ (\{h_{k}(x)\})\qquad\qquad(3)$$

where $G\in\{G_1,G_2,...,G_L\}$ is the group set and L is the group number. Then, for an individual group $G_l$, we use the IoU to measure the contribution $\phi(h_{i}^{l}(x))$ of neuron $h_{i}^{l}(x)$ in the group:

$$\phi(h_{i}^{l}(x))=\frac{2||h_{i}^{l}(x)-F_{x}||_{1}}{||h_{i}^{l}(x)||_{1}+||F_{x}||_{1}+||F_{x}-h_{i}^{l}(x)||_{1}}\tag{4}$$

where $F_{x}$ is the trigger-argument feature of input x, and $h_{i}^{l}(\cdot)$ is the i-th neuron in group $G_l$. Finally, the group objective $\mathcal{L}_g$ is to minimize the intra-cluster sum of the distances from each neuron to the labeled feature in the input (Varshneya et al., 2021), given by the following equation:

$${\mathcal{L}}_{g}=\sum_{l}^{L}{\frac{1}{|G_{l}|}}\sum_{i}\phi(h_{i}^{l}(x))\qquad\quad(5)$$

During the training phase, for each batch, we extract the trigger-arguments over the whole batch while calculating $\mathcal{L}_g$. This means that the neurons can learn batch-level data features rather than individual features, which can enhance the generalization ability.

2For a NN model, it has been shown that lower layers usually encode word and position information, and the higher layers can learn hierarchically-oriented information.

## 4.2 Sparsity Modeling

The Sparsity module aims to produce compact and human-friendly explanations. This is achieved by removing "dead neurons" (Mu and Andreas, 2020), which are useless for model prediction, while only keeping the key information to explain predictions. To this end, following the existing work (De Cao et al., 2020), we use a differentiable masking mechanism to filter out the useless neuron features. Specifically, for each extracted neuron, a classifier with a sigmoid activation function is added to determine whether the neuron should be masked or not.
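A compact sketch of the group objective in Eqs. (3)–(5) and of the per-neuron sigmoid gate just described is given below; the L1 objective introduced next simply sums the gate outputs. Tensor shapes, class names, and the assumption that the k-means assignment of Eq. (3) is computed beforehand are ours, not details taken from the released code.

```python
import torch
import torch.nn as nn

def group_loss(neurons: torch.Tensor, feature: torch.Tensor,
               group_ids: torch.Tensor) -> torch.Tensor:
    """Sketch of Eqs. (3)-(5). `neurons` is (N, d): one row per neuron h_k(x);
    `feature` is (d,): the trigger-argument feature F_x; `group_ids` is (N,):
    cluster assignments, e.g. from k-means as in Eq. (3), computed elsewhere."""
    # Eq. (4): IoU-style contribution of each neuron w.r.t. F_x, written
    # with L1 norms exactly as in the equation.
    diff = (neurons - feature).abs().sum(dim=1)
    denom = neurons.abs().sum(dim=1) + feature.abs().sum() + diff
    phi = 2.0 * diff / denom

    # Eq. (5): average the contributions within each group, then sum over groups.
    loss = neurons.new_zeros(())
    for g in group_ids.unique():
        loss = loss + phi[group_ids == g].mean()
    return loss

class NeuronGate(nn.Module):
    """Per-neuron sigmoid classifier deciding whether a neuron is kept."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, neurons: torch.Tensor) -> torch.Tensor:
        # Returns varphi(h_k(x)) in (0, 1); summing these values gives the
        # L1-style sparsity objective described in the next paragraph.
        return torch.sigmoid(self.score(neurons)).squeeze(-1)
```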
During training phase, we directly use L1 norm (Jiang et al., 2021) to minimize the number of the neurons as follows: $${\mathcal{L}}_{s}=\operatorname*{min}\;\sum_{k}\varphi(h_{k}(x))$$ where φ(·) is the neuron classifier. The straightforward idea is to minimize the non-zero position. ## 4.3 Support Modeling The support module aims to ensure the faithfulness of explanations generated by *Group* and *Sparsity*. A desirable interpretable event detection model should satisfy the intuition that a prediction is directly dependent on the selected features. For an ED model, we choose the neurons in h(x) to generate explanations. Group and Sparsity are utilized to select neuron features µ containing structured and important information. Thus the goal of Support is to measure whether µ can depict the true profile of how the model works. Specifically, function h′(·) maps µ to the new hidden states h′(µ), and g(·) maps the new hidden states h′(µ) to the new output g(h′(µ)), as shown in the bottom part of Figure 2. We introduce an optimization objective to guarantee the support modeling. Different from the existing work matching the current prediction, we directly ask the reconstructed representation to meet the ground truth distribution3. $$\begin{array}{c}{{{\mathcal{L}}_{s d}={\mathcal{P}}(\hat{y}|g(h^{\prime}(\mu),\theta))}}\\ {{s.t.\quad K L(g(h(x)),g(h^{\prime}(\mu)))}}\end{array}$$ where yˆ and θ are the ground truth labels and trainable parameters respectively. KL represents Kullback–Leibler divergence. Note h′(·) can be any current popular network architectures, such as LSTM, Transformer and PLMs. In our setting, to maintain the interpretability, we use the simple linear projection and MLP (multilayer perceptron) to build the network, and the computation is much more efficient since we don't need to optimize the whole backbone (Yeh et al., 2020). In addition, in this way, it mainly focuses on learning the neuron behavior instead of sacrificing the performance of the pre-trained CNN models. 3Assume that we extract neurons from the pre-trained model, and the neuron exactly meets the current prediction. If we detect the trigger-argument information to be useful for model decision and remove the useless neurons, a reasonable explanation may meets or better than the current prediction. | Methods | MAVEN | ACE 2005 | | | | | |-----------|---------|------------|------|------|------|------| | P | R | F1 | P | R | F1 | | | LSTM | 51.3 | 52.4 | 51.5 | 63.4 | 66.8 | 64.8 | | LSTM+CNN | 53.5 | 54.2 | 52.3 | 65.6 | 67.2 | 64.9 | | BERT | 52.6 | 63.5 | 57.7 | 69.9 | 72.2 | 70.5 | | DMBERT | 53.1 | 65.2 | 58.6 | 71.9 | 74.7 | 71.4 | | DeBERTa | 58.7 | 65.6 | 60.8 | 73.7 | 74.4 | 72.1 | $$(6)$$ Table 1: Model performance on MAVEN and ACE. P and R refer to Precision and Recall respectively. ## 5 Experiment 5.1 Datasets We evaluate our models on MAVEN (Wang et al., 2020) and ACE 2005 dataset (Walker et al., 2006). MAVEN is a manually annotated dataset4for event detection task without annotated arguments, which contains 168 event types and 4,480 documents. The event types are manually derived from the frames defined in FrameNet (Baker et al., 1998; Guan et al., 2021b,a). To satisfy our needs, we utilize the automatic frame parser SEMAFOR (Das et al., 2014) to parse the MAVEN data. We select the data that have event type in MAVEN, and regard the corresponding frame elements (Guo et al., 2020) as the event arguments. 
Finally, we collect 12,649 event mentions, and randomly split them into train/dev/test sets with sizes of 8,469/2,000/2,000. ACE 2005 is also a manually annotated dataset5, which contains 8 event types, 35 argument roles and 599 English documents (Li et al., 2020). We further remove the data without arguments, and finally select 3,014 examples. Since the data size is relatively small, and cannot use it to learn a better NN model. So we directly utilize the models trained on MAVEN to test on ACE 2005, which can also verify the models' generalization ability.6 $$\begin{array}{l}{(7)}\\ {(8)}\end{array}$$ ## 5.2 Ed Models Our TAE is a model-agnostic method to explain ED models. In this paper, we first select two typical NN models, namely LSTM (Hochreiter and Schmidhuber, 1997) which contains 2 layers with 300 hidden states, LSTM+CNN (Tan et al., 2015) which has 2 layers with 300 hidden states. Moreover, we also select three PLM-based models which Models ExplanationsACE 2005 MAVEN ![5_image_0.png](5_image_0.png) Support Sparsity Support Sparsity AORC AOPC SUPP SPAR AORC AOPC SUPP SPAR LSTM LEAVE-ONE-OUT 0.988 0.623 0.003 33.92 1.101 0.634 0.002 38.34 BACKSELECT 1.103 0.614 0.005 36.72 1.124 0.682 0.005 40.17 LIME 0.955 0.659 0.005 34.04 0.904 0.772 0.001 37.51 DIFFMASK 0.913 0.717 0.019 **4.967** 0.903 0.891 0.013 **5.163** TAE(OURS) **0.872 0.812 0.027** 5.220 **0.886 1.057 0.015** 5.313 LSTM+CNN LEAVE-ONE-OUT 0.943 0.764 0.012 33.60 0.935 0.821 0.014 37.17 BACKSELECT 0.981 0.776 0.018 33.89 0.954 0.846 0.011 38.65 LIME 0.886 0.742 0.017 32.54 0.823 0.928 0.016 35.22 DIFFMASK 0.801 0.799 0.021 4.841 0.778 1.116 0.027 **3.738** TAE(OURS) **0.737 0.971 0.035 3.494 0.725 1.136 0.031** 4.244 BERT LEAVE-ONE-OUT 0.841 0.893 0.026 26.06 0.915 1.076 0.040 29.29 BACKSELECT 0.775 0.922 0.026 25.14 0.873 1.091 0.037 27.33 LIME 0.714 0.954 0.035 23.44 0.809 1.139 0.041 26.07 DIFFMASK 0.679 1.247 0.058 3.862 0.764 1.326 0.041 4.791 TAE(OURS) **0.535 1.453 0.072 2.471 0.693 1.557 0.048 2.926** DMBERT LEAVE-ONE-OUT 0.829 0.966 0.033 22.81 0.874 1.115 0.043 25.52 BACKSELECT 0.767 0.979 0.029 21.47 0.822 1.156 0.044 23.82 LIME 0.667 1.097 0.038 19.54 0.737 1.241 0.047 22.29 DIFFMASK 0.517 1.207 0.048 2.733 0.662 1.464 0.046 3.429 TAE(OURS) **0.477 1.528 0.066 1.246 0.572 1.626 0.051 1.582** ![5_image_1.png](5_image_1.png) LEAVE-ONE-OUT 0.730 0.945 0.036 23.55 0.781 1.014 0.040 23.90 BACKSELECT 0.700 0.971 0.033 20.15 0.743 1.002 0.043 23.35 LIME 0.692 1.083 0.048 19.54 0.716 1.156 0.050 21.82 DIFFMASK 0.616 1.221 0.054 1.938 0.719 1.155 0.058 2.437 TAE(OURS) **0.525 1.674 0.073 1.148 0.623 1.774 0.069 1.603** achieve promising performance on ED, including BERT (Devlin et al., 2019) which has 12 layers and 768 hidden states, DMBERT (Wang et al., 2019) which also applied on BERT-base version with 768 hidden states, and DeBERTa (He et al., 2021) which has 24 layers and 1,536 hidden states. Table 1 shows the results (P, R, F1) of different models on both datasets in our experiments, where DeBERTa outperforms the other four models with higher F1 scores. ## 5.3 Support Evaluation We adopt three metrics to evaluate support degree (i.e., faithfulness): two metrics from prior explanation work including *area over reservation curve* (AORC) (DeYoung et al., 2020) and area over the perturbation curve (AOPC) (Nguyen, 2018), and a new defined evaluation metric called *support-score* (SUPP). 
AORC calculates the distance between the original predicted logits and the masked ones by reserving the top k% neuron features which are identified by trigger-arguments, as follows:

$$\mathrm{Aorc}=\sum_{k=0}^{K}||\mathcal{P}(\hat{y}|x)-\mathcal{P}_{(k)}^{\prime\prime}(\hat{y}|x)||_{2}\tag{9}$$

where $\mathcal{P}_{(k)}^{\prime\prime}(\hat{y}|x)$ denotes the prediction which reserves the top k% neuron features. Under this metric, lower AORC scores are better. The AOPC score calculates the average change in prediction probability on the predicted class over all test data by deleting the top r% neuron features:

$$\text{Aopc}=\frac{1}{N}\sum_{i=1}^{N}\{\mathcal{P}(\hat{y}|x)-\mathcal{P}_{(r)}^{\prime\prime}(\hat{y}|x)\}\tag{10}$$

where $\mathcal{P}_{(r)}^{\prime\prime}(\hat{y}|x)$ is the prediction which removes the top r% neuron features. N denotes the number of examples. In our experiment, r is set to 20. Under this metric, larger scores are better. We propose the SUPP score to verify whether the new prediction g(h′(µ)) is positive with respect to the original one g(h(x)). Under this metric, larger SUPP scores are better.

$$\text{Supp}=\frac{1}{N}\sum\{g(h^{\prime}(\mu))-g(h(x))\}\tag{11}$$

We compare TAE with four competitive baselines, namely LEAVE-ONE-OUT (Li et al., 2016), LIME (Ribeiro et al., 2016), BACKSELECT (Carter et al., 2019) and DIFFMASK (De Cao et al., 2020), utilizing the AORC, AOPC and SUPP metrics. Automatic support evaluation results are shown in Table 2, and we have the following three observations: (1) TAE achieves better performance in most cases across all three metrics on both datasets. For the SUPP metric, all methods achieve positive results, indicating our method can identify important features and make a positive contribution to model predictions. (2) Compared to LSTM- and CNN-based methods, BERT-based methods achieve significantly better performance. It is perhaps because BERT has already preserved a large amount of general knowledge by training on large-scale data. (3) Compared with MAVEN, the results on ACE are equally remarkable. Overall, our model achieves very strong results on different types of data and methods, proving that it is a good model-agnostic approach.

| Methods | Attack | Arriving | Statement | Motion | Process_start | Creating | Death | Giving | Avg. |
|---|---|---|---|---|---|---|---|---|---|
| LSTM | 0.0542 | 0.0628 | 0.0354 | **0.0478** | 0.0602 | 0.0686 | **0.0537** | 0.0489 | 0.0540 |
| LSTM+Group | **0.0564** | **0.0675** | **0.0381** | 0.0453 | **0.0633** | **0.0705** | 0.0526 | **0.0545** | **0.0560** |
| LSTM+CNN | 0.0564 | 0.0536 | **0.0430** | 0.0508 | 0.0627 | **0.0439** | 0.0556 | 0.0376 | 0.0505 |
| LSTM+CNN+Group | **0.0615** | **0.0557** | 0.0426 | **0.0513** | **0.0725** | 0.0437 | **0.0598** | **0.0447** | **0.0540** |
| BERT | 0.0746 | 0.0662 | 0.0577 | 0.0692 | 0.0701 | 0.0720 | 0.0510 | 0.0653 | 0.0658 |
| BERT+Group | **0.0763** | **0.0683** | **0.0581** | **0.0758** | **0.0712** | **0.0739** | **0.0629** | **0.0772** | **0.0705** |
| DMBERT | 0.0763 | 0.0651 | 0.0497 | 0.0833 | 0.0614 | 0.0720 | 0.0624 | 0.0668 | 0.0671 |
| DMBERT+Group | **0.0789** | **0.0651** | **0.0586** | **0.0917** | **0.0721** | **0.0733** | **0.0640** | **0.0766** | **0.0725** |
| DeBERTa | 0.0782 | 0.0654 | 0.0517 | 0.0862 | 0.0638 | 0.0695 | 0.0601 | 0.0676 | 0.0677 |
| DeBERTa+Group | **0.0822** | **0.0655** | **0.0588** | **0.0917** | **0.0799** | **0.0710** | **0.0615** | **0.0747** | **0.0731** |

## 5.4 Sparsity Evaluation

For evaluating the sparsity, we directly report the sparsity score, which is obtained in Equation 6, just like the explanation work (Jiang et al., 2021), as the metric. Under this criterion, the score reflects the degree of sparsity, and lower scores are better. The intuition behind this criterion is that a good explanation should be short and easy to understand. The results are reported in Table 2.
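For reference, the sketch below shows one way the scores reported in Table 2 can be computed: AORC, AOPC and SUPP follow Eqs. (9)–(11), and SPAR sums the per-neuron gate outputs of Eq. (6). The array shapes and averaging conventions are our own illustrative assumptions rather than the evaluation code used for this paper.

```python
import numpy as np

def aorc(p_orig: np.ndarray, p_reserved: np.ndarray) -> float:
    """Eq. (9)-style score: distance between the original probability vector
    (shape (C,)) and the predictions obtained when only the top-k% neuron
    features are reserved, stacked over reservation levels (shape (K, C))."""
    return float(np.linalg.norm(p_reserved - p_orig, axis=1).sum())

def aopc(p_orig: np.ndarray, p_removed: np.ndarray) -> float:
    """Eq. (10)-style score: average drop of the predicted-class probability
    after deleting the top-r% neuron features, over N examples (shape (N,))."""
    return float(np.mean(p_orig - p_removed))

def supp(g_selected: np.ndarray, g_orig: np.ndarray) -> float:
    """Eq. (11)-style score: average gain of the prediction reconstructed
    from the selected neurons over the original prediction (shape (N,))."""
    return float(np.mean(g_selected - g_orig))

def spar(gate_values: np.ndarray) -> float:
    """Eq. (6)-style sparsity score: sum of per-neuron gate outputs;
    averaging over examples (rows) is an assumption of this sketch."""
    return float(np.mean(np.sum(gate_values, axis=-1)))
```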
We can see TAE achieves the lowest SPAR values in most cases in the automatic evaluation; for example, the SPAR values of our TAE + DEBERTA on ACE and MAVEN are 1.603 and 1.148, while those of LEAVE-ONE-OUT are 23.90 and 23.55, which indicates that TAE can effectively discover the useless neurons.

## 5.5 Group Evaluation

In order to verify the effectiveness of the group mechanism, we use two metrics for explanations. First, following previous explanation work (Bau et al., 2017; Varshneya et al., 2021), for each predefined group, we compute the number of unique trigger-arguments in the group as the *interpretability* score. Second, for the trigger-argument structure, we average the *IoU score* computed as in Equation 1 to represent its explanation quality score, following Mu and Andreas (2020).

Figure 3 shows the comparison of the interpretability score with different group numbers. As the group number increases, the number of trigger-arguments detected by the model gradually increases, indicating that the grouping mechanism can improve the model interpretability. However, the number of trigger-arguments remains constant after the group number exceeds 50. A major reason is the uneven distribution of the data, which mainly concentrates on 20% of the event types. Note that the maximum group number is limited by the number of event types; for MAVEN, the maximum group number is 168. Table 3 shows the IoU score of 8 event types. In this setting, we test each event type separately. The score increases with the group mechanism in most cases, which can further prove the effectiveness of the group mechanism.

## 5.6 Analysis On Trigger-Argument

To further verify the effectiveness of the trigger-arguments, we introduce the features used in Mu and Andreas (2020) as a comparison, such as POS (part-of-speech), most common words, and entities. They suggest that a neuron cannot be regarded as a simple detector (Bau et al., 2017) but may express the meaning of multiple concepts. So they use composition operations such as OR, AND and NOT to expand the neuron behavior. We use the average IoU score over all neurons at different formula lengths as one metric:

$$SL_{i}=\frac{1}{|h_{x}|}\sum_{j}\arg\max\operatorname{IoU}(h_{j}(x),F)\quad(12)$$

where $SL_i$ is the IoU score at formula length i, and F is the feature set. $h_j(\cdot)$ denotes the j-th neuron feature and $|h_x|$ is the number of neurons. Under this metric, larger IoU scores are better.

Figure 4 shows the results with (w/) and without (w/o) trigger-arguments, and we obtain the following two findings: (1) With the help of trigger-arguments, the IoU scores are larger than those without trigger-arguments at each formula length. The results demonstrate that trigger-arguments can help generate more faithful explanations compared to word-level features. That is because each argument expresses a complete meaning, which may cover a semantic span rather than an individual word. (2) As the maximum formula length grows, the IoU score keeps increasing. When the formula length is greater than 10, the score no longer changes, indicating that the maximum representation capacity of a neuron is 10 trigger-arguments for the model. We further perform a qualitative analysis by deleting the arguments with high support scores. As shown in Figure 5, the event mention "*The crash sparked a review of helicopter safety*" belongs to Causation.
Explanation of our TAE model is that "*The crash*" and "*sparked a review of helicopter* safety" are two core arguments to form Causation that "*An Cause causes an Effect*". So when we delete argument effect ("*sparked a review of helicopter safety*"), ED model wrongly classifies the event as Process_start. The same applies to the second event Attack, when delete the Victim, ED models identify it as Catastrophe. The qualitative results indicate that our proposed TAE can capture trigger-argument structures that are important for model prediction. ## 5.7 Case Study Figure 6 shows an example of TAE explanation. Given an event mention, 1) Group Modeling divides neurons into different event structure according to the arguments information, e.g., neurons are grouped into Military_operation, Attack ![7_image_1.png](7_image_1.png) and Departing; 2) Sparsity Modeling filters useless features such as Depictive and Result to compact the explanations; 3) Support Modeling selects features that are consist with the prediction, for example, Attack are more faithful comparing to Military_operation and Departing. From the above three procedures, we obtain the TAE explanation, which not only contains important features from the text but also reveals why they are important for the final prediction. For instance, "*American, Canadian, British and French aircraft* and ground forces" and "*retreating Iraqi military* personnel" respectively refer to Assailant and Victim, which work together to characterize the Attack event in which "Assailant *physically attacks the* Victim". In addition, with the help of trigger-argument information, the explanation is more helpful for human understanding. ## 6 Conclusion In this paper, we propose a trigger-argument based explanation method, TAE, which exploits the event structure-level explanations for event detection (ED) task. TAE focuses on utilizing neuron features of ED models to generate explanations, along with three strategies, namely, group modeling, sparsity modeling, and support modeling. We conduct experiments on two ED datasets (i.e., MAVEN and ACE). The results show that TAE achieves better performance compared to the strong baselines on multiple public metrics. In addition, TAE also provides more faithful and human-understandable explanations. There might be a few different future directions. Firstly, we might look into the idea of using explanations to further improve ED classification, as well as ED explanations in downstream applications. Secondly, we plan to explore ED under the multi-modal setting. Thirdly, event relation extraction is still challenging and deserves some further investigation. From the practical aspect of event knowledge graphs (EKGs), it is worth investigating high-quality yet efficient methods for constructing EKGs and making use of EKGs to predict future events (Lecue and Pan., 2013; Deng et al., 2020). Furthermore, it might be an idea to integrate commonsense knowledge (Speer et al., 2016; Romero et al., 2019; Malaviya et al., 2020) into event knowledge graphs. ## Limitations In this section, we discuss the limitations of TAE. First, as our method depends on event structure information which is obtained through automatic parser, if the parser is not good enough, then it will impact the performance. Second, since we focus on leveraging structural information, we restrict the experiments on text-based event explanation. Future work will explore multi-modal event detection explanations and evaluate models on other NLP tasks. 
## Acknowledgments This work is supported by the Institute for Guo Qiang, Tsinghua University (2019GQB0003), the NSFC Youth Project (62006136), and the Chang Jiang Scholars Program (J2019032). ## References David Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Reasoning about Time and Events, pages 1–8. Leila Arras, Grégoire Montavon, Klaus-Robert Müller, and Wojciech Samek. 2017. Explaining recurrent neural network predictions in sentiment analysis. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 159–168. C. F. Baker, C. J. Fillmore, and J. B. Lowe. 2006. The berkeley framenet project. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, pages 86–90. Jasmijn Bastings and Katja Filippova. 2020. The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 149–155. David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. 2017. Network dissection: Quantifying interpretability of deep visual representations. In *2017 IEEE Conference on Computer Vision and* Pattern Recognition, pages 3319–3327. Brandon Carter, Jonas Mueller, Siddhartha Jain, and David Gifford. 2019. What made you do this? understanding black-box decisions with sufficient input subsets. In *Proceedings of the Twenty-Second* International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pages 567–576. PMLR. Jiaoyan Chen, Freddy Lecue, Jeff Z. Pan, Ian Horrocks, and Huajun Chen. 2018. Knowledge-based Transfer Learning Explanation. In *Proc. of the International* Conference on Principles of Knowledge Representation and Reasoning (KR2018), pages 349–358. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multipooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 167–176. Dipanjan Das, Desai Chen, André F. T. Martins, Nathan Schneider, and Noah A. Smith. 2014. Frame-semantic parsing. *Computational Linguistics*, 40(1):9–56. Nicola De Cao, Michael Sejr Schlichtkrull, Wilker Aziz, and Ivan Titov. 2020. How do decisions emerge across layers in neural models? interpretation with differentiable masking. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing, pages 3243–3255. Songgaojun Deng, Huzefa Rangwala, and Yue Ning. 2020. Dynamic knowledge graph based multi-event forecasting. In *Proc. of KDD*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186. Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2020. ERASER: A benchmark to evaluate rationalized NLP models. 
In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4443–4458. Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of neural models make interpretations difficult. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3719–3728. Amirata Ghorbani, James Wexler, James Y Zou, and Been Kim. 2019. Towards automatic concept-based explanations. In *Advances in Neural Information* Processing Systems, volume 32. Yong Guan, Shaoru Guo, Ru Li, Xiaoli Li, and Hongye Tan. 2021a. Frame semantic-enhanced sentence modeling for sentence-level extractive text summarization. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 4045–4052, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Yong Guan, Shaoru Guo, Ru Li, Xiaoli Li, and Hu Zhang. 2021b. Integrating semantic scenario and word relations for abstractive sentence summarization. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 2522–2529, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Shaoru Guo, Ru Li, Hongye Tan, Xiaoli Li, Yong Guan, Hongyan Zhao, and Yueping Zhang. 2020. A framebased sentence representation for machine reading comprehension. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 891–896, Online. Association for Computational Linguistics. John A Hartigan and Manchek A Wong. 1979. Algorithm as 136: A k-means clustering algorithm. Journal of the royal statistical society. series c (applied statistics), 28(1):100–108. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: Decoding-enhanced bert with disentangled attention. In *International* Conference on Learning Representations. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. *Neural Computation*, 9(8):1735–1780. Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In *Proceedings of ACL-08: HLT*, pages 254–262. Zhongtao Jiang, Yuanzhe Zhang, Zhao Yang, Jun Zhao, and Kang Liu. 2021. Alignment rationale for natural language inference. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 5372– 5387. Yiming Ju, Yuanzhe Zhang, Zhixing Tian, Kang Liu, Xiaohuan Cao, Wenting Zhao, Jinlong Li, and Jun Zhao. 2021. Enhancing multiple-choice machine reading comprehension by punishing illogical interpretations. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 3641–3652. Freddy Lecue and Jeff Z. Pan. 2013. Predicting Knowledge in An Ontology Stream. In Proc. of the 23rd International Joint Conference on Artificial Intelligence (IJCAI 2013). Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107–117. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Understanding neural networks through representation erasure. *arXiv preprint arXiv:1612.08220*. Manling Li, Alireza Zareian, Qi Zeng, Spencer Whitehead, Di Lu, Heng Ji, and Shih-Fu Chang. 2020. Cross-media structured common space for multimedia event extraction. 
In *Proceedings of the 58th Annual Meeting of the Association for Computational* Linguistics, pages 2557–2568. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In *Proceedings of the 51st Annual Meeting of* the Association for Computational Linguistics, pages 73–82. Zachary Lipton. 2016. The mythos of model interpretability. *Communications of the ACM*, 61. Chaitanya Malaviya, Chandra Bhagavatula, Antoine Bosselut, and Yejin Choi. 2020. Commonsense knowledge base completion with structural and semantic context. In *Proceedings of AAAI*. Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. *Artificial Intelligence*, 267:1–38. Jesse Mu and Jacob Andreas. 2020. Compositional explanations of neurons. In *Advances in Neural Information Processing Systems*, volume 33, pages 17153– 17163. Dong Nguyen. 2018. Comparing automatic and human evaluation of local explanations for text classification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1069–1078. Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In *Proceedings of the 2016 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 300–309. Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In *Proceedings of the 53rd Annual* Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 365–371. J. Z. Pan, G. Vetere, J.M. Gomez-Perez, and H. Wu, editors. 2017. *Exploiting Linked Data and Knowledge* Graphs for Large Organisations. Springer. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "why should i trust you?": Explaining the predictions of any classifier. In *Proceedings* of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, page 1135–1144. Julien Romero, Simon Razniewski, Koninika Pal, Jeff Z. Pan, Archit Sakhadeo, and Gerhard Weikum. 2019. Commonsense Properties from Query Logs and Question Answering Forums. In *Proc. of 28th ACM International Conference on Information and Knowledge* Management (CIKM 2019), pages 1411–1420. Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. In *International* conference on machine learning, pages 3145–3153. Robert Speer, Joshua Chin, and Catherine Havasi. 2016. Conceptnet 5.5: An open multilingual graph of general knowledge. In AAAI Conference on Artificial Intelligence. Ming Tan, Bing Xiang, and Bowen Zhou. 2015. Lstmbased deep learning models for non-factoid answer selection. *CoRR*, abs/1511.04108. Meihan Tong, Bin Xu, Shuai Wang, Yixin Cao, Lei Hou, Juanzi Li, and Jun Xie. 2020. Improving event detection via open-domain trigger knowledge. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 5887–5897. Saurabh Varshneya, Antoine Ledent, Robert A. Vandermeulen, Yunwen Lei, Matthias Enders, Damian Borth, and Marius Kloft. 2021. Learning interpretable concept groups in cnns. In *Proceedings* of the Thirtieth International Joint Conference on Artificial Intelligence, pages 1061–1067. Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, and Manaal Faruqui. 
2019. Attention interpretability across nlp tasks. *arXiv preprint* arXiv:1909.11218. Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus. *Linguistic Data Consortium*. Xiaozhi Wang, Xu Han, Zhiyuan Liu, Maosong Sun, and Peng Li. 2019. Adversarial training for weakly supervised event detection. In *Proceedings of the* 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 998–1008. Xiaozhi Wang, Ziqi Wang, Xu Han, Wangyi Jiang, Rong Han, Zhiyuan Liu, Juanzi Li, Peng Li, Yankai Lin, and Jie Zhou. 2020. MAVEN: A Massive General Domain Event Detection Dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 1652–1671. Ziqi Wang, Xiaozhi Wang, Xu Han, Yankai Lin, Lei Hou, Zhiyuan Liu, Peng Li, Juanzi Li, and Jie Zhou. 2021. CLEVE: Contrastive Pre-training for Event Extraction. In *Proceedings of the 59th Annual Meeting* of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 6283–6297. Chih-Kuan Yeh, Been Kim, Sercan Arik, Chun-Liang Li, Tomas Pfister, and Pradeep Ravikumar. 2020. On completeness-aware concept-based explanations in deep neural networks. In *Advances in Neural Information Processing Systems*, volume 33, pages 20554– 20565. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section Limitations ✓ A2. Did you discuss any potential risks of your work? Section 5 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 5.1 ✓ B1. Did you cite the creators of artifacts you used? Section 5.1 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 5.1 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 5 ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 5 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 5 ## C ✓ **Did You Run Computational Experiments?** Section 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 
Section 5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 5 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
pacheco-etal-2023-interactive
Interactive Concept Learning for Uncovering Latent Themes in Large Text Collections
https://aclanthology.org/2023.findings-acl.313
Experts across diverse disciplines are often interested in making sense of large text collections. Traditionally, this challenge is approached either by noisy unsupervised techniques such as topic models, or by following a manual theme discovery process. In this paper, we expand the definition of a theme to account for more than just a word distribution, and include generalized concepts deemed relevant by domain experts. Then, we propose an interactive framework that receives and encodes expert feedback at different levels of abstraction. Our framework strikes a balance between automation and manual coding, allowing experts to maintain control of their study while reducing the manual effort required.
# Interactive Concept Learning For Uncovering Latent Themes In Large Text Collections Maria Leonor Pacheco1,2 Tunazzina Islam3 Lyle Ungar4 Ming Yin3 **Dan Goldwasser**3 1Microsoft Research 2University of Colorado Boulder 3Purdue University 4University of Pennsylvania [email protected] [email protected] {islam32,mingyin,dgoldwas}@purdue.edu ## Abstract Experts across diverse disciplines are often interested in making sense of large text collections. Traditionally, this challenge is approached either by noisy unsupervised techniques such as topic models, or by following a manual theme discovery process. In this paper, we expand the definition of a theme to account for more than just a word distribution, and include generalized concepts deemed relevant by domain experts. Then, we propose an interactive framework that receives and encodes expert feedback at different levels of abstraction. Our framework strikes a balance between automation and manual coding, allowing experts to maintain control of their study while reducing the manual effort required. ## 1 Introduction Researchers and practitioners across diverse disciplines are often interested making sense of large text collections. Thematic analysis is one of the most common qualitative research methods used to approach this challenge, and it can be understood as a form of pattern recognition in which the themes (or codes) that emerge from the data become the categories for analysis (Braun and Clarke, 2012; Roberts et al., 2019). In standard practice, researchers bring their own objectives or questions and identify the relevant themes or patterns recognized while analyzing the data, potentially grounding them in a relevant theory or framework. Themes in thematic analysis are broadly defined as "patterned responses or meaning" derived from the data, which inform the research question. With the explosion of data and the rapid development of automated techniques, disciplines that traditionally relied on qualitative methods for the analysis of textual content are turning to computational methods (Brady, 2019; Hilbert et al., 2019). Topic modeling has long been the go-to NLP technique to identify emerging themes from text collections (Blei et al., 2003; Boyd-Graber et al., 2017; 5059 Baden et al., 2022). Despite its wide adoption, topic modeling does not afford the same flexibility and representation power of qualitative techniques. For this reason, many efforts have been dedicated to understanding the ways in which topic models can be flawed (Mimno et al., 2011), and evaluating their coherence and quality (Stevens et al., 2012; Lau et al., 2014; Röder et al., 2015). More recently, Hoyle et al. (2021) showed that human judgements and accepted metrics of topic quality and coherence do not always agree. Given the noisy landscape surrounding topic modeling, manual qualitative methods are still prevalent across fields for analyzing nuanced and verbally complex data (Rose and Lennerholt, 2017; Lauer et al., 2018; Antons et al., 2020). Human-in-the-loop topic modeling approaches aim to address these issues by allowing experts to correct and influence the output of topic models. Given that topics in topic models are defined as distributions over words, the feedback received using these approaches is usually limited to identifying representative words and imposing constraints between words (Hu et al., 2011; Lund et al., 2017; Smith et al., 2018). 
In this paper, we argue that themes emerging from a document collection should not just be defined as a word distribution (similar to a topic model), but as a distribution over generalized concepts that can help us explain them. We build on the definition put forward by Braun and Clarke (2012), where themes are latent patterned meanings that emerge from the data, and supporting concepts serve as a way to explain themes using theoretical frameworks that are deemed relevant by domain experts. For example, emerging themes in a dataset about Covid-19 can be characterized by the strength of their relationship to stances about the covid vaccine and the moral framing of relevant entities (e.g. The theme "Government distrust" is strongly correlated to an anti-vax stance and frames *Dr. Fauci* as an entity enabling *cheating*). This representation of a theme aligns more closely with qualitative practices, as experts can introduce their pre-existing knowledge about the domain. Moreover, higher-level abstractions expand the capabilities of experts to correct and influence theme discovery, as it allows them to formulate concepts to generalize from observations to new examples (Rogers and McClelland, 2004), and to deductively draw inferences via conceptual rules and statements (Johnson, 1988). Following this rationale, we suggest a new computational approach to support and enhance standard qualitative practices for content analysis. We approach both inductive thematic analysis (i.e. identifying the relevant themes that emerge from the data and developing the code-book), and deductive thematic analysis (i.e. identifying the instances where a known theme is observed). To support this process, we allow researchers to shape the space of themes given machine generated candidates. Then, we allow them to provide feedback over machine judgments that map text to themes using relevant conceptual frameworks. To showcase our approach, we look at the task of characterizing social media discussions around topics of interest to the computational social science community. Namely, we consider two distinct case studies: The covid-19 vaccine debate in the US, and the immigration debate in the US, the UK and Europe. For each case, the qualitative researchers use different theories to ground theme discovery, each associated with a different set of concepts. For the covid-19 vaccine debate, the theme discovery process is grounded using vaccination stances and morality frames (Roy et al., 2021; Pacheco et al., 2022). For the immigration debates, the theme discovery is grounded using three framing typologies: narrative frames (Iyengar, 1991), policy frames (Card et al., 2015) and immigration frames (Benson, 2013; Hovden and Mjelde, 2019). All of these choices build on previous work and were validated by the qualitative researchers. From a machine learning perspective, these two case studies could be regarded as completely different tasks and have been approached independently in previous work. The reason for this is the data, the context, and the target labels (both the emerging themes and the supporting concepts) are different for each scenario. To aid experts in theme discovery, we propose an iterative two-stage machine-in-the-loop framework. In the first stage, we provide experts with an automated partition of the data, ranked example instances, and visualizations of the concept distribution. Then, we have a group of experts work together to explore the partitions, code emerging patterns and identify coherent themes. 
Once themes are identified, we have the experts select representative examples, write down additional examples and explanatory phrases, and explain themes using the set of available concepts. In the second stage, we incorporate the expert feedback using a neuro-symbolic mapping procedure. The *symbolic* part allows us to explicitly model the dependencies between concepts and the emerging themes using weighted logical formulae (e.g. w : policy_frame(economic) ⇒ theme(economic_migrants). These rules can be interpreted as soft constraints whose weights are learned from the feedback provided by the experts. The *neural* part allows us to maintain a distributed representation of the data points and themes, which facilitates the live exploration of the data based on distances and similarities, and provides a feature representation for learning the rule weights. After the mapping stage concludes, some instances will be assigned to the identified themes, and the remaining instances will be re-partitioned for a consecutive discovery stage. We conducted extensive evaluations of the different components, design choices, and stages in our methodology. We showed that our framework allows experts to uncover a set of themes that cover a large portion of the data, and that the resulting mapping from tweets to themes is fairly accurate with respect to human judgements. While we focused on polarized discussions, our framework generalizes to any content analysis study where the space of relevant themes is not known in advance. ## 2 Related Work This paper suggests a novel approach for identifying themes emerging from text collections. The notion of a theme presented in this work is strongly related to topic models (Blei et al., 2003). However, unlike latent topics that are defined as word distributions, our goal is to provide a richer representation that more strongly resembles qualitative practices by connecting the themes to general concepts that help explain them. For example, when identifying themes emerging from polarized discussions in social media, we look at conceptual frameworks such as moral foundations theory (Haidt and Graham, ![2_image_0.png](2_image_0.png) 2007; Amin et al., 2017; Chan, 2021) and framing theory (Entman, 1993; Chong and Druckman, 2007; Morstatter et al., 2018). Our work is conceptually similar to recent contributions that characterize themes and issue-specific frames in data, either by manually developing a code-book and annotating the data according to it (Boydstun et al., 2014; Mendelsohn et al., 2021), or by using data-driven methods (Demszky et al., 2019; Roy and Goldwasser, 2021). Unlike these approaches, our work relies on interleaved humanmachine interaction rounds, in which humans can identify and explain themes from a set of candidates suggested by the model, as well as diagnose and adapt the model's ability to recognize these themes in documents. This work is part of a growing trend in NLP that studies how humanmachine collaboration can help improve automated language analysis (Wang et al., 2021). In that space, two lines of works are most similar to ours. Interactive topic models (Hu et al., 2011; Lund et al., 2017; Smith et al., 2018) allow humans to adapt the identified topics, but the feedback is usually limited to lexical information. 
Open Framing (Bhatia et al., 2021) allows humans to identify and name frames based on the output of topic models, but lacks our model's ability for sustained interactions that help shape the theme space, as well as the explanatory power of our neuro-symbolic representation. ## 3 The Framework We propose an iterative two-stage framework that combines ML/NLP techniques, interactive interfaces and qualitative methods to assist experts in characterizing large textual collections. We define large textual collections as repositories of textual instances (e.g. tweets, posts, documents), where each instance is potentially associated with a set of annotated or predicted concepts. In the first stage, our framework automatically proposes an initial partition of the data, such that instances that are thematically similar are clustered together. We provide experts with an interactive interface equipped with a set of *exploratory operations* that allows them to evaluate the quality of the discovered partitions, as well as to further explore and partition the space by inspecting individual examples, finding similar instances, and using open text queries. As experts interact with the data through the interface, they following an inductive thematic analysis approach to identify and code the patterns that emerge within the partitions (Braun and Clarke, 2012). Next, they group the identified patterns into general themes, and instantiate them using the interface. Although intuitively we could expect a single partition to result in a single theme, note that this is not enforced. Experts maintain full freedom as to how many themes they instantiate, if any. Once a theme is created, experts are provided with a set of *intervention operations* to explain the themes using natural language, select good example instances, write down additional examples, and input or correct supporting concepts to characterize the theme assignments. The full set of operations are listed in Tab. 1 and demonstrated in App. A.1. In the second stage, our framework finds a mapping between the full set of instances and the themes instantiated by the experts. We use the information contributed by the experts in the form of examples and concepts, and learn to map instances to themes using our neuro-symbolic procedure. We allow instances to remain unassigned if there is not a good enough match, and in this case, a consecutive portioning step is done. We refer to instances that are mapped to themes as "named partitions" and unassigned proposed partitions as "unnamed partitions". Once instances are assigned to themes, experts have access to a comprehensive visual analysis of the state of the system. The main goal of this analysis is to appreciate the trade-off between coverage (how many instances we can account for with the discovered themes) and quality (how good we are at mapping instances to themes). An illustration of the framework can be observed in Fig. 1. Additional details about the coverage and quality analysis are presented in the experimental section. Below, we discuss the representation of themes and instances, the protocol followed for interaction, and the mapping and re-partitioning procedures. Our visual interface and our analysis code have been made available to the community1. Representing Themes and Instances We represent example instances and explanatory phrases using their S-BERT embedding (Reimers and Gurevych, 2019). 
To measure the closeness between an instance and a theme, we compute the cosine similarity between the instance and all of the explanatory phrases and examples for the theme, and take the maximum similarity score among them. Our framework is agnostic of the representation used: the underlying embedding objective and the scoring function can easily be replaced.

| Exploratory Operations | Intervention Operations |
|---|---|
| Finding Partitions | Adding, Editing and Removing Themes |
| Text-based Queries | Adding and Removing Examples |
| Finding Similar Instances | Adding or Correcting Concepts |
| Listing Themes and Instances | |
| Visualizing | |

Table 1: Interactive Operations

Interaction Protocol We follow a simple protocol where three human coders work together using the operations described above to discover themes in large textual corpora. In addition to the three coders, each interactive session is guided by one of the authors of the paper, who makes sure the coders are adhering to the process outlined here. To initialize the system, the coders will start by using the partitioning operation to find ten initial partitions of roughly the same size. During the first session, the coders will inspect the partitions one by one by looking at the examples closest to the centroid. This will be followed by a discussion phase, in which the coders follow an inductive thematic analysis approach to identify repeating patterns and write them down. If one or more cohesive patterns are identified, the experts will create a new theme, name it, and mark a set of good example instances that help in characterizing the named theme. When a pattern is not obvious, coders will explore instances similar to the different statements found. Whenever the similarity search results in a new pattern, the coders will create a new theme, name it, and mark a set of good example instances that helped in characterizing the named theme. Next, the coders will look at the local theme explanations and have the option to enhance each theme with additional phrases. Note that each theme already contains a small set of representative instances, which are marked as "good" in the previous step. In addition to contributing "good" example phrases, coders will have the option to contribute some "bad" example phrases to push the representation of the theme away from statements that have high lexical overlap with the good examples, but different meaning. Finally, coders will examine each exemplary instance and phrase for the set of symbolic concepts (e.g. stance, moral frames). In cases where the judgement is perceived as wrong, the coders will be allowed to correct it. In this paper, we assume that the textual corpora include a set of relevant concepts for each instance. In future work, we would like to explore the option of letting coders define concepts on the fly.

1 https://gitlab.com/mlpacheco/machine-in-the-loop-concepts
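As a concrete reference for the representation and scoring described above, and for the initial partitioning step used in the protocol, the sketch below embeds instances with an S-BERT model, clusters them for the initial partitions, and scores instance–theme closeness as the maximum cosine similarity over a theme's examples and phrases. The checkpoint name, K-Means as the clustering choice, and the toy data are assumptions for illustration (the paper only specifies S-BERT embeddings and cites both K-Means and HDBSCAN), not the authors' released code.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative S-BERT checkpoint; any sentence-transformers model would do here.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

tweets = [
    "I got my second dose today, feeling great!",
    "Natural immunity is all the protection you need",
    "Booked my vaccine appointment for next week",
]
tweet_emb = encoder.encode(tweets)

# Initial exploration: partition instances into thematically similar clusters.
partitions = KMeans(n_clusters=2, random_state=0).fit(tweet_emb).labels_

# A theme is represented by its "good" example instances and explanatory phrases.
theme_phrases = {
    "Got The Vax": ["I just got vaccinated", "received my covid shot"],
    "Natural Immunity": ["natural immunity protects you better than the vaccine"],
}
theme_embs = {name: encoder.encode(phrases) for name, phrases in theme_phrases.items()}

def theme_score(instance_emb: np.ndarray, phrase_embs: np.ndarray) -> float:
    """Closeness of an instance to a theme: max cosine similarity over the
    theme's examples and explanatory phrases."""
    return float(cosine_similarity(instance_emb[None, :], phrase_embs).max())

for i, text in enumerate(tweets):
    scores = {name: theme_score(tweet_emb[i], emb) for name, emb in theme_embs.items()}
    best = max(scores, key=scores.get)
    print(f"partition={partitions[i]} | {text} -> {best} ({scores[best]:.3f})")
```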
Mapping and Re-partitioning Each interactive session will be followed by a mapping and re-partitioning stage. First, we will perform the mapping step, in which we assign instances to the themes discovered during interaction. We do not assume that experts will have discovered the full space of latent themes. For this reason, we do not try to assign a theme to each and every instance. We expect that the set of themes introduced by the human experts at each round of interaction will cover a subset of the total instances available. Following this step, we will re-partition all the unassigned instances for a subsequent round of interaction.

We use DRaiL (Pacheco and Goldwasser, 2021), a neuro-symbolic modeling framework, to design a mapping procedure. Our main goal is to condition new theme assignments not only on the embedding distance between instances and good/bad examples, but also to leverage the additional judgements provided by experts using the "Adding or Correcting Concepts" procedure. For example, when analyzing the corpus about the Covid-19 vaccine, experts could point out that 80% of the good examples for the theme *Natural Immunity is Effective* have a clear anti-vaccine stance. We could use this information to introduce inductive bias into our mapping procedure, and potentially capture cases where the embedding distance does not provide enough information. DRaiL uses weighted first-order logic rules to express decisions and dependencies between different decisions. We introduce the following rules:

$$t_0 - t_n : \text{Inst}(i) \Rightarrow \text{Theme}(i, t)$$
$$a_0 - a_m : \text{Inst}(i) \Rightarrow \text{Concept}(i, c)$$
$$c_0 - c_{n*m} : \text{Inst}(i) \land \text{Concept}(i, c) \Rightarrow \text{Theme}(i, t)$$
$$c'_0 - c'_{n*n} : \text{Inst}(i) \land \text{Theme}(i, t) \land (t \neq t') \Rightarrow \lnot\text{Theme}(i, t')$$

The first sets of rules, t0 − tn and a0 − am, map instances to themes and concepts, respectively. We create one template for each theme t and concept c, and they correspond to binary decisions (e.g. whether instance i mentions theme t). Then, we introduce two sets of soft constraints: c0 − cn∗m encode the dependencies between each concept and theme assignment (e.g. the likelihood of the theme *Natural Immunity is Effective* given that an instance has the concept *anti-vax*), and c′0 − c′n∗n discourage an instance from having more than one theme assignment. For each rule, we will learn a weight that captures the strength of that rule (i.e. its likelihood of being active for a given input). Then, a combinatorial inference procedure will be run to find the most likely global assignment. Each entity and relation in DRaiL is tied to a neural architecture that is used to learn its weights. In this paper, we use a BERT encoder (Devlin et al., 2019) for all rules. To generate data for learning the DRaiL model, we take the K = 100 closest instances for each good/bad example provided by the experts. Good examples will serve as positive training data. For negative training data, we take the contributed bad examples, as well as good examples for other themes and concepts. Once the weights are learned, we run the inference procedure over the full corpus.
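The sketch below is not DRaiL itself, but a simplified stand-in for how the rule weights above could be combined at inference time: each candidate theme receives a score from the embedding-based rule plus the concept–theme compatibility rules, instances below a threshold remain unassigned, and the mutual-exclusivity constraint is enforced by keeping a single argmax theme. All weights, theme names, and the threshold are illustrative assumptions.

```python
THEMES = ["Natural Immunity is Effective", "Got The Vax", "Vax Side Effects"]
UNASSIGNED = "Unknown"

# Learned rule weights (illustrative values): how compatible each expert-provided
# concept is with each theme, mirroring the c_0 - c_{n*m} soft constraints.
concept_theme_weight = {
    ("anti-vax", "Natural Immunity is Effective"): 1.2,
    ("pro-vax", "Got The Vax"): 0.9,
    ("anti-vax", "Got The Vax"): -0.7,
}

def map_instance(theme_similarity: dict[str, float],
                 concepts: list[str],
                 threshold: float = 0.5) -> str:
    """Assign at most one theme per instance (the c'_0 - c'_{n*n} constraint)."""
    scores = {}
    for theme in THEMES:
        # Base rule t_0 - t_n: embedding similarity between instance and theme.
        score = theme_similarity.get(theme, 0.0)
        # Soft constraints c_0 - c_{n*m}: concept/theme compatibility.
        score += sum(concept_theme_weight.get((c, theme), 0.0) for c in concepts)
        scores[theme] = score
    best_theme = max(scores, key=scores.get)
    return best_theme if scores[best_theme] >= threshold else UNASSIGNED

# Example: an instance predicted as anti-vax, closest to the natural-immunity theme.
print(map_instance({"Natural Immunity is Effective": 0.41, "Got The Vax": 0.12},
                   concepts=["anti-vax"]))
```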
## 4 Case Studies

We explore two case studies involving discussions on social media: (1) the Covid-19 vaccine discourse in the US, and (2) the immigration discourse in the US, the UK and the EU. For the Covid-19 case, we build on the corpus of 85K tweets released by Pacheco et al. (2022). All tweets in this corpus were posted by users located in the US, are uniformly distributed between Jan. and Oct. 2021, and contain predictions for vaccination stance (e.g. pro-vax, anti-vax) and morality frames (e.g. fairness/cheating and their actors/targets) (Haidt and Graham, 2007; Roy et al., 2021). For the immigration case, we build on the corpus of 2.66M tweets released by Mendelsohn et al. (2021). All tweets in this corpus were posted by users located in the US, the UK and the EU, written between 2018 and 2019, and contain predictions for three different framing typologies: narrative frames (e.g. episodic, thematic) (Iyengar, 1991), generic policy frames (e.g. economic, security and defense, etc.) (Card et al., 2015), and immigration-specific frames (e.g. victim of war, victim of discrimination, etc.) (Benson, 2013; Hovden and Mjelde, 2019). Additional details about the datasets and framing typologies can be found in the original publications.

Our main goal is to evaluate whether experts can leverage our framework to identify prominent themes in the corpora introduced above. We recruited a group of six experts in Computational Social Science, four male and two female, within the ages of 25 and 45. The group of experts included advanced graduate students, postdoctoral researchers and faculty. Our studies are IRB approved, and we followed their protocols. For each corpus, we performed two consecutive sessions with three experts following the protocol outlined in Sec. 3. To evaluate consistency, we ran an additional two sessions with a different group of experts for the Covid-19 dataset. Each session lasted a total of one hour. In App. A.2, A.3 and A.4, we include large tables enumerating the resulting themes and describing in detail all of the patterns identified and coded by the experts at each step of the process.

![5_image_0.png](5_image_0.png) ![5_image_1.png](5_image_1.png) ![5_image_2.png](5_image_2.png)

Figure 2: (a) Covid-19 Coverage; (b) Immigration Coverage. Per-quartile results (≤ Q1, ≤ Q2, ≤ Q3, All) by case study, iteration, and grounding method.

Coverage vs. Mapping Quality We evaluated the trade-off between coverage (how many tweets we can account for with the discovered themes) and mapping quality (how good we are at mapping tweets to themes). Results are outlined in Fig. 2. To do this evaluation, we sub-sampled a set of 200 mapped tweets for each scenario, uniformly distributed across themes and their proximity to the theme embedding, and validated their assignments manually. The logic behind sampling across different proximities is that we expect mapping performance to degrade the more semantically different the tweets are from the "good" examples and phrases provided by the experts. To achieve this, we look at evaluation metrics at different thresholds using the quartiles of the proximity/similarity distribution. Results for Q1 correspond to the 25% most similar instances, for Q2 to the 50% most similar instances, and for Q3 to the 75% most similar instances. Note that these are continuous ranges and the quartiles serve as thresholds.
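As a reference for this thresholded evaluation, the sketch below computes accuracy over a manually validated sample at each quartile threshold. The similarity values, labels, and data format are toy assumptions, not the actual annotation data.

```python
import numpy as np

# Each validated record: similarity of the tweet to its assigned theme, and
# whether the manual check agreed with the assignment (illustrative values).
similarities = np.array([0.82, 0.74, 0.66, 0.51, 0.43, 0.39, 0.35, 0.28])
is_correct = np.array([1, 1, 1, 1, 0, 1, 0, 0], dtype=bool)

# Quartile thresholds over the similarity distribution: "<= Q1" keeps the 25%
# most similar instances, "<= Q2" the 50% most similar, "<= Q3" the 75% most similar.
q3, q2, q1 = np.percentile(similarities, [25, 50, 75])
for name, thr in [("<= Q1", q1), ("<= Q2", q2), ("<= Q3", q3), ("All", similarities.min())]:
    mask = similarities >= thr
    print(f"{name}: n={int(mask.sum())}, accuracy={is_correct[mask].mean():.2f}")
```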
To evaluate the impact of our neuro-symbolic mapping procedure (NeSym), we compared it against a nearest-neighbors (NNs) approach that does not leverage conceptual frameworks and looks only at the language embedding of the tweets and the theme examples and explanatory phrases. For the first iteration of Covid-19, we find that the approximate performance of the NeSym mapping at Q1 is better (+2 points) than the approximate full mapping for NNs, while increasing coverage 1.5x. For immigration, we have an even more drastic result: an approximate 15 point increase at a similar coverage gain. In both cases, experts were able to increase the number of themes in subsequent iterations. While the coverage increased in the second iteration for Covid, it decreased slightly for Immigration. For Covid, most of the coverage increase can be attributed to a single theme (*Vax Efforts Progression*), which accounts for 20% of the mapped data. In the case of Covid, this large jump in coverage is accompanied by a slight decrease in mapping performance. In the case of Immigration, we have the opposite effect: as the coverage decreases, the performance improves, suggesting that the mapping gets stricter. This confirms the expected trade-off between coverage and quality. Depending on the needs of the final applications, experts could adjust their confidence thresholds.

To perform a fine-grained error analysis, we looked at the errors made by the model using manual validation. In Fig. 3 we show the confusion matrix for the Covid case. We find that the performance varies a lot, with some themes being more accurate than others. In some cases, we are good at capturing the general meaning of the theme but fail at grasping the stance. For example, Anti Vax Spread Missinfo gets confused with Pro Vax Lie, where the difference is in who is doing the lying. In other cases, we find that themes that are close in meaning have some overlap (e.g. Alt Treatments with *Vax Doesn't Work*). We also find that unambiguous, neutral themes like Vax Appointments, *Got The Vax* and *Vax Efforts Progression* have the highest performance. Lastly, we observe that for some errors, none of the existing themes are appropriate (last row: *Other*), suggesting that there are still undiscovered themes. Upon closer inspection, we found that the majority of these tweets are among the most distant from the theme embedding. The full distribution of *Other* per interval can be observed in App. A.6. We include the confusion matrix for immigration in App. A.6.

![6_image_0.png](6_image_0.png)

Figure 3: **Confusion matrix for Covid after the second iteration.** Values are normalized over the predicted themes (cols), and sorted from best to worst.

| Iter. | Ground. Method | # Thm (Covid) | Cover (Covid) | Purity (Covid) | # Thm (Imm.) | Cover (Imm.) | Purity (Imm.) |
|---|---|---|---|---|---|---|---|
| 1 | LDA (Var. Bayes) | 9 | 39.8 | 63.72 | 13 | 26.8 | 57.14 |
| 1 | LDA (Gibbs) | | 79.8 | 63.90 | | 55.9 | 54.86 |
| 1 | NNs | | 9.3 | 68.81 | | 11.1 | 58.44 |
| 1 | NeSym | | 54.3 | 69.97 | | 65.8 | 61.72 |
| 2 | LDA (Var. Bayes) | 16 | 26.1 | 65.02 | 19 | 18.3 | 57.94 |
| 2 | LDA (Gibbs) | | 73.1 | 65.14 | | 46.8 | 59.25 |
| 2 | NeSym | | 84.3 | 65.50 | | 59.6 | 59.19 |

Table 2: Number of themes (# Thm), coverage (Cover) and average concept purity (Purity) per iteration and grounding method. The LDA rows are non-interactive baselines.

Given our hypothesis that themes can be characterized by the strength of their relationship to high-level concepts, we consider mappings to be better if they are more cohesive. In the Covid case, we expect themes to have strong relationships to vaccination stance and morality frames. In the Immigration case, we expect themes to have strong relationships to the framing typologies. To measure this, we define a theme purity metric for each concept. For example, for stance this is defined as:

$$\mathrm{Purity}_{stance}=\frac{1}{N}\sum_{t\in Themes}\max_{s\in Stance}|t\cap s|$$

Namely, we take each theme cluster and count the number of data points from the most common stance value in said cluster (e.g. the number of data points that are *anti-vax*). Then, we take the sum over all theme clusters and divide it by the number of data points. We do this for every concept, and average them to obtain the final averaged concept purity.
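A minimal sketch of this concept-purity computation (the variable names and toy labels below are illustrative):

```python
from collections import Counter

def concept_purity(theme_assignments: list[str], concept_labels: list[str]) -> float:
    """Purity of theme clusters w.r.t. one concept (e.g. stance): the sum over
    themes of the count of the most common concept value, divided by N."""
    per_theme: dict[str, Counter] = {}
    for theme, concept in zip(theme_assignments, concept_labels):
        per_theme.setdefault(theme, Counter())[concept] += 1
    n = len(theme_assignments)
    return sum(counts.most_common(1)[0][1] for counts in per_theme.values()) / n

# Toy example: three tweets mapped to two themes, with predicted stance labels.
themes = ["Got The Vax", "Got The Vax", "Natural Immunity"]
stance = ["pro-vax", "pro-vax", "anti-vax"]
print(concept_purity(themes, stance))  # 1.0: every theme cluster is stance-pure

# The reported score averages this purity over all concepts (stance, moral frames, ...).
```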
In Tab. 2 we show the average concept purity for our mappings at each iteration in the interaction. We can see that the NeSym procedure results in higher purity than the NNs procedure, even when significantly increasing coverage. This is unsurprising, as our method is designed to take advantage of the relationship between themes and concepts. Additionally, we include topic modeling baselines that do not involve any interaction, and find that interactive themes generally result in higher purity partitions than topics obtained using LDA. Details about the steps taken to obtain LDA topics can be found in App. A.5.

Effects of Consecutive Iterations In Fig. 2 we observed different behaviors in subsequent iterations with respect to coverage and performance. To further inspect this phenomenon, we looked at the tweets that shifted predictions between the first and second iterations. Fig. 4 shows this analysis for Immigration. Here, we find that a considerable number of the tweets that were assigned to a theme in the first iteration were unmatched (i.e. moved to *Unknown*) in the second iteration. This behavior explains the decrease in coverage. Upon closer inspection, we found that the majority of these unmatched tweets corresponded to assignments that were in the last and second-to-last intervals with respect to their similarity to the theme embedding. We also observed a non-trivial movement from the Unknown to the new themes (shown in red), as well as some shifts between old themes and new themes that seem reasonable. For example, 1.2% of the total tweets moved from *Role of Western Countries* to *Country of Immigrants*, 1% moved from *Academic Discussions* to *Activism*, and close to 3% of tweets moved from *Trump Policy* and *UK Policy* to *Criticize Anti Immigrant Rhetoric*. This behavior, coupled with the increase in performance observed, suggests that as new themes are added, tweets move to a closer fit. In App. A.7 we include the shift matrix for Covid, as well as the distribution of the unmatched tweets with respect to their semantic similarity to the theme embedding. For Covid, we observe that the increase in coverage is mostly attributed to the addition of the *Vax Efforts Progression* theme, which encompasses all mentions of vaccine development and roll-out. Otherwise, a similar shifting behavior can be observed.

Consistency Between Different Expert Groups To study the subjectivity of experts and its impact on the resulting themes, we performed two parallel studies on the Covid corpus. For each study, a different group of experts performed two rounds of interaction following the protocol outlined in Sec. 3. The side-by-side comparisons of the two studies can be observed in Tab. 3. We find that the second group of experts is able to obtain higher coverage and higher concept purity with a slightly reduced number of themes. To further inspect this phenomenon, as well as the similarities and differences between the two sets of themes, we plot the overlap coefficients between the theme-to-tweet mappings in Fig. 5. We use the Szymkiewicz–Simpson coefficient, which measures the overlap between two finite sets and is defined as:

$$\mathrm{overlap}(X,Y)=\frac{|X\cap Y|}{\min(|X|,|Y|)}$$

![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) ![7_image_2.png](7_image_2.png)

In cases where we observe high overlap, there is nearly a word-for-word match between the two discovered themes. For example, *Vax Lessens Symptoms*, which was surprisingly named the same by the two groups, as well as *Vax Availability* vs. *Vax Appointments*, *Got The Vax* vs. *I Got My Vax*, and Vax Side Effects vs.
*Post Vax Symptoms*. In other cases, we find that different groups came up with themes that have some conceptual (and literal) overlap, but that span different sub-segments of the data. For example, we see that the theme Reasons the US Lags On Vax defined by the second group, has overlap with different related themes in the first group, such as: Gov. Bad Policies, *Vax Efforts Progression*, and *Unjustified Fear of Vax*. Similarly, while the second group defined a single theme *Vax Personal Choice*, the first group attempted to break down references to personal choices between those direclty related to taking the vaccine (Free Choice Vax), and those that use the vaccine as analogies for other topics, like abortion (*Free Choice Other*). While some themes are clearly present in the data and identified by the two groups, we see that subjective decisions can influence the results. The first group was inclined to finer grained themes (with the exception of *Vax Efforts Progression*), while the second group seemed to prefer more general themes. In future work, we would like to study how the variation observed with our approach compares to the variation encountered when experts follow fully manual procedures, as well the impact of the crowd vs. experts working alone. Abstract Themes vs. Word-level Topics To get more insight into the differences between topics based on word distributions and our themes, we looked at the overlap coefficients between topics obtained using LDA and our themes. Fig. 6 shows the coefficients for Immigration. While some overlap exists, the coefficients are never too high (a max. of 0.35). One interesting finding is that most of our themes span multiple related topics. For example, we find that *Trump Policy* has similar overlap with *undocumented_ice_workers_trump*, migrants_migrant_trump_border, and *children_parent_kids_trump*. While all of these topics discuss Trump policies, they make reference to different aspects: workers, the border and families. This supports our hypothesis that our themes are more abstract in nature, and that they capture conceptual similarities beyond word distributions. Overlap coefficients for Gibbs sampling, Covid, and subsequent iterations can be seen in App. A.8. ![8_image_0.png](8_image_0.png) ## 5 Limitations The study presented in this paper has three main limitations. (1) While the design of the framework does not prohibit the utilization of longer textual forms, the two case studies presented deal with short texts. When dealing with longer text forms, we need to consider the cognitive load of having experts look at groups of instances. In our ongoing work, we employ strategies such as summarization, highlighting and other visualization techniques to deal with these challenges. (2) In the studies presented, qualitative researchers worked in groups to identify themes. Our goal in comparing two independent groups of researchers was to evaluate the degree of subjectivity by observing if the themes identified by the two groups would diverge. This setup might not always be realistic, as a lot of times qualitative researchers work independently or asynchronously. In the future, we will explore the effect of the crowd in minimizing subjectivity, as well as the role that the computational tools play in more challenging settings. (3) Finally, we did not include a comprehensive user study to gather input from the experts about their experience with our framework. 
We consider this to be an important next step and we are actively working in this direction. ## 6 Summary We presented a concept-driven framework for uncovering latent themes in text collections. Our framework expands the definitions of a theme to account for theoretically informed concepts that generalize beyond word co-occurrence patterns. We suggest an interactive protocol that allows domain experts to interact with the data and provide feedback at different levels of abstraction. We performed an exhaustive evaluation using two case studies and different groups of experts. Additionally, we contrasted the extracted themes against the output of traditional topic models, and showed that they are better at capturing conceptual similarities that go beyond word distributions. ## Acknowledgements We thank the anonymous reviewers of this paper for all of their feedback. This work was partially supported by an NSF CAREER award IIS-2048001. ## References Avnika B Amin, Robert A Bednarczyk, Cara E Ray, Kala J Melchiori, Jesse Graham, Jeffrey R Huntsinger, and Saad B Omer. 2017. Association of moral values with vaccine hesitancy. *Nature Human Behaviour*, 1(12):873–880. David Antons, Eduard Grünwald, Patrick Cichy, and Oliver Salge. 2020. The application of text mining methods in innovation research: current state, evolution patterns, and development priorities. RD Management, 50. Christian Baden, Christian Pipal, Martijn Schoonvelde, and Mariken A. C. G van der Velden. 2022. Three gaps in computational text analysis methods for social sciences: A research agenda. Communication Methods and Measures, 16(1):1–18. Rodney Benson. 2013. Shaping Immigration News: A French-American Comparison. Communication, Society and Politics. Cambridge University Press. Vibhu Bhatia, Vidya Prasad Akavoor, Sejin Paik, Lei Guo, Mona Jalal, Alyssa Smith, David Assefa Tofu, Edward Edberg Halim, Yimeng Sun, Margrit Betke, Prakash Ishwar, and Derry Tanti Wijaya. 2021. OpenFraming: Open-sourced tool for computational framing analysis of multilingual data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 242–250, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. *J. Mach. Learn.* Res., 3(null):993–1022. Jordan Boyd-Graber, Yuening Hu, and David Minmo. 2017. *Applications of Topic Models*. Amber Boydstun, Dallas Card, Justin H. Gross, Philip Resnik, and Noah A. Smith. 2014. Tracking the development of media frames within and across policy issues. Henry E. Brady. 2019. The challenge of big data and data science. *Annual Review of Political Science*, 22(1):297–323. Virginia Braun and Victoria Clarke. 2012. *Thematic* analysis., pages 57–71. Dallas Card, Amber E. Boydstun, Justin H. Gross, Philip Resnik, and Noah A. Smith. 2015. The media frames corpus: Annotations of frames across issues. In *Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th* International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 438– 444, Beijing, China. Association for Computational Linguistics. Eugene Y Chan. 2021. Moral foundations underlying behavioral compliance during the covid-19 pandemic. Personality and individual differences, 171:110463. Dennis Chong and James N Druckman. 2007. Framing theory. *Annu. Rev. Polit. Sci.*, 10:103–126. 
Dorottya Demszky, Nikhil Garg, Rob Voigt, James Zou, Jesse Shapiro, Matthew Gentzkow, and Dan Jurafsky. 2019. Analyzing polarization in social media: Method and application to tweets on 21 mass shootings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2970– 3005. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Robert M Entman. 1993. Framing: Toward clarification of a fractured paradigm. *Journal of communication*, 43(4):51–58. Jonathan Haidt and Jesse Graham. 2007. When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. *Social Justice Research*, 20(1):98–116. Martin Hilbert, George Barnett, Joshua Blumenstock, Noshir Contractor, Jana Diesner, Seth Frey, Sandra González-Bailón, PJ Lamberson, Jennifer Pan, TaiQuan Peng, Cuihua (Cindy) Shen, Paul E. Smaldino, Wouter van Atteveldt, Annie Waldherr, Jingwen Zhang, and Jonathan J. H. Zhu. 2019. Computational communication science| computational communication science: A methodological catalyzer for a maturing discipline. *International Journal of Communication*, 13(0). Jan Fredrik Hovden and Hilmar Mjelde. 2019. Increasingly controversial, cultural, and political: The immigration debate in scandinavian newspapers 1970–2016. *Javnost - The Public*, 26(2):138–157. Alexander Hoyle, Pranav Goel, Andrew Hian-Cheong, Denis Peskov, Jordan Boyd-Graber, and Philip Resnik. 2021. Is automated topic model evaluation broken? the incoherence of coherence. In *Advances in Neural Information Processing Systems*, volume 34, pages 2018–2033. Curran Associates, Inc. Yuening Hu, Jordan Boyd-Graber, and Brianna Satinoff. 2011. Interactive topic modeling. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 248–257, Portland, Oregon, USA. Association for Computational Linguistics. Shanto. Iyengar. 1991. Is anyone responsible? : how television frames political issues. American politics and political economy series. University of Chicago Press, Chicago. Xin Jin and Jiawei Han. 2010. *K-Means Clustering*, pages 563–564. Springer US, Boston, MA. Ralph H. Johnson. 1988. Gilbert harman change in view: Principles of reasoning (cambridge, ma: Mit press 1986). pp. ix 147. *Canadian Journal of Philosophy*, 18(1):163–178. Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 530–539, Gothenburg, Sweden. Association for Computational Linguistics. Claire Lauer, Eva Brumberger, and Aaron Beveridge. 2018. Hand collecting and coding versus data-driven methods in technical and professional communication research. IEEE Transactions on Professional Communication, 61(4):389–408. Jeffrey Lund, Connor Cook, Kevin Seppi, and Jordan Boyd-Graber. 2017. Tandem anchoring: a multiword anchor approach for interactive topic modeling. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 896–905, Vancouver, Canada. Association for Computational Linguistics. Andrew Kachites McCallum. 2002. Mallet: A machine learning for language toolkit. Http://mallet.cs.umass.edu. Leland McInnes, John Healy, and Steve Astels. 2017. hdbscan: Hierarchical density based clustering. The Journal of Open Source Software, 2(11):205. Julia Mendelsohn, Ceren Budak, and David Jurgens. 2021. Modeling framing in immigration discourse on social media. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2219–2263, Online. Association for Computational Linguistics. David Mimno, Hanna Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. In *Proceedings* of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 262–272, Edinburgh, Scotland, UK. Association for Computational Linguistics. Fred Morstatter, Liang Wu, Uraz Yavanoglu, Stephen R. Corman, and Huan Liu. 2018. Identifying framing bias in online news. *Trans. Soc. Comput.*, 1(2):5:1– 5:18. Maria Leonor Pacheco and Dan Goldwasser. 2021. Modeling content and context with deep relational learning. *Transactions of the Association for Computational Linguistics*, 9:100–119. Maria Leonor Pacheco, Tunazzina Islam, Monal Mahajan, Andrey Shor, Ming Yin, Lyle Ungar, and Dan Goldwasser. 2022. A holistic framework for analyzing the COVID-19 vaccine debate. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5821–5839, Seattle, United States. Association for Computational Linguistics. Radim Rehurek and Petr Sojka. 2011. Gensim–python framework for vector space modelling. NLP Centre, Faculty of Informatics, Masaryk University, Brno, Czech Republic, 3(2). Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Kate Roberts, Anthony Dowell, and Jing-Bao Nie. 2019. Attempting rigour and replicability in thematic analysis of qualitative research data; a case study of codebook development. *BMC Medical Research Methodology*, 19. Michael Röder, Andreas Both, and Alexander Hinneburg. 2015. Exploring the space of topic coherence measures. In *Proceedings of the Eighth ACM International Conference on Web Search and Data Mining*, WSDM '15, page 399–408, New York, NY, USA. Association for Computing Machinery. T. Rogers and James L. McClelland. 2004. Semantic cognition: A parallel distributed processing approach. Jeremy Rose and Christian Lennerholt. 2017. Low cost text mining as a strategy for qualitative researchers. Electronic Journal on Business Research Methods, forthcoming. Shamik Roy and Dan Goldwasser. 2021. Analysis of nuanced stances and sentiment towards entities of US politicians through the lens of moral foundation theory. In *Proceedings of the Ninth International* Workshop on Natural Language Processing for Social Media, pages 1–13, Online. Association for Computational Linguistics. Shamik Roy, Maria Leonor Pacheco, and Dan Goldwasser. 2021. 
Identifying morality frames in political tweets using relational learning. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 9939–9958, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Alison Smith, Varun Kumar, Jordan Boyd-Graber, Kevin Seppi, and Leah Findlater. 2018. Closing the loop: User-centered design and evaluation of a human-in-the-loop topic modeling system. In 23rd International Conference on Intelligent User Interfaces, IUI '18, page 293–304, New York, NY, USA. Association for Computing Machinery.

Keith Stevens, Philip Kegelmeyer, David Andrzejewski, and David Buttler. 2012. Exploring topic coherence over many models and many topics. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 952–961, Jeju Island, Korea. Association for Computational Linguistics.

Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. *Journal of Machine Learning Research*, 9:2579–2605.

Zijie J. Wang, Dongjin Choi, Shenyu Xu, and Diyi Yang. 2021. Putting humans in the natural language processing loop: A survey. In Proceedings of the First Workshop on Bridging Human–Computer Interaction and Natural Language Processing, pages 47–52, Online. Association for Computational Linguistics.

## A Appendix

## A.1 Tool Screenshots

## A.1.1 Exploratory Operations

Figure 14: Visualizing Global Explanations: Theme Distribution

Figure 18: Marking Instances as *Good*

Figure 19: Adding *Good* Examples

## A.2 Interactive Sessions For Covid: First Group Of Experts

Tables 4 and 5 outline the patterns discovered by the first group of experts on the first and second iterations, respectively.

## A.3 Interactive Sessions For Covid: Second Group Of Experts

Tables 6 and 7 outline the patterns discovered by the second group of experts on the first and second iterations, respectively.

## A.4 Interactive Sessions For Immigration

Tables 8 and 9 outline the patterns discovered by the experts for immigration.

## A.5 Topic Modeling Details

To obtain LDA topics with Variational Bayes sampling we use the Gensim implementation (Rehurek and Sojka, 2011). To obtain LDA topics with Gibbs sampling we use the MALLET implementation (McCallum, 2002). In both cases, we follow all the preprocessing steps suggested by Hoyle et al. (2021), with the addition of the words covid, vaccin*, and immigra* to the list of stopwords.

## A.6 Fine-Grained Results

The confusion matrix for Immigration can be seen in Fig. 21. The distribution of errors that do not match any existing theme, according to their similarity interval, can be seen in Fig. 22.
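To make the topic-model baseline of Appendix A.5 concrete, the sketch below shows how the Variational Bayes LDA topics can be obtained with Gensim. It assumes pre-tokenized documents; the stopword handling and hyperparameters (`num_topics`, `passes`) are illustrative choices rather than the exact settings used in the paper, and the Gibbs-sampling variant is run with the MALLET toolkit and is not shown.

```python
# Minimal sketch of the Appendix A.5 setup: LDA with Variational Bayes via Gensim.
# Tokenization and stopword details here are illustrative assumptions.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

EXTRA_STOPWORDS = {"covid", "vaccin", "immigra"}  # stems added to the stopword list

def train_lda(tokenized_docs, num_topics=10):
    # Drop domain-specific stopwords via a simple prefix match on the stems.
    docs = [[t for t in doc if not any(t.startswith(s) for s in EXTRA_STOPWORDS)]
            for doc in tokenized_docs]
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    lda = LdaModel(corpus=corpus, id2word=dictionary,
                   num_topics=num_topics, passes=10, random_state=0)
    return lda, dictionary

# Usage: lda, d = train_lda([["vaccine", "appointment", "today"], ...])
# Top words of a topic: lda.show_topic(0, topn=10)
```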
| Cluster | Experts Rationale | New Named Themes | |-------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------|--------------------| | K-Means 0 | Discusses what the vaccine can and cannot do. | VaxLessensSymptoms | | Emphasis in reducing COVID-19 symptoms in case of infection ("like a bad cold"). Contains tweets with both stances. | | | | K-Means 1 | A lot of mentions to political entities. | GovBadPolicies | | Politicians get in the way of public safety | | | | K-Means 2 | A lot of tweets with mentions and links. | GovGoodPolicies | | Not a lot of textual context. Some examples thanking and praising governmental policies. Theme added upon inspecting similar tweets | | | | K-Means 3 | Overarching theme related to vaccine rollout. Mentions to pharmacies that can distribute, | - | | distribution in certain states, places with unfulfilled vax appointments. Too broad to create a theme | | | | K-Means 4 | Broadcast of vaccine appointments. | VaxAppointments | | Which places you can get vaccine appointments at. | | | | K-Means 5 | "I got my vaccine" type tweets | GotTheVax | | K-Means 6 | Mixed cluster, not a clear theme in centroid. | VaxDoesntWork | | Two prominent flavors: the vaccine not working and | UnjustifiedFearOfVax | | | people complaining about those who are scared of vaccine. | | | | K-Means 7 | Tweets look the same as K-Means 5 | - | | K-Means 8 | Tweets about development and approval of vaccines | VaxApproval | | K-Means 9 | Tweets related to common vaccine side-effects | VaxSideEffects | | Table 4: First Iteration: Patterns Identified in Initial Clusters and Resulting Themes | | | | Cluster | Experts Rationale | New Named Themes | |-------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------|-----------------------| | K-Means 0 | Tweets weighting health benefits/risks, but different arguments. (e.g. it works, doesn't work, makes things worse...) | - | | Too broad to create a theme. | | | | K-Means 1 | Messy cluster, relies on link for information. | - | | K-Means 2 | Relies on link for information. | - | | K-Means 3 | A lot of mentions to government lying and misinformation. | AntiVaxSpreadMisinfo | | "misinformation" is used when blaming antivax people. | ProVaxLie | | | "experts and government are lying" is used on the other side. | AltTreatmentsGood | | | References to alt-treatments on both sides. | AltTreatmentsBad | | | Text lookup "give "us the real meds", "covid meds" | | | | K-Means 4 | Some examples are a good fit for old theme, VaxDoesntWork. | - | | Other than that no coherent theme. | | | | K-Means 5 | Tweets about free will and choice. | FreeChoiceVax | | Text lookup "big gov", "free choice", "my body my choice" | FreeChoiceOther | | | Case "my body my choice" - a lot of mentions to abortion People using covid as a metaphor for other issues. | | | | K-Means 6 | Almost exclusively mentions to stories and news. | - | | K-Means 7 | Availability of the vaccine, policy. | VaxEffortsProgression | | Not judgement of good or bad, but of how well it progresses. | | | | K-Means 8 | Assign to previous theme GotTheVax | - | | K-Means 9 | Vaccine side effects. 
| - | | Assign to previous theme, VaxSymptoms | | | | Table 5: Second Iteration: Patterns Identified in Subsequent Clusters and Resulting Themes | | | | Cluster | Experts Rationale | New Named Themes | |------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------|---------------------------------------| | K-Means 0 | People asking people to get vaccinated. | VaxLessensSymptoms | | Some skeptical but acknowledge it reduces symptoms. It works but it has limitations. More specifically, it lessens the symptoms. | | | | K-Means 1 | Republicans have hurt the vax rate in the US. | ReasonsUSLagsOnVax | | Finding someone (or some party) to blame. Politicians are hurting people with policy. Vaccine in the US is behind, trying to explain why | | | | K-Means 2 | A lot of them are just replies. | - | | Cluster is for links and usernames. | | | | K-Means 3 | Availability and distribution of the vaccine. | VaxDistributionIssuesDueToLocalPolicy | | How stances of people in different states affect it. Vaccine distribution issues due to local policy. | | | | K-Means 4 | Clear cluster. Vaccine info, availability info. | VaxAvailabilityInfo | | K-Means 5 | Testimonials, #IGotMyVax | #IGotMyVax | | K-Means 6 | Some themes match the vaccine lessens symptoms. | VaxDoesMoreHarmThanGood | | Other theme: no need to get the vaccine, it doesn't work. Vaccine does more harm than good. | | | | K-Means 7 | Same as K-means 5 | - | | K-Means 8 | About covid vaccine updates. FDA approval. | FDAApproval | | In other cases it depends on the content on the link. So you can't really tell. | | | | K-Means 9 | Obvious. Vaccine symptoms, vaccine effects. | PostVaxSymptoms | | Post vaccination symptoms. | | | | Table 6: Second Group's First Iteration: Patterns Identified in Initial Clusters and Resulting Themes | | | | Cluster | Experts Rationale | New Named Themes | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------|-----------------------| | K-Means 0 | Links and promotions | - | | K-Means 1 | Looks like previous theme IGotMyVax, assign them. | - | | K-Means 2 | Very short tweets with links, and no context. | - | | Could be availability but not sure. Decided against adding theme | | | | K-Means 3 | Two themes observed. One old one, regarding VaxAvailabilityInfo. | VaxDistributionIssues | | One new one, getting vaccines is difficult. Not related to local policy. Decided against merging with previous theme | | | | K-Means 4 | A lot of talk about skepticism regarding the vaccine. | VaxCapitalism | | Some good matches to previous MoreHarmThanGood, assign them. | VaxInequality | | | Mentions to profiting from the vaccine. Look for similar instances to mentions of profits Text look up for "vaccine getting rich" Mentions to redlining, implications of inequality Text look up for "vaccine inequality" Lots of mentions to racial and monetary inequalities in access to vaccine | | | | K-Means 5 | Both PostVaxSymptoms and IGotMyVax examples, assign them. | - | | K-Means 6 | Mentions to vaccine safety. 
Weighting the safety/risks of the vaccine | VaxSafety | | K-Means 7 | A lot of discussion about the pandemic not being over | CovidNotOver | | Discussion on whether to open back up or not | | | | K-Means 8 | Repetitions, IGotMyVax. Assign them. | - | | K-Means 9 | Mentions to mandates. | VaxPersonalChoice | | The vaccine should be a personal choice, mandates should not be there. Different reasons: personal choice, no proof of whether it works. For no proof, assign to previous MoreHarmThanGood | | | | Table 7: Second Group's Second Iteration: Patterns Identified in Subsequent Clusters and Resulting Themes | | | | Cluster | Experts Rationale | New Named Themes | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------|----------------------------------| | K-Means 0 | Headlines, coverage. Some have an agenda (pro) | AcademicDiscussions | | Others are very academic and research-oriented Opinion pieces. | | | | K-Means 1 | Talking about apprehending immigrants at the border | JustifiedDetainmentEnforce | | Some report about the border but no stance. Deportation. Leaning negative towards immigrants. | | | | K-Means 2 | Less US-centric, more general. | EconomicMigrantsNotAsylumSeekers | | Talking about immigration as a global issue | SituationCountryOfOrigin | | | Humanitarian issues, mentions to refugees, forced migration | RoleOfWesternCountries | | | Situation in country of origin that motivates immigration Mentions to how the west is responsible The role of the target countries in destabilizing countries Mentions to economic migrants. Look up for "economic work migrants", "asylum seekers" | | | | K-Means 3 | About Trump. Trump immigration policy. | TrumpImmiPolicy | | Politicizing immigration. | | | | K-Means 4 | Attacking democrats. | DemocratImmiPolicyBad | | A lot of mentions to democrats wanting votes Common threads is democrats are bad | | | | K-Means 5 | Lacks context, lots of usernames. | ImmigrantInvasion | | Not a cohesive theme. Both pro and con, and vague. | ImmigrantCrime | | | Some mentions to invasion. Look for "illegal immigrants invade" Mentions to caravan, massive exodus of people. Mentions to crime. Look for immigrants murder, immigrants dangerous. A lot of tweets linking immigrants to crime | | | | K-Means 6 | Looks very varied. Not cohesive. | - | | K-Means 7 | Very cohesive. Mentions to detaining children, families. | DetainingChildren | | K-Means 8 | All tweets are about the UK and Britain. | UKProImmiPolicy | | Both pro and anti immigration. | UKAntiImmiPolicy | | | Only common theme is the UK. Almost exclusively policy/politics | | | | K-Means 9 | Economic cost of immigration. | FinacialCostOfImmigration | | Immigration is bad for the US economy Some about crime, and democrats. Assign to existing themes. | | | | Table 8: First Iteration Immigration: Patterns Identified in Initial Clusters and Resulting Themes | | | ![15_image_0.png](15_image_0.png) ## A.7 Shifting Predictions Between Iterations Heatmaps of shifting predictions for Covid can be seen in Fig. 23. The distribution of the unmatched predictions for both Covid and Immigration, according to their similarity intervals can be seen in ![16_image_0.png](16_image_0.png) ![16_image_1.png](16_image_1.png) Fig. 24. 
Additionally, some examples of shifting predictions for the two themes with the most movement for the Immigration case can be seen in Tabs. 10 and 11. ![16_image_3.png](16_image_3.png) ![16_image_2.png](16_image_2.png) ![16_image_4.png](16_image_4.png) ## A.8 Lda Vs. Our Themes An overlap coefficient heatmap between LDA topics with Variational Bayes sampling and our themes for the first iteration of Covid can be seen in Fig. 25. Similarly, they can be seen for the second iterations of both Covid and Immigration in Fig. 26. We also include these heatmaps for LDA with Gibbs sampling in Figs. 27, 28 and 29 ![17_image_0.png](17_image_0.png) ![17_image_1.png](17_image_1.png) 0.15 0.10 0.05 0.00 Second Iteration Figure 23: Shifting predictions for Covid . Themes added during second iteration are shown in red, and values are normalized over the full population. | Distance to | Example Tweets Kept on Role of Western Countries | Example Tweets Shifted to Unknown | |------------------------------------------------------------|--------------------------------------------------------------|-----------------------------------------------------------| | Centroid | Interesting that your problem is with "migrants", where | | | The U.S. Helped Destabilize Honduras. Now Honduran | | | | 0.27 | the U.S. has issues with illegal aliens, that even our legal | | | Migrants Are Fleeing Political and Economic Crisis | migrants wish to be rid of. | | | The root causes of migration aren't being addressed ASAP, | | | | These people are fleeing their countries DIRECTLY because | as they must be. The governments are all busy talking about | | | 0.29 | of U.S. ForeignPolicy. If you don't like refugees. Don't | stopping the consequences without concrete plans to solve | | create 'em. | the causes | | | What's missing in the US corporate news on migrants is the | | | | 0.30 | Don't want migrants? Stop blowing their countries to pieces | way American "aid" is used to overturn democracies, prop | | up strongmen and terrify the opposition. | | | Table 10: Role of Western Countries : Examples of tweets kept on theme (Left) and shifted to unknown (Right) between the first and second iteration. On Right are the tweets closest to the theme centroid that shifted to Unknown . On Left are tweets that did not shift, but have the same distance. ![17_image_2.png](17_image_2.png) Figure 28: Overlap Coefficients between LDA Gibbs Sampling and our Themes (First Iteration for Immigration ). | Distance to | Example Tweets Kept on Trump Immigration Policy | Example Tweets Shifted to Unknown | |--------------------------------------------------------------|-------------------------------------------------------------|-------------------------------------------------------| | Centroid | The anti-migrant cruelty of the Trump Admin knows no | | | Racist realDonaldTrump wastes our tax money on lock- | bounds. | This targeting of migrant families is meant to | | 0.24 | ing up little kids in #TrumpConcentrationCamps and steals | induce fear and doesnt address our broken immigration | | from our military to waste money on his #ReElectiomHate- | system. We should be working to make our immigration | | | Wall and spends little on anything else. | system more humane, not dangerous and cruel. | | | This is unlawful and is directed at mothers with their chil- | | | | dren! He had no remorse for separating immigrants earlier, | | | | Trump promises immigration crackdown ahead of U.S. elec- | | | | 0.25 | now he's threatening their lives! 
It's heart wrenching, but | | | tion | Trumpf has no heart! He's void of feeling empathy! Read | | | they are in prison camps? WH ignoring cries | | | | Trump to end asylum protections for most Central American | BC News - Daca Dreamers: Trump vents anger on immi- | | | 0.26 | migrants at US-Mexico border | grant programme | ![18_image_0.png](18_image_0.png) ![19_image_0.png](19_image_0.png) ![19_image_1.png](19_image_1.png) (b) Immigration ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 5: Limitations A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract, Section 1: Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Yes, Section 3: Framework ✓ B1. Did you cite the creators of artifacts you used? Yes, Section 3: Framework ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Linked to gitlab, where this information is provided. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Linked to gitlab, where this information is provided. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. No new data introduced. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4: Case Studies ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4: Case Studies ## C ✓ **Did You Run Computational Experiments?** Section 4: Case Studies ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4: Case Studies The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4: Case Studies ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4: Case Studies ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 
Linked to gitlab, where this information is provided. ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 3: Framework, Section 4: Case Studies ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Section 3: Framework, Section 4: Case Studies ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 3: Framework, Section 4: Case Studies ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Section 3: Framework, Section 4: Case Studies ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Section 3: Framework, Section 4: Case Studies ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 3: Framework, Section 4: Case Studies
moghimifar-etal-2023-normmark
NormMark: A Weakly Supervised Markov Model for Socio-cultural Norm Discovery
https://aclanthology.org/2023.findings-acl.314
Norms, which are culturally accepted guidelines for behaviours, can be integrated into conversational models to generate utterances that are appropriate for the socio-cultural context. Existing methods for norm recognition tend to focus only on surface-level features of dialogues and do not take into account the interactions within a conversation. To address this issue, we propose NormMark, a probabilistic generative Markov model to carry the latent features throughout a dialogue. These features are captured by discrete and continuous latent variables conditioned on the conversation history, and improve the model{'}s ability in norm recognition. The model is trainable on weakly annotated data using the variational technique. On a dataset with limited norm annotations, we show that our approach achieves higher F1 score, outperforming current state-of-the-art methods, including GPT3.
# NormMark: A Weakly Supervised Markov Model for Socio-Cultural Norm Discovery

Farhad Moghimifar, Shilin Qu, Tongtong Wu, Yuan-Fang Li, and Gholamreza Haffari

Department of Data Science and AI, Monash University, Australia {first.lastname}@monash.edu

## Abstract

Norms, which are culturally accepted guidelines for behaviours, can be integrated into conversational models to generate utterances that are appropriate for the socio-cultural context. Existing methods for norm recognition tend to focus only on surface-level features of dialogues and do not take into account the interactions within a conversation. To address this issue, we propose NORMMARK, a probabilistic generative Markov model to carry the latent features throughout a dialogue. These features are captured by discrete and continuous latent variables conditioned on the conversation history, and improve the model's ability in norm recognition. The model is trainable on weakly annotated data using the variational technique. On a dataset with limited norm annotations, we show that our approach achieves a higher F1 score, outperforming current state-of-the-art methods, including GPT-3.

## 1 Introduction

Norms can be thought of as pre-defined socio-culturally acceptable boundaries for human behaviour (Fehr and Fischbacher, 2004), and incorporating them into conversational models helps to produce contextually, socially, and culturally appropriate utterances. For instance, identifying the socio-cultural norm of *Greeting* in a negotiation helps to generate responses suitable for the power dynamics and social setting, whereas failing to detect and adhere to such norms can negatively impact social interactions (Hovy and Yang, 2021). Recent advances in developing chatbots have also highlighted the necessity of incorporating such implicit socio-cultural information into machine-generated responses, in order to approximate human-like interactions (Huang et al., 2020; Liu et al., 2021).

Norm discovery is a nascent research problem, and current approaches (Hwang et al., 2021) heavily rely on manually constructed sets of rules from available resources such as Reddit (Forbes et al., 2020; Ziems et al., 2022). In addition to the time and cost inefficiency of such approaches, the construction and use of these banks of norms treat each sentence or segment in isolation, and they fail to take the dependencies between norms in the flow of a dialogue into account (Fung et al., 2022; Chen and Yang, 2021). For instance, it is most likely that a dialogue segment containing the norm of *Request* follows a segment that includes *Request* and *Criticism* (Figure 1). Furthermore, such approaches require a large amount of annotated data, limiting their performance on sparsely labelled resources.

To address these limitations, in this paper, we propose a deep generative Markov model that captures the inter-dependencies between turns (segments) of partially-labelled dialogues. The model includes two types of latent variables (LVs): (i) the discrete LVs capture the socio-cultural norms of the dialogue turns, and (ii) the continuous LVs capture other aspects, e.g. related to fluency, topic, and meaning. These latent variables facilitate capturing label- and content-related properties of the previous turns of the conversation, and are conditioned on the previous turns in a Markovian manner.
We train the model on weakly annotated data using the variational technique, building on variational autoencoders (Kingma and Welling, 2014). To evaluate the performance of our model in the task of socio-cultural norm discovery, we conducted experiments on an existing dataset. Experimental results show the superiority of our model, by 4 points in F1 score, over the state-of-the-art approaches, in which each segment of a dialogue is modelled independently of the others. Furthermore, by evaluating our model on low amounts of training data, we show the capability of our proposed approach in capturing socio-cultural norms on partially-labeled data.

## 2 Related Works

Recent approaches have tried to develop models with human psychological and behavioural capabilities (Jiang et al., 2021; Botzer et al., 2022; Lourie et al., 2021). Other approaches targeted identifying implicit social paradigms by developing sequence generation models (Moghimifar et al., 2020; Bosselut et al., 2019). However, the task of socio-cultural norm discovery has been overlooked, mostly due to the lack of proper annotated data (Fung et al., 2022). Forbes et al. (2020) present a dataset of social norms collected from Reddit and propose a generative model to expand this collection. Zhan et al. (2022) also showed how social norms can be useful in conducting better negotiation dialogues. In a similar approach, Ziems et al. (2022) present a corpus of moral norms. Zhan et al. (2023) and Fung et al. (2022) use a prompt-based large-scale language model to generate rules from dialogues. More similar to our approach, existing models identify labels associated with utterances of dialogues (Chen and Yang, 2021; Yang et al., 2019; Yu et al., 2020). However, these approaches fail to take into account the flow of contextual information throughout a dialogue. In contrast to these studies, our approach addresses this task by considering the inter-dependencies between turns of dialogues.

## 3 A Generative Markov Model For Socio-Cultural Norm Discovery

We are given a set of dialogues $D = \{d^i\}_{i=1}^{n}$, where each dialogue consists of a set of turns (or segments) $d^i = \{s^i_j\}_{j=1}^{m}$. Each turn consists of a sequence of tokens from a vocabulary set $V$. The dialogue set $D$ consists of two subsets of labeled ($D_L$) and unlabeled ($D_U$) dialogues, where each turn $s^i_m \in D_L$ is annotated with a socio-cultural norm label $c_i \in C$ with a total of $K$ norm classes. The turns in the unlabeled dataset lack socio-cultural norm labels. Our goal is to develop a model that, by using contextual information carried from previous turns of the dialogue, discovers the socio-cultural norm associated with the turns of a dialogue.

Probabilistic Generative Model. Our model (shown in Fig. 2) assumes a directed generative model, in which a turn is generated by a factor capturing the socio-cultural norms and another factor capturing other aspects, e.g. topic and syntax. For each turn, the socio-cultural norm factor is captured by a discrete latent variable $c_i$, and the other aspects are captured by a continuous latent variable $z_i$. As our aim is to leverage the contextual information, the latent variables of each turn of the dialogue are conditioned on those from the previous turn in a Markovian manner.
As such, our proposed generative model for each turn is as follows:

$$p_{\theta}(s_i, z_i, c_i \mid z_{i-1}, c_{i-1}) = p_{\theta}(s_i \mid c_i, z_i)\, p_{\theta}(z_i \mid z_{i-1})\, p_{\theta}(c_i \mid c_{i-1})$$

where $p_{\theta}(c_i \mid c_{i-1})$ and $p_{\theta}(z_i \mid z_{i-1})$ capture the dependency of the causal factors on the previous turn, and $p_{\theta}(s_i \mid c_i, z_i)$ is a sequence generation model conditioned on the causal factors.

Training. To train the model, the likelihood function for a dialogue in $D_U$ is:

$$p_{\theta}(s_1, \ldots, s_n) = \sum_{c_1, \ldots, c_n} \int dz_1 \ldots dz_n \prod_{i=1}^{n} p_{\theta}(s_i \mid c_i, z_i)\, p_{\theta}(z_i \mid z_{i-1})\, p_{\theta}(c_i \mid c_{i-1}).$$

Intuitively, the training objective for each dialogue turn corresponds to an extension of the variational autoencoder (VAE) which involves: (i) both discrete and continuous latent variables, and (ii) conditioning on the latent variables of the previous turn. As such, we resort to the following variational evidence lower bound (ELBO) for the unlabeled turns:

$$\begin{aligned} \log p(s_i \mid c_{i-1}, z_{i-1}) \geq\; & \mathbb{E}_{q_{\phi}(c_i \mid s_i, c_{i-1})}\, \mathbb{E}_{q_{\phi}(z_i \mid s_i, z_{i-1})} \big[\log p_{\theta}(s_i \mid z_i, c_i)\big] \\ & - \mathrm{KL}\big[q_{\phi}(z_i \mid s_i, z_{i-1}) \,\|\, p_{\theta}(z_i \mid z_{i-1})\big] \\ & - \mathrm{KL}\big[q_{\phi}(c_i \mid s_i, c_{i-1}) \,\|\, p_{\theta}(c_i \mid c_{i-1})\big] \end{aligned}$$

where the $q_{\phi}$'s are variational distributions. We have nested ELBOs, each of which corresponds to a turn in the dialogue. We refer to the collection of these ELBOs for all dialogues in $D_U$ by $\mathcal{L}(D_U)$. For the labeled turns, the ELBO for a dialogue turn is

$$\begin{aligned} \log p(s_i, c_i \mid c_{i-1}, z_{i-1}) \geq\; & \log p_{\theta}(c_i \mid c_{i-1}) + \mathbb{E}_{q_{\phi}(z_i \mid s_i, z_{i-1})}\big[\log p_{\theta}(s_i \mid z_i, c_i)\big] \\ & - \mathrm{KL}\big[q_{\phi}(z_i \mid s_i, z_{i-1}) \,\|\, p_{\theta}(z_i \mid z_{i-1})\big] \end{aligned}$$

where we also add the term $\log q_{\phi}(c_i \mid s_i, c_{i-1})$ to the training objective. We refer to the collection of ELBOs for all dialogues in the labeled data as $\mathcal{L}(D_L)$. Finally, the training objective based on the labeled and unlabeled dialogues is $\mathcal{L} = \mathcal{L}(D_U) + \lambda\, \mathcal{L}(D_L)$, where $\lambda$ trades off the effect of the labeled and unlabeled data. We resort to the reparametrisation trick for continuous and discrete (Gumbel-softmax (Jang et al., 2017)) latent variables when optimising the training objective.

Architectures. We use a transformer-based encoder to encode the turns $s_i$ into a hidden representation $h^s_i$. The classifier $q_{\phi}(c_i \mid s_i, c_{i-1})$ is a 2-layer MLP with tanh non-linearity whose inputs are $h^s_i$ and the embedding of $c_{i-1}$. For $q_{\phi}(z_i \mid s_i, z_{i-1})$, we use a multivariate Gaussian distribution, whose parameters are produced by MLPs from $h^s_i$ and $z_{i-1}$. For $p_{\theta}(s_i \mid z_i, c_i)$, we use an LSTM decoder, where the conditioning is performed by replacing pre-defined special tokens in the embedding space with $z_i$ and $c_i$. For $p_{\theta}(c_t \mid c_{t-1})$, we use an MLP with a softmax on top.

## 4 Experiments

In this section we report the performance of our model on the task of socio-cultural norm discovery in comparison to the current state-of-the-art models.

Dataset. In our experiments, we use LDC2022E20. This dataset consists of 13,074 segments of dialogues in Mandarin Chinese. The dialogues are from text, audio, and video documents, where we transcribed the audio and video files using Whisper (Radford et al., 2022). The segments have been labelled from the set of socio-cultural norm labels of none, Apology, Criticism, Greeting, Request, Persuasion, Thanks, and *Taking leave*.
We split the data into train/test/development sets with the ratio of 60:20:20. Each dialogue is divided into sequences of segments of length 5, where on average each segment consists of 8 sentences. We report the performance of our model, in comparison to the baselines, when using the maximum number of labeled data in the training set (Max). In addition, to evaluate the effect of the amount of training data on the performance of our model, we randomly select 50 and 100 of these sequences of dialogues for training, and report the results on the test set.

Baselines. We compare our model with LSTM (Hochreiter and Schmidhuber, 1997) and BERT (Devlin et al., 2019), where each turn of a dialogue is encoded separately. We use WS-VAE-BERT (Chen and Yang, 2021) as another baseline, which encodes the contextual representation of a segment via a latent variable. However, WS-VAE-BERT does not capture the connections between segments. To experiment with the performance of our model on limited labeled data, we compare it to SetFit (Tunstall et al., 2022), which has proven to be a strong few-shot learning model. Similar to our model, we use 'bert-base-chinese' as the backbone of BERT and WS-VAE-BERT, and 'sbert-base-chinese-nli' has been used in SetFit. Additionally, we compare our model with the prompt-based large-scale language models GPT-3 text-davinci-003 (Brown et al., 2020) and ChatGLM (Du et al., 2022), where the norm labels are given to the model with segments of dialogue, and the model is asked to generate a socio-cultural norm label from the list.

Evaluation Metrics. Following previous works in classification tasks, we report the macro-averaged precision, recall, and F1 score of the models in predicting the socio-cultural norm label of each segment of a dialogue.

| Model | P | R | F1 | Size |
|---|---|---|---|---|
| LSTM | 9.13 | 12.57 | 10.08 | 0.4M |
| BERT | 38.42 | 32.33 | 33.34 | 109M |
| WS-VAE-BERT | 42.03 | **40.74** | 39.01 | 132M |
| SetFit | 41.42 | 40.23 | 40.54 | 102M |
| ChatGLM | 17.64 | 20.55 | 17.19 | 6B |
| GPT-3 | 39.86 | 35.05 | 33.61 | 175B |
| NORMMARKzero | 44.14 | 36.97 | 39.49 | 136M |
| NORMMARK | **47.92** | 38.67 | **44.20** | 131M |

Table 1: Results on the Max setting (macro-averaged precision, recall, and F1) together with model size.

## 4.1 Results

Table 1 summarises the main results of the conducted experiment on the LDC2022E20 data. On the Max setting, where the model uses the maximum number of datapoints in the training set, our model outperforms all of the baselines with a margin of 4 and 6 points on F1 and precision, respectively, and achieves a comparable result in recall. This gap between our model and WS-VAE-BERT indicates the effect of carrying contextual information from previous turns of conversation. In addition, the lower results of GPT-3 suggest that discovering socio-cultural norms is a challenging task, which needs higher-level reasoning.

Amount of Labeled Data. To evaluate the performance of our model with a smaller amount of training data, we report the results of using only 50 and 100 datapoints during training in Table 3. When using 100 sequences of turns, our model achieves the highest F1 score, and improves the precision and recall by more than 3 points over non-prompt-based models. However, GPT-3 outperforms our proposed model in these two metrics. Similarly, on a more limited amount of training data (the 50 setting), GPT-3 shows its dominance. Nevertheless, our model performs the best amongst the other baselines, improving the F1 score by 3 points.
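For reference, the macro-averaged metrics reported in this section can be computed as in the sketch below; the helper name and percentage scaling are our own choices, and scikit-learn is only one of several equivalent options.

```python
# Macro-averaged precision/recall/F1 over segment-level norm labels,
# matching the evaluation protocol described above (scaled to percentages).
from sklearn.metrics import precision_recall_fscore_support

def macro_scores(y_true, y_pred):
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    return {"P": 100 * p, "R": 100 * r, "F1": 100 * f1}

# Example: macro_scores(["Greeting", "none", "Request"],
#                       ["Greeting", "Apology", "Request"])
```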
| Model | 50 | 100 | Max |
|---|---|---|---|
| NORMMARKzero-extended | 13.41 | 14.33 | 20.33 |
| NORMMARKextended | 13.48 | 14.76 | 20.25 |
| NORMMARK | 32.46 | 34.43 | 44.2 |

Table 2: Segment-level socio-cultural norm prediction of two variations of our approach, in comparison to our model. The results are macro-averaged F1 scores.

| Model | P (50) | R (50) | F1 (50) | P (100) | R (100) | F1 (100) |
|---|---|---|---|---|---|---|
| LSTM | 7.92 | 12.5 | 9.69 | 11.06 | 12.58 | 9.90 |
| BERT | 10.85 | 15.74 | 12.58 | 10.46 | 15.50 | 11.82 |
| WS-VAE-BERT | 20.43 | 17.21 | 16.60 | 36.38 | 22.48 | 23.37 |
| SetFit | 30.42 | 26.25 | 27.86 | 32.12 | 30.54 | 31.32 |
| ChatGLM | 17.64 | 20.55 | 17.19 | 17.64 | 20.55 | 17.19 |
| GPT-3 | 39.86 | 35.05 | 33.61 | 39.86 | 35.05 | 33.61 |
| NORMMARKzero | 25.94 | 19.71 | 20.04 | 35.41 | 27.73 | 28.33 |
| NORMMARK | 32.46 | 30.02 | 30.72 | 36.41 | 33.48 | 34.43 |

Table 3: Results with 50 and 100 training sequences (macro-averaged precision, recall, and F1).

Conditioning on the Context. To analyse the effect of carrying contextual information from previous turns of dialogue, we report the performance of the simplified version of our model (NORMMARKzero), where the connections from the previous turn are omitted. As can be seen in Table 1, in all of the settings NORMMARK outperforms the simplified version, indicating the importance of inter-dependencies between turns. Furthermore, we developed two variations of NORMMARK and NORMMARKzero where the contextual information from previous turns is carried directly through the previous segment (Figure 4). In Table 2, the lower performance of these models suggests that the contextual information from the previous turn overshadows the representation of the latent variable as well as the norm label, and consequently the norm classifier is profoundly biased towards the previous turn of dialogue.

Markov Order. We further analysed the effect of carrying contextual meaning from previous turns of dialogues by varying the size of the Markov conditioning context $l$ from 1 to 9, i.e. each of our proposed latent variables is conditioned on the previous $l$ turns of dialogue. Figure 3 summarises the results. It shows that a shorter context results in lower performance, due to passing less contextual information to the next turns. On the other hand, too long a context results in lower performance as well, due to the extra complexity of modelling longer dependencies in latent variables and norm labels. As shown in the figure, our model performs best with a context size of 5 on this dataset.

## 5 Conclusion

In this work, we address the task of socio-cultural norm discovery from open-domain conversations. We present a probabilistic generative model that captures the contextual information from previous turns of dialogues. Through empirical results, we show that our model outperforms state-of-the-art models in addressing this task.

## 6 Limitations

We have studied the task of socio-cultural norm discovery based on the LDC2022E20 dataset, which consists of everyday situational interactions in Mandarin Chinese. Although we believe that our approach can be used in other cultural settings, the current state of the model might not be generalisable to other cultures, unless further tuning is possible. Our model's ability in discovering such norms can help to improve conversational agents; however, real-world scenarios involving duplicitous or ambiguous terms might confuse our proposed approach.
In addition, our model is limited to the textual modality, and we believe incorporating audio and visual features into the model can improve identifying socio-cultural norms. Nonetheless, the reliance of our model on large-scale pre-trained language models might result in some deployment challenges in situations with limited resources. Besides, all the reported results are by fixing a random seed running all experiments once. ## 7 Ethics Statement Our work leverages pre-trained language models (BERT), therefore similar potential risks of this model is inherited by our work. ## 8 Acknowledgements This material is based on research sponsored by DARPA under agreement number HR001122C0029. The U.S. Government is authorised to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The authors are grateful to the anonymous reviewers for their helpful comments. ## References Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762–4779, Florence, Italy. Association for Computational Linguistics. Nicholas Botzer, Shawn Gu, and Tim Weninger. 2022. Analysis of moral judgment on reddit. *IEEE Transactions on Computational Social Systems*. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems. Jiaao Chen and Diyi Yang. 2021. Weakly-supervised hierarchical models for predicting persuasive strategies in good-faith textual requests. In Proceedings of the AAAI Conference on Artificial Intelligence. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. Glm: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Ernst Fehr and Urs Fischbacher. 2004. Social norms and human cooperation. *Trends in cognitive sciences*. Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, and Yejin Choi. 2020. Social chemistry 101: Learning to reason about social and moral norms. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP), pages 653–670, Online. Association for Computational Linguistics. Yi R Fung, Tuhin Chakraborty, Hao Guo, Owen Rambow, Smaranda Muresan, and Heng Ji. 2022. Normsage: Multi-lingual multi-cultural norm discovery from conversations on-the-fly. *arXiv preprint* arXiv:2210.08604. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*. Jimin Hong, Jungsoo Park, Daeyoung Kim, Seongjae Choi, Bokyung Son, and Jaewook Kang. 2022. Tess: Zero-shot classification via textual similarity comparison with prompting using sentence encoder. *arXiv* preprint arXiv:2212.10391. 
Dirk Hovy and Diyi Yang. 2021. The importance of modeling social factors of language: Theory and practice. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 588–602, Online. Association for Computational Linguistics. Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2020. Challenges in building intelligent open-domain dialog systems. *ACM Transactions on Information* Systems (TOIS). Jena D Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, and Yejin Choi. 2021. (comet-) atomic 2020: On symbolic and neural commonsense knowledge graphs. In *Proceedings of the AAAI Conference on Artificial* Intelligence. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparametrization with gumble-softmax. In International Conference on Learning Representations. Liwei Jiang, Jena D Hwang, Chandra Bhagavatula, Ronan Le Bras, Maxwell Forbes, Jon Borchardt, Jenny Liang, Oren Etzioni, Maarten Sap, and Yejin Choi. 2021. Delphi: Towards machine ethics and norms. arXiv preprint arXiv:2110.07574. Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT. Diederik P. Kingma and Max Welling. 2014. AutoEncoding Variational Bayes. In *2nd International* Conference on Learning Representations. Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021. Towards emotional support dialog systems. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3469–3483, Online. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In *International Conference on Learning Representations*. Nicholas Lourie, Ronan Le Bras, and Yejin Choi. 2021. Scruples: A corpus of community ethical judgments on 32,000 real-life anecdotes. In *Proceedings of the* AAAI Conference on Artificial Intelligence. Farhad Moghimifar, Lizhen Qu, Yue Zhuo, Mahsa Baktashmotlagh, and Gholamreza Haffari. 2020. CosMo: Conditional Seq2Seq-based mixture model for zeroshot commonsense question answering. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 5347–5359, Barcelona, Spain (Online). International Committee on Computational Linguistics. Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2022. Robust speech recognition via large-scale weak supervision. *arXiv preprint arXiv:2212.04356*. Lewis Tunstall, Nils Reimers, Unso Eun Seo Jo, Luke Bates, Daniel Korat, Moshe Wasserblat, and Oren Pereg. 2022. Efficient few-shot learning without prompts. *arXiv preprint arXiv:2209.11055*. Diyi Yang, Jiaao Chen, Zichao Yang, Dan Jurafsky, and Eduard Hovy. 2019. Let's make your request more persuasive: Modeling persuasive strategies via semisupervised neural nets on crowdfunding platforms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3620–3630, Minneapolis, Minnesota. Association for Computational Linguistics. Wenmeng Yu, Hua Xu, Fanyang Meng, Yilin Zhu, Yixiao Ma, Jiele Wu, Jiyun Zou, and Kaicheng Yang. 2020. 
CH-SIMS: A Chinese multimodal sentiment analysis dataset with fine-grained annotation of modality. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 3718–3727, Online. Association for Computational Linguistics. Haolan Zhan, Zhuang Li, Yufei Wang, Linhao Luo, Tao Feng, Xiaoxi Kang, Yuncheng Hua, Lizhen Qu, Lay-Ki Soon, Suraj Sharma, et al. 2023. Socialdial: A benchmark for socially-aware dialogue systems. arXiv preprint arXiv:2304.12026. Haolan Zhan, Yufei Wang, Tao Feng, Yuncheng Hua, Suraj Sharma, Zhuang Li, Lizhen Qu, and Gholamreza Haffari. 2022. Let's negotiate! a survey of negotiation dialogue systems. arXiv preprint arXiv:2212.09072. Zhilu Zhang and Mert Sabuncu. 2018. Generalized cross entropy loss for training deep neural networks with noisy labels. Advances in neural information processing systems. Caleb Ziems, Jane Yu, Yi-Chia Wang, Alon Halevy, and Diyi Yang. 2022. The moral integrity corpus: A benchmark for ethical dialogue systems. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), pages 3755–3773, Dublin, Ireland. Association for Computational Linguistics. ![6_image_0.png](6_image_0.png) ## A Experimental Details To train our model, we have used the pre-trained 'bert-base-chinese', which is licensed free to use for research purposes, as the encoder (Kenton and Toutanova, 2019), and we have used LSTM (Hochreiter and Schmidhuber, 1997) with hidden dimension of 128 and one hidden layer as the decoder. We used a dropout of 0.6 over the input. We implemented the norm classifier with a 2-layers MLP with tanh non-linearity on top. We used CrossEntropyLoss (Zhang and Sabuncu, 2018) as loss function over the predictions of our model. We used AdamW (Loshchilov and Hutter, 2018) as the optimiser with the learning rate of 1e-5 for the encoder and 1e-3 for the rest of the network. We trained our model for 50 epochs, on a single machine with NVIDIA one A100 gpu, with an early stop if the validation accuracy is not improved for more than 20 iterations. For the baselines, we have developed a network with a two-stacked LSTM layers followed by two linear layers. We compared out model with BERT, where uses the 'bert-base-chinese' pre-trained model. Each of these two models where trained for 100 epochs, using AdmaW optimiser with the learning rates of 1e-3 and 5e-5, respectively. For WS-VAE-BERT (Chen and Yang, 2021), we followed the source code provided in the paper. For replicating the document level labels, when a segment within the sequence of segments contained a socio-cultural norm, we labeled them 1, otherwise 0. We trained SetFit (Hong et al., 2022) by following the online instructions on their GitHub repository 1. Figure 4 shows the variations of our model, which we used in for the ablation study. GPT-3 (Brown et al., 2020) was used by incorporating the list of socio-cultural norms into the prompt as well as dialogues, and asking to generate the corresponding label. Our experiments on GPT3 showed that using random examplars from the training set of LDC2022E20 results in a decrease in the performance. The LDC2022E20 dataset is the copyrighted property of (c) 2022 Trustees of the University of Pennsylvania and has been used for research purposes in CCU program. This dataset was developed to help models to identify sociocultural norms in courses of dialogues. In all of our experiments we used a fix random seed, hence all results are reported based on singlerun of the models. 
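To tie the architecture of Section 3 to the hyperparameters listed above, the following is a minimal sketch of the per-turn components (encoder, norm classifier, Gaussian posterior with reparameterisation, and Gumbel-softmax sampling). This is our own illustrative reconstruction rather than the authors' code: the latent dimensions, the [CLS]-based pooling, and the standard-normal KL term (the paper instead uses a prior conditioned on the previous turn) are simplifying assumptions, and the LSTM decoder and loss weighting are omitted.

```python
# Illustrative per-turn sketch of the NormMark components (not the authors' implementation).
# Assumptions: bert-base-chinese encoder with [CLS] pooling, z_dim/label_dim sizes,
# and a simplified KL to N(0, I) instead of the turn-conditioned prior p(z_i | z_{i-1}).
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel

class NormMarkTurn(nn.Module):
    def __init__(self, num_norms=8, z_dim=64, label_dim=32):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("bert-base-chinese")
        h = self.encoder.config.hidden_size
        self.label_emb = nn.Embedding(num_norms, label_dim)
        # q(c_i | s_i, c_{i-1}): 2-layer MLP with tanh non-linearity
        self.classifier = nn.Sequential(
            nn.Linear(h + label_dim, h), nn.Tanh(), nn.Linear(h, num_norms))
        # q(z_i | s_i, z_{i-1}): mean and log-variance of a diagonal Gaussian
        self.mu = nn.Linear(h + z_dim, z_dim)
        self.logvar = nn.Linear(h + z_dim, z_dim)

    def forward(self, input_ids, attention_mask, c_prev, z_prev, tau=1.0):
        h_s = self.encoder(input_ids, attention_mask=attention_mask
                           ).last_hidden_state[:, 0]             # [CLS] representation
        logits = self.classifier(torch.cat([h_s, self.label_emb(c_prev)], dim=-1))
        c_soft = F.gumbel_softmax(logits, tau=tau)                # relaxed discrete sample
        mu = self.mu(torch.cat([h_s, z_prev], dim=-1))
        logvar = self.logvar(torch.cat([h_s, z_prev], dim=-1))
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # Gaussian reparameterisation
        # Simplified KL to N(0, I); the paper's prior is p(z_i | z_{i-1}).
        kl_z = 0.5 * (torch.exp(logvar) + mu ** 2 - 1.0 - logvar).sum(-1)
        return logits, c_soft, z, kl_z
```

Following the settings above, the encoder parameters would then be optimised with AdamW at a learning rate of 1e-5 and the remaining parameters at 1e-3.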
## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 6 ✓ A2. Did you discuss any potential risks of your work? Section & ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 5 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 And Appendix A ✓ B1. Did you cite the creators of artifacts you used? Section 4 and Appendix A ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix A ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix A ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We reached out to the provider of the dataset to get information about data collection, but we haven't got any responses back yet. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4 and Appendix A ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4 ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 and Appendix A ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 and Appendix A ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 and Appendix A D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. 
Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
nguyen-son-etal-2023-votetrans
{V}ote{TRANS}: Detecting Adversarial Text without Training by Voting on Hard Labels of Transformations
https://aclanthology.org/2023.findings-acl.315
Adversarial attacks reveal serious flaws in deep learning models. More dangerously, these attacks preserve the original meaning and escape human recognition. Existing methods for detecting these attacks need to be trained using original/adversarial data. In this paper, we propose detection without training by voting on hard labels from predictions of transformations, namely, VoteTRANS. Specifically, VoteTRANS detects adversarial text by comparing the hard labels of input text and its transformation. The evaluation demonstrates that VoteTRANS effectively detects adversarial text across various state-of-the-art attacks, models, and datasets.
# Votetrans**: Detecting Adversarial Text Without Training** By Voting On Hard Labels Of Transformations Hoang-Quoc Nguyen-Son1, Seira Hidano1**, Kazuhide Fukushima**1, Shinsaku Kiyomoto1, and **Isao Echizen**2 1KDDI Research, Inc., Japan 2National Institute of Informatics, Japan 1{xso-guen,se-hidano,ka-fukushima,sh-kiyomoto}@kddi.com [email protected] ## Abstract Adversarial attacks reveal serious flaws in deep learning models. More dangerously, these attacks preserve the original meaning and escape human recognition. Existing methods for detecting these attacks need to be trained using original/adversarial data. In this paper, we propose detection without training by voting on hard labels from predictions of transformations, namely, VoteTRANS. Specifically, VoteTRANS detects adversarial text by comparing the hard labels of input text and its transformation. The evaluation demonstrates that VoteTRANS effectively detects adversarial text across various state-of-the-art attacks, models, and datasets. ## 1 Introduction Deep learning models are sensitive to changes in input text from an adversarial attack. Even a slight change enormously impacts the prediction of models. More dangerously, these changes still preserve the input meaning, so attacks remain unrecognized by humans. This vulnerability has negatively affected the reputation of deep learning models. In contrast to adversarial text defense, fewer works have been proposed to detect adversarial texts. Previous works detected such texts via perturbed word identification (Zhou et al., 2019; Mozes et al., 2021), synonyms (Wang et al., 2022b), density (Yoo et al., 2022), attention (Biju et al., 2022), PCA (Raina and Gales, 2022), transformer (Wang et al., 2022a), and word importance (Mosca et al., 2022). Since existing works need original/adversarial data to train detectors, they are sensitive to new adversarial attacks. Motivation: Adversarial text must satisfy two criteria: the text must (1) change the prediction of a target model while (2) preserving the original meaning. Few texts can comply with both criteria. For example, we randomly selected original text from AG News and used a probability-weighted word saliency (*PWWS*) attack (Ren et al., 2019) to generate adversarial text (Figure 1). *PWWS* replaces original words to fool a target model (CNN). During this generation process, only the final text fooled the target CNN, while other texts were still correctly predicted by the target CNN and another model, such as RoBERTa. We find the same trend for other AG News texts and IMDB movie reviews as shown in Appendix A. Contributions: We propose a simple detector by voting on hard labels of transformations (VoteTRANS). In particular, we generate a transformation set for each word in the input text. We then compare the original hard label from the input text and the majority vote from each transformation set. If we find any difference in the comparison, the adversarial text is identified. In summary, our contributions are listed as follows: - To the best of our knowledge, VoteTRANS is the first model to detect adversarial text from various attacks without training. Moreover, we do not modify a target model and only use the target as a black-box setting for prediction. VoteTRANS can thus be applied to a wide range of various models. - Experiments on various attacks, models, and datasets demonstrate that VoteTRANS outperforms state-of-the-art detectors. 
- VoteTRANS can run with all seventeen current attacks related to text classification from the TextAttack framework (Morris et al., 2020). VoteTRANS is also automatically compatible with future attacks from this framework without changing its source code1.

| | CNN | RoBERTa |
|---|---|---|
| Original: …a project to test wireless public transport… | Sci/Tech | Sci/Tech |
| Adversarial: …a project to tryout radiocommunication public transport… | Business | Sci/Tech |

Figure 1: A process generates adversarial text by synonym-based transformation targeting a CNN model. During this process, only the final adversarial text fools the CNN, while other texts are still correctly predicted by the CNN and RoBERTa.

## 2 Related Work

## 2.1 Adversarial Attack

Many adversarial attacks have emerged since 2018 and have been supported by TextAttack (Morris et al., 2020). We categorize all seventeen attacks from TextAttack related to text classification by their levels.

At the word level, *Alzantot* (Alzantot et al., 2018), Faster Alzantot (Jia et al., 2019), and an improved genetic algorithm (IGA) (Wang et al., 2021) ran a genetic algorithm to generate adversarial text. PWWS (Ren et al., 2019) and *TextFooler* (Jin et al., 2020) transformed a text with synonyms from a fixed lexical database (WordNet) and from word embeddings, respectively. Zang et al. (2020) applied particle swarm optimization (PSO) to synonyms from a big database (HowNet). *Kuleshov* (Kuleshov et al., 2018) and A2T (Yoo and Qi, 2021) measured the similarity with GPT-2 and DistilBERT, respectively. *BERT-Attack* (Li et al., 2020) extracted the synonyms from a masked language model. BAE (Garg and Ramakrishnan, 2020) used both word insertion and replacement with the masked model. *CLARE* (Li et al., 2021) extended this model by merging two consecutive words. *Checklist* (Ribeiro et al., 2020) verified the model consistency by changing an input word into a neutral entity (e.g., location, name, or number). *Input-Reduction* (Feng et al., 2018) removed the word of lowest importance until the target model changed the prediction.

At the character and hybrid levels, *HotFlip* (Ebrahimi et al., 2018) accessed the gradient of a target model to manipulate the loss change. *DeepWordBug* (Gao et al., 2018) transformed words using four character-level operations, including swapping, substitution, deletion, and insertion. *Pruthi* (Pruthi et al., 2019) added a QWERTY keyboard operator. *TextBugger* (Li et al., 2019) combined character and word operators.

## 2.2 Adversarial Detection

Zhou et al. (2019) trained a BERT model to identify the perturbed words. Mozes et al. (2021) focused on low-frequency words that were likely changed by an attacker. Wang et al. (2022b) detected the adversarial text via its synonyms. Yoo et al. (2022) and Biju et al. (2022) distinguished adversarial text from original text based on density estimation and attention input, respectively. Raina and Gales (2022) showed that adversarial text induces residues (larger components than original text) in PCA eigenvectors. Wang et al. (2022a) fine-tuned a transformer model on compliant and adversarial text, which is not required to fool a target model; this change in the requirement efficiently defends against a wide range of adversarial attacks. Mosca et al. (2022) extracted the word importance as a feature to classify original and adversarial text.
Compared with VoteTRANS, previous detectors need adversarial and/or original data to train their models, or optimize their models on training data to satisfy some requirements (such as the FPR in Yoo et al. (2022)). These methods are often more suitable for word-based than character-based attacks. On the other hand, VoteTRANS can be applied to various kinds of attacks at both the word level, as in *PWWS*, and the character level, as in *TextBugger*, as well as to other modifications, such as deletion (as in *Input-Reduction* or *CLARE*) and insertion (as in BAE). For example, *CLARE* deletes a word by merging it with a nearby word. When applying the same merging operator to each input word, VoteTRANS will change the attacked words and observe the change in target predictions to detect the attacked text.

## 3 Voting On The Hard Labels Of Transformations (VoteTRANS)

Problem statement: We follow the notation of the TextFooler paper (Jin et al., 2020). Given N texts X = {X1, X2, . . . , XN} corresponding to N labels Y = {Y1, Y2, . . . , YN}, a target model F : X → Y maps the input space X to the label space Y. Adversarial text Xadv generated from input X ∈ X should comply with the following constraints:

$$F(X_{\mathrm{adv}})\neq F(X),\quad\mathrm{and}\quad\mathrm{Sim}(X_{\mathrm{adv}},X)\geq\epsilon,\tag{1}$$

where Sim(Xadv, X) measures the similarity between adversarial text Xadv and original text X, and ϵ is the minimum similarity. Our objective is to determine whether input text X is adversarial or original text. The process of VoteTRANS is depicted in Figure 2, and the overall algorithm is summarized in Algorithm 1.

![2_image_0.png](2_image_0.png) ![2_image_1.png](2_image_1.png)

Model detail: To process input text X, we use a target model F and an auxiliary attack A (e.g., PWWS). An optional word ratio α can be used to speed up the detection process. Moreover, a support model list F sup is used to improve the performance. The support models and the target model should solve the same task, such as sentiment analysis.

First, we create a list W including the input words sorted by their importance, using the word importance score estimation in the auxiliary attack A (lines 2-5). For example, *PWWS* calculates the change in the predictions before and after replacing a certain word with an unknown word to estimate word importance. The impact of word importance is shown in Appendix B. Second, we select the top words W∗ (line 6) with the word ratio threshold α (100% by default). While VoteTRANS maintains its performance with α = 100%, a smaller α remarkably improves the run time with only a slight change in performance, as shown in Figures 4a and 4b in the experimental results. Third, we obtain a transformed set Wtrans for each word wj in W∗ (line 9) using the word transformation in A. For example, *PWWS* uses synonyms from WordNet to form the transformed set. Fourth, we replace each transformed word in Wtrans for the corresponding wj (line 11) in X and check the second constraint in Equation 1 (line 12). The Sim(·) and ϵ are provided by A. This step is mainly used to preserve the original meaning of the input in the transformed text. For example, PWWS uses stop word checking for this step. Fifth, we construct Y trans containing the label predictions for each valid X′ from the target model F and the support model list F sup (lines 13-16).
We then use the majority vote on Y trans to obtain the top majority classes Y′ with the highest occurrence in Y trans. Finally, we check whether the input text X is adversarial or original based on the input label Y and the majority set Y′. If Y does not belong to Y′, or Y belongs to a Y′ that contains more than one majority class, we decide that X is adversarial text (lines 20-22). If we cannot decide that X is adversarial text after checking all words in W∗, X is considered original text.

Algorithm 1: Adversarial text detection by VoteTRANS.
Input: Input text X = {w1, w2, · · · }; Target model F; Auxiliary attack A; Optional: {Word ratio α (100% as default); Support models F sup = {F sup 1, F sup 2, · · · } (empty as default)}
Output: Adversarial text detection (True/False)
1 Y ← F(X)
2 for each word wi in X do
3   calculate importance score Iwi using A
4 end for
5 Create a list W of all words wi ∈ X by sorting in descending order of the importance score Iwi
6 W∗ ← obtain the top words from W with ratio α
7 for each word wj in W∗ do
8   Transformation label list Y trans = {}
9   Create transformation set Wtrans from wj using A
10   for each word w trans k in Wtrans do
11     X′ = replace wj with w trans k in X
12     if checking (Sim(X′, X) ≥ ϵ) is satisfied by using A then
13       Add F(X′) to list Y trans
14       for each F sup l in F sup do
15         Add F sup l(X′) to list Y trans
16       end for
17     end if
18   end for
19   Y′ ← majority vote on Y trans
20   if (Y ∉ Y′) or (Y ∈ Y′ and Y′ has more than one majority class) then
21     return True ▷ adversarial text
22   end if
23 end for
24 return False ▷ original text

## 4 Evaluation

## 4.1 Comparison

We follow recent work (Mosca et al., 2022) to conduct the same experiments, including attacks, datasets, models, parameter settings, evaluation metrics, number of training/testing samples, etc. In particular, evaluation was performed with adversarial text generated by four attacks2, including *PWWS*, TextFooler, IGA, and BAE. These attacks targeted four common models3 (LSTM, CNN, DistilBERT, and BERT) on IMDB (235.72 words/text), Yelp Polarity (YELP) (135.21 words/text), AG News (38.78 words/text), and Rotten Tomatoes movie reviews (RTMR) (18.65 words/text). We compared VoteTRANS with FGWS (Mozes et al., 2021) and WDR (Mosca et al., 2022). FGWS is claimed to be the state of the art in all existing detection papers that we know of; WDR is one of the most recently published works. Both FGWS and WDR need to be trained on the adversarial text generated from a specific configuration (model, data, attack). Table 1 shows the results on two configurations: (DistilBERT, IMDB, *PWWS*) with 3000 training samples (Table 1a)4 and (DistilBERT, AG News, *PWWS*) with 2400 training samples (Table 1b). We generated adversarial text from the testing sets and put it aside with the corresponding original text to form balanced samples. However, the two configurations (DistilBERT, RTMR, IGA) and (DistilBERT, AG News, IGA) had only 480 and 446 balanced samples, respectively. We thus chose 500 balanced samples as testing data for the other configurations.

2All attacks ran with the default settings from the TextAttack framework.
3All pretrained models were reused from the TextAttack framework.
4Since VoteTRANS does not need to train, it produces the same performance with any data split of the training configuration. We average the performance of FGWS and WDR over 30 different data splits and ignore their variance scores for simplification.
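Before turning to the comparison results, the detection procedure of Algorithm 1 can also be read as a short, framework-agnostic Python sketch. This is an illustrative rendering of the pseudocode rather than the authors' released implementation: the callables `target`, `supports`, `transform`, `importance`, and `is_similar` stand in for the target model F, the support list F sup, and the word transformation, importance scoring, and similarity check supplied by the auxiliary attack A.

```python
from collections import Counter

def vote_trans(words, target, transform, importance, is_similar,
               supports=(), alpha=1.0):
    """Detect adversarial text by voting on hard labels of transformations.

    words:      input text as a list of tokens
    target:     callable mapping a token list to a hard label (target model F)
    transform:  callable returning candidate replacements for one word (from A)
    importance: callable scoring the importance of word i in the text (from A)
    is_similar: callable checking Sim(X', X) >= eps (from A)
    supports:   optional extra classifiers that vote alongside the target model
    alpha:      ratio of the most important words to examine (1.0 = all words)
    """
    y = target(words)                                      # line 1: hard label of the input
    order = sorted(range(len(words)),                      # lines 2-5: sort words by importance
                   key=lambda i: importance(words, i), reverse=True)
    top = order[:max(1, int(alpha * len(words)))]          # line 6: keep the top alpha ratio

    for j in top:                                          # line 7
        votes = []                                         # line 8
        for candidate in transform(words, j):              # lines 9-10
            perturbed = words[:j] + [candidate] + words[j + 1:]   # line 11
            if not is_similar(perturbed, words):           # line 12: meaning-preservation check
                continue
            votes.append(target(perturbed))                # line 13
            votes.extend(model(perturbed) for model in supports)  # lines 14-16

        if not votes:                                      # no valid transformation for this word
            continue
        counts = Counter(votes)
        best = max(counts.values())
        majority = {label for label, c in counts.items() if c == best}  # line 19
        # line 20: disagreement with the majority vote, or a tie, flags adversarial text
        if y not in majority or len(majority) > 1:
            return True                                    # adversarial
    return False                                           # original
```

In practice the paper instantiates the transformation, importance, and similarity components with an existing TextAttack recipe such as *PWWS*, so the detector only needs black-box, hard-label access to the classifiers it queries.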
VoteTRANS uses an auxiliary attack to detect adversarial text without training. The auxiliary attack may be the same as the attack used to generate the adversarial text, namely, VoteTRANSsame. We also demonstrate the capability of VoteTRANS using a fixed auxiliary attack (*PWWS*) to detect adversarial text generated from other attacks, namely, VoteTRANSdiff. Both VoteTRANSsame and VoteTRANSdiff uses RoBERTa as a support for all experiments5. Other auxiliary attacks and supports will be discussed 5Since TextAttack does not support RoBERTa for YELP, we used ALBERT as the support instead. | Configuration | FGWS | WDR | VoteTRANSsame VoteTRANSdiff | | | | | | | | |------------------------------------------------------------------------|-----------------|-----------------|-------------------------------|--------|------|--------|-------|--------|------|--------| | Model | Data | Attack | F1 | Recall | F1 | Recall | F1 | Recall | F1 | Recall | | DistilBERT IMDB | PWWS | 89.5 | 82.7 | 92.1 | 94.2 | 96.9 | 98.4 | - | - | | | LSTM | IMDB | PWWS | 80.0 | 69.6 | 84.1 | 86.8 | 94.8 | 97.6 | - | - | | CNN | IMDB | PWWS | 86.3 | 79.6 | 84.3 | 90.0 | 95.4 | 98.8 | - | - | | BERT | IMDB | PWWS | 89.8 | 82.7 | 92.4 | 92.5 | 97.4 | 98.4 | - | - | | DistilBERT AG News PWWS | 89.5 | 84.6 | 93.1 | 96.1 | 95.3 | 97.6 | - | - | | | | DistilBERT IMDB | TextFooler 86.0 | 77.6 | 94.2 | 97.3 | 97.8 | 99.6 | 97.7 | 100.0 | | | | DistilBERT IMDB | IGA | 83.8 | 74.8 | 88.5 | 95.5 | 95.8 | 99.2 | 95.9 | 99.2 | | | BERT | YELP | PWWS | 91.2 | 85.6 | 89.4 | 85.3 | 97.4 | 98.4 | - | - | | BERT | YELP | TextFooler 90.5 | 84.2 | 95.9 | 97.5 | 97.4 | 98.8 | 97.0 | 98.0 | | | DistilBERT RTMR | PWWS | 78.9 | 67.8 | 74.1 | 85.1 | 83.8 | 88.0 | - | - | | | DistilBERT RTMR | IGA | 68.1 | 55.2 | 70.4 | 90.2 | 86.9 | 90.4 | 80.5 | 82.4 | | | DistilBERT IMDB | BAE | 65.6 | 50.2 | 88.0 | 96.3 | 97.7 | 100.0 | 96.3 | 99.2 | | | DistilBERT AG News BAE | 55.8 | 44.0 | 81.0 | 95.4 | 85.8 | 93.2 | 85.3 | 92.8 | | | | DistilBERT RTMR | BAE | 29.4 | 18.5 | 68.5 | 82.2 | 79.1 | 80.8 | 65.5 | 60.8 | | | Overall average | 77.5 | 68.4 | 85.4 | 91.7 | 93.0 | 95.7 | - | - | | | | (a) FGWS and WDR trained on configuration (DistilBERT, IMDB, PWWS). 
| | | | | | | | | | | | Configuration | FGWS | WDR | VoteTRANSsame VoteTRANSdiff | | | | | | | | | Model | Data | Attack | F1 | Recall | F1 | Recall | F1 | Recall | F1 | Recall | | DistilBERT AG News PWWS | 89.5 | 84.6 | 93.6 | 94.8 | 95.3 | 97.6 | - | - | | | | LSTM | AG News PWWS | 88.9 | 84.9 | 94.0 | 94.2 | 95.5 | 98.4 | - | - | | | CNN | AG News PWWS | 90.6 | 87.6 | 91.1 | 91.2 | 96.5 | 99.2 | - | - | | | BERT | AG News PWWS | 88.7 | 83.2 | 92.5 | 93.0 | 94.6 | 98.0 | - | - | | | DistilBERT IMDB | PWWS | 89.5 | 82.7 | 91.4 | 93.0 | 96.9 | 98.4 | - | - | | | DistilBERT AG News TextFooler 87.0 | 79.4 | 95.7 | 97.3 | 96.7 | 98.4 | 95.7 | 98.0 | | | | | DistilBERT AG News IGA | 68.6 | 58.3 | 86.7 | 93.6 | 96.5 | 99.2 | 93.2 | 95.6 | | | | BERT | YELP | PWWS | 91.2 | 85.6 | 86.2 | 77.2 | 97.4 | 98.4 | - | - | | BERT | YELP | TextFooler 90.5 | 84.2 | 95.4 | 94.7 | 97.4 | 98.8 | 97.0 | 98.0 | | | DistilBERT RTMR | PWWS | 78.9 | 67.8 | 75.8 | 78.5 | 83.8 | 88.0 | - | - | | | DistilBERT RTMR | IGA | 68.1 | 55.2 | 73.7 | 85.4 | 86.9 | 90.4 | 80.5 | 82.4 | | | DistilBERT IMDB | BAE | 65.6 | 55.2 | 88.1 | 97.0 | 97.7 | 100.0 | 96.3 | 99.2 | | | DistilBERT AG News BAE | 55.8 | 44.0 | 86.4 | 94.5 | 85.8 | 93.2 | 85.3 | 92.8 | | | | DistilBERT RTMR | BAE | 29.4 | 18.5 | 71.0 | 75.2 | 79.1 | 80.8 | 65.5 | 60.8 | | | Overall average | 77.1 | 68.8 | 86.4 | 89.4 | 92.4 | 95.2 | - | - | | | | (b) FGWS and WDR trained on configuration (DistilBERT, AG News, PWWS). | | | | | | | | | | | ## Later. In general, WDR is better than FGWS when they are trained on (DistilBERT, IMDB, *PWWS*) as shown in Table 1a. While FGWS achieves an F1 score of 77.5 and recall of 68.4 on average, WDR exhibits improved F1 score and recall metrics of 85.4 and 91.7, respectively. All F1 scores of VoteTRANSsame outperform those of WDR. VoteTRANSsame also exhibits a recall improved by 4.0 points from 91.7 of WDR. VoteTRANSdiff is competitive with VoteTRANSsame for medium (AG News) and long text (IMDB), while the performance of VoteTRANSdiff on short text (RTMR) is degraded, similar to other detectors. 
| Scenario | Method | F1 | Recall | |---------------------------------------------------------|----------|-------|----------| | VoteTRANSdiff (BAE) without support | 94.4 | 93.6 | | | VoteTRANSdiff (DeepWordBug) without support | 94.2 | 97.2 | | | VoteTRANSdiff (BAE) with RoBERTa as support | 96.7 | 99.6 | | | VoteTRANSdiff (DeepWordBug) with RoBERTa as support | 95.2 | 100.0 | | | VoteTRANSdiff (Checklist) with RoBERTa as support | 83.9 | 74.0 | | | VoteTRANSdiff (Input-Reduction) with RoBERTa as support | 92.8 | 100.0 | | | VoteTRANSdiff (A2T) with RoBERTa as support | 95.7 | 96.8 | | | VoteTRANSdiff (IGA) with RoBERTa as support | 95.7 | 97.2 | | | VoteTRANSdiff (Pruthi) with RoBERTa as support | 95.8 | 99.6 | | | VoteTRANSdiff (Alzantot) with RoBERTa as support | 95.9 | 97.6 | | | VoteTRANSdiff (PSO) with RoBERTa as support | 96.1 | 97.6 | | | VoteTRANSdiff (Faster-Alzantot) with RoBERTa as support | 96.4 | 97.6 | | | VoteTRANSdiff (TextBugger) with RoBERTa as support | 96.5 | 98.8 | | | VoteTRANSdiff (TextFooler) with RoBERTa as support | 96.5 | 98.0 | | | VoteTRANSdiff (Kuleshov) with RoBERTa as support | 96.6 | 97.6 | | | Unknown Attack | FGWS | 90.6 | 87.6 | | WDR | 91.1 | 91.2 | | | VoteTRANSsame (PWWS) without support | 95.2 | 94.8 | | | VoteTRANSsame (PWWS) with LSTM as support | 95.9 | 98.8 | | | VoteTRANSsame (PWWS) with RoBERTa as support | 96.5 | 99.2 | | | VoteTRANSsame (PWWS) with LSTM+RoBERTa as supports | 97.4 | 98.4 | | | Known Attack | | | | Table 2: Detecting adversarial text generated by *PWWS* targeting CNN on AG News. | Category | RTMR(Adv/Org) AG News(Adv/Org) IMDB(Adv/Org) | | | |---------------------------------------|------------------------------------------------|-----------------|------------------| | PWWS attack time | 0.77 | 2.84 | 26.36 | | FGWS/WDR | 0.04 | 0.08 | 1.01 | | VoteTRANSsame without support | 0.03(0.02/0.04) | 0.08(0.03/0.13) | 2.00(0.67/3.33) | | VoteTRANSsame with RoBERTa as support | 0.69(0.28/1.09) | 1.42(0.15/2.69) | 8.94(0.37/17.52) | Table 3: Run time for attacking original text by *PWWS* and detecting adversarial text generated by *PWWS* targeting the CNN model. In detail, we cluster the experimental results into three groups based on their performances. The first group with high performances includes configurations from similar attacks (PWWS, *TextFooler*, and IGA) on long (IMDB and YELP) and medium text (AG News). The second group includes configurations from *PWWS* and IGA on short text (RTMR). The last group includes the remaining configurations related to BAE. While BAE uses flexible synonyms based on word context, the other attacks use fixed synonyms for a certain word. All of the detectors work well for the lengthy text of the first group, especially VoteTRANSsame, with both scores being between 94.8 and 99.6. However, less information can be extracted from the short text in the second group. While FGWS is competitive with WDR in this group, VoteTRANSsame performs better, especially in the F1 score. In the last group, BAE remarkably affects the detectors, especially with FGWS in medium and short text. FGWS is defeated, with scores less than 50.0 (random guess). VoteTRANSsame still maintains its high performance for long text and is competitive with WDR for medium and short text. Table 1b shows experiments where FGWS and WDR are trained on (DistilBERT, AG News, PWWS). We train WDR with other word-based attacks including TextFooler, IGA, and BAE and reach similar results as shown in Appendix C. 
While FGWS and WDR obtain scores less than 90.0 on average, VoteTRANSsame retains its performance, with 92.4 F1 and 95.2 recall. This demonstrates the resilience of VoteTRANSsame across various models, datasets, and attacks.

## 4.2 Ablation Studies

We studied variants of VoteTRANS and compared them with FGWS and WDR. These detectors identified adversarial texts generated by PWWS targeting the CNN on AG News (Table 2). VoteTRANS is presented in two scenarios: an unknown attack (VoteTRANSdiff) and a known attack (VoteTRANSsame).

For an unknown attack, we use the word-based BAE and the character-based *DeepWordBug* as the auxiliary attacks for VoteTRANSdiff without support. VoteTRANSdiff achieves high performance with both auxiliaries. Other auxiliaries for VoteTRANSdiff without support are mentioned in Appendix D. The use of the RoBERTa model as support boosts the overall performance. Other attacks from TextAttack were also evaluated and are listed in increasing order of F1 score. Among these attacks, *BERT-Attack* and CLARE are ignored because both use the same masked language model used in BAE, and the three attacks reached similar performances. *HotFlip* is not supported for CNN. The results show that VoteTRANSdiff can use any attack as the auxiliary, with all scores being greater than or equal to 92.8, except for those of Checklist. *Checklist* generates adversarial text independently of any model and causes low performance, as mentioned in its original paper (Ribeiro et al., 2020). The results from various auxiliaries demonstrate that VoteTRANSdiff can detect adversarial text without attack information.

For a known attack, VoteTRANSsame without support outperforms FGWS and WDR. VoteTRANSsame is further improved by using a support model such as LSTM or RoBERTa. A stronger model (RoBERTa) helps VoteTRANSsame more than LSTM. Both can also be used together to support VoteTRANSsame and improve the F1 score, but the adversarial recall is slightly affected. Other available supports from TextAttack for AG News are mentioned in Appendix E.

While character-based attacks are still a challenge for both WDR and FGWS, as mentioned in their papers, VoteTRANS still detects such adversarial text with up to 97.6% F1 and 99.6% recall, as shown in Appendix F. This indicates the flexibility of VoteTRANS across various attack levels.

## 4.3 Run Time

Since VoteTRANS uses an auxiliary attack to detect adversarial text, we compared the run time of VoteTRANSsame with that of the corresponding attack. Table 3 shows a comparison for adversarial text generated by *PWWS* targeting the CNN model on short text (RTMR), medium text (AG News), and long text (IMDB); VoteTRANSdiff, other attacks, and other models reached similar ratios. We also compared the detection times obtained from WDR/FGWS, which both use the target model to predict the text n times, where n is the number of words in an input text. VoteTRANSsame is reported without support and with RoBERTa support. We also separately report the detection time for adversarial and original text for VoteTRANSsame, while the other detectors take the same time for both.

The run times of both the attack and the detectors are affected by the text length. FGWS and WDR need less than 2 seconds to detect a text. VoteTRANSsame processes adversarial text much faster than original text because most of the adversarial text is identified early with lines 20-22 of Algorithm 1.
Thanks to line 12 of Algorithm 1, which filters out many transformed texts, VoteTRANSsame without support runs in a time similar to FGWS and WDR for short and medium text. For long text, VoteTRANSsame needs 0.67 seconds for adversarial text. VoteTRANSsame with RoBERTa as support even completes processing the adversarial text from IMDB in only 0.37 seconds. This demonstrates that RoBERTa accelerates adversarial text processing.

VoteTRANSsame runs faster when the word ratio α in Algorithm 1 is decreased. α determines the number of words that are processed. While VoteTRANSsame with RoBERTa as support processes RTMR text in a reasonable time, we evaluate the change in α for processing AG News and IMDB text, as shown in Figure 4a and Figure 4b, respectively. For AG News, although the detection time is mostly steady at 0.25 seconds with α between 12% and 19%, the F1/recall remarkably increases from 93.9/91.6 to 95.4/95.2. The run time worsens to 1.42 seconds with the largest α of 100%, but the F1 score is only slightly increased. For IMDB text, VoteTRANSsame achieves a high performance even with a small α. When α is increased from 3% to 10%, the run time/F1/recall increases from 0.28/89.1/81.6 to 0.91/96.6/97.6. The recall is improved up to 98.8 with a maximum α of 100%, but the corresponding F1 score drops slightly to 95.4. The F1 score is affected by some of the original text being misclassified as adversarial text.

![7_image_0.png](7_image_0.png)

In particular, when processing 12% and 5% of the medium text (AG News) and long text (IMDB), respectively, VoteTRANSsame with RoBERTa as support takes approximately 0.21 seconds and 0.47 seconds, competitive with the 0.08 seconds and 1.01 seconds of FGWS/WDR, while keeping F1/recall scores (93.9/91.6 for AG News and 94.7/92.4 for IMDB) higher than FGWS (90.6/87.6 and 86.3/79.6) and WDR (91.1/91.2 and 84.3/90.0). With these ratios, VoteTRANSsame with RoBERTa only needs 32.0 and 61.1 predictions for AG News and IMDB, respectively (see Appendix G for other ratios).

## 4.4 Discussion

Detection with high-confidence text: VoteTRANS still keeps a 79.2% F1 score when detecting adversarial text from AG News whose confidence is greater than or equal to 90%. In contrast, WDR drops to 25.0% (see other confidences and IMDB text in Appendix H).

Detection with only hard labels: VoteTRANS only uses soft labels of predictions from a target model, via an auxiliary attack, to calculate importance scores (line 3 of Algorithm 1). However, these scores are only used to accelerate detection by selecting the top words in line 6. Without these scores, VoteTRANS achieves an identical performance by processing all words. Therefore, VoteTRANS is compatible with any target model that only provides hard labels.

Parallel processing: An adversarial attack needs to perturb individual words of the input text in sequence to optimize the perturbed text in each step until a target model is fooled. On the other hand, VoteTRANS can create independent transformation sets for individual words and process them in parallel. VoteTRANS can therefore accelerate the process with parallel or distributed computing.

![7_image_1.png](7_image_1.png)

Table 4: Success rate under an adaptive attack.

Adaptive attack: An attacker may be aware of the existence of a detector and try to fool both the target and the detector. We evaluate *PWWS* targeting CNN models on AG News and IMDB as shown in Table 4.
Although *PWWS* strongly attacks the CNN models with more than 88% of success rate, it hardly bypasses detectors, especially VoteTRANS. ## 5 Conclusion We propose VoteTRANS, a method for detecting adversarial text without training by voting on hard labels of text after transformation. VoteTRANS outperforms state-of-the-art detectors under various attacks, models, and datasets. Moreover, VoteTRANS is flexible at detecting a restricted scenario when an attack is unknown. VoteTRANS also straightforwardly detects adversarial text from a new attack without modifying the architecture. ## 6 Limitations Auxiliary attack and supports: VoteTRANS without support works well with an auxiliary attack which is the same with the target attack. In contrast, VoteTRANS with support achieves stable results with any auxiliary attack but it runs slower. Short text and susceptible text: A short text is more difficult to detect than a long text. Susceptible text may bypass VoteTRANS as mentioned in Appendix I. However, the short text and susceptible text are often unnatural and unclear meaning, respectively, so they are easily recognized by humans. Therefore, we recommend that humans recheck suspicious text with an abnormal ratio in the voting process of VoteTRANS (line 19 of Algorithm 1). Beyond word-based attacks: We detect adversarial text up to word-based attacks, which change a few characters or words and are often imperceptible to humans. Other attacks remarkably affect the naturalness with a large change such as sentencebased attacks as in Iyyer et al. (2018). Beyond text classification: We evaluate VoteTRANS on adversarial attacks targeting text classification. In contrast, the other tasks do not well-define a standard for generating adversarial text. For example, attacks targeting sequence models need to determine a threshold for *BLEU* score, which is aimed to minimize, but whether the score is sufficient for an adversarial text is still in question. ## Acknowledgments This work was partially supported by JST CREST Grants JPMJCR20D3, Japan. ## References Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In *Proceedings of the Conference on Empirical* Methods in Natural Language Processing (EMNLP), pages 2890–2896. Emil Biju, Anirudh Sriram, Pratyush Kumar, and Mitesh M Khapra. 2022. Input-specific attention subnetworks for adversarial detection. In *Findings of* the Association for Computational Linguistics (ACL), pages 31–44. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. Hotflip: White-box adversarial examples for text classification. In *Proceedings of the 56th* Annual Meeting of the Association for Computational Linguistics (ACL), pages 31–36. Shi Feng, Eric Wallace, Alvin Grissom II, Pedro Rodriguez, Mohit Iyyer, and Jordan Boyd-Graber. 2018. Pathologies of neural models make interpretation difficult. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3719–3728. Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In Proceedings of the IEEE Security and Privacy Workshops (SPW), pages 50–56. Siddhant Garg and Goutham Ramakrishnan. 2020. Bae: Bert-based adversarial examples for text classification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6174–6181. 
Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 16th Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 1875–1885. Robin Jia, Aditi Raghunathan, Kerem Göksel, and Percy Liang. 2019. Certified robustness to adversarial word substitutions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4120–4133. Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is bert really robust? a strong baseline for natural language attack on text classification and entailment. In *Proceedings of the 34th* Conference on Artificial Intelligence (AAAI), pages 8018–8025. Volodymyr Kuleshov, Shantanu Thakoor, Tingfung Lau, and Stefano Ermon. 2018. Adversarial examples for natural language classification problems. In *OpenReview*. Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, and William B Dolan. 2021. Contextualized perturbation for textual adversarial attack. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 5053– 5069. J Li, S Ji, T Du, B Li, and T Wang. 2019. Textbugger: Generating adversarial text against real-world applications. In Proceedings of the 26th Annual Network and Distributed System Security Symposium (NDSS). Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. Bert-attack: Adversarial attack against bert using bert. In *Proceedings of the* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6193–6202. John Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in nlp. In Proceedings of the Conference on Empirical Methods in Natural Language Processing: System Demonstrations (EMNLP), pages 119–126. Edoardo Mosca, Shreyash Agarwal, Javier RandoRamirez, and Georg Groh. 2022. " that is a suspicious reaction!": Interpreting logits variation to detect nlp adversarial attacks. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (ACL), pages 7806–7816. Maximilian Mozes, Pontus Stenetorp, Bennett Kleinberg, and Lewis Griffin. 2021. Frequency-guided word substitutions for detecting textual adversarial examples. In *Proceedings of the 16th Conference of* the European Chapter of the Association for Computational Linguistics (EACL), pages 171–186. Danish Pruthi, Bhuwan Dhingra, and Zachary C Lipton. 2019. Combating adversarial misspellings with robust word recognition. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics (ACL), pages 5582–5591. Vyas Raina and Mark Gales. 2022. Residue-based natural language adversarial attack detection. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 3836–3848. Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1085–1097. Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of nlp models with checklist. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 4902–4912. Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, and Hai Zhao. 2022a. Distinguishing non-natural from natural adversarial samples for more robust pre-trained language model. In *Findings of the Association for* Computational Linguistics (ACL), pages 905–915. Xiaosen Wang, Hao Jin, Yichen Yang, and Kun He. 2021. Natural language adversarial defense through synonym encoding. In *Proceedings of the 37th* Conference on Uncertainty in Artificial Intelligence (UAI), pages 823–833. Xiaosen Wang, Yifeng Xiong, and Kun He. 2022b. Randomized substitution and vote for textual adversarial example detection. In Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence (UAI). Jin Yong Yoo and Yanjun Qi. 2021. Towards improving adversarial training of NLP models. In *Findings of the Association for Computational Linguistics* (EMNLP), pages 945–956. KiYoon Yoo, Jangho Kim, Jiho Jang, and Nojun Kwak. 2022. Detection of word adversarial examples in text classification: Benchmark and baseline via robust density estimation. In Findings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL), pages 3656–3672. Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. ![9_image_0.png](9_image_0.png) Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 6066–6080. Yichao Zhou, Jyun-Yu Jiang, Kai-Wei Chang, and Wei Wang. 2019. Learning to discriminate perturbations for blocking adversarial attacks in text classification. In *Proceedings of the Conference on Empirical* Methods in Natural Language Processing (EMNLP), pages 4906–4915. ## A Prediction Change Under One-Word Transformation The example in Figure 1 shows that the prediction of an adversarial text is more susceptible than that of an original text. We verify this observation on other AG News texts and IMDB movie reviews. In particular, we inspect the whole testing set containing 500 balanced samples of original and adversarial text generated by *PWWS* targeting CNN. For each sample, we transform a word by synonyms and measure the rate of prediction change from the target CNN. The maximum rate among words is represented for the sample and is plotted in histogram graphs (Figure 4). The maximum rate of an original text is often lower than that of an adversarial text in both AG News and IMDB. ![10_image_0.png](10_image_0.png) | Attack | WDR | VoteTRANS | | | |------------|--------|-------------|--------|------| | F1 | Recall | F1 | Recall | | | TextFooler | 95.4 | 95.2 | 96.5 | 98.0 | | IGA | 94.1 | 95.2 | 95.7 | 97.2 | | BAE | 88.7 | 82.0 | 96.7 | 99.6 | Table 5: WDR training on other word-based attacks. ## B Word Importance Score We change word ratio α with 10% step to show the impact of word importance score in VoteTRANS (line 2-6 in Algorithm 1). The experiment is conducted with adversarial text generated by *PWWS* targeting a CNN model on IMDB. To eliminate the impact of support models, we evaluate VoteTRANSsame without support as shown in Figure 5. We also compare between VoteTRANSsame with word importance score and that with random score. VoteTRANSsame with word importance score achieves better than that with random score across all α, especially with small α. It demonstrates the impact of word importance score in VoteTRANS. 
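As a concrete illustration of the importance scores examined in this appendix, the following sketch implements the PWWS-style estimation described in Section 3: each word is scored by the prediction change when it is replaced with an unknown token, and only the top α ratio of words is kept (lines 2-6 of Algorithm 1). The helper names and the generic `predict_proba` callable are assumptions for illustration, not the paper's implementation.

```python
def unk_saliency(words, predict_proba, unk_token="[UNK]"):
    """Score each word by how much the predicted probability of the current
    class drops when that word is replaced by an unknown token."""
    base = predict_proba(words)                            # class probabilities for the full text
    label = max(range(len(base)), key=base.__getitem__)    # currently predicted class
    scores = []
    for i in range(len(words)):
        masked = words[:i] + [unk_token] + words[i + 1:]
        scores.append(base[label] - predict_proba(masked)[label])
    return scores

def top_alpha_indices(words, predict_proba, alpha=1.0):
    """Return word indices sorted by saliency, truncated to the top alpha ratio
    (the word ratio alpha used in Algorithm 1)."""
    scores = unk_saliency(words, predict_proba)
    order = sorted(range(len(words)), key=lambda i: scores[i], reverse=True)
    return order[:max(1, int(alpha * len(words)))]
```

The ablation in this appendix then amounts to comparing this saliency-based ordering against a random permutation of the word indices for the same α.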
## C Wdr **Training On Other Word-Based** Attacks We train WDR on other main word-based attacks including TextFooler, IGA, and BAE to detect adversarial text generated by *PWWS* as shown in Table 5. | Method | F1 | Recall | |---------------------------------|------|----------| | VoteTRANSdiff (Checklist) | 41.4 | 26.4 | | VoteTRANSdiff (PSO) | 88.1 | 81.6 | | VoteTRANSdiff (A2T) | 88.2 | 82.0 | | VoteTRANSdiff (Faster-Alzantot) | 90.0 | 84.8 | | VoteTRANSdiff (IGA) | 90.1 | 86.0 | | VoteTRANSdiff (Kuleshov) | 90.8 | 86.4 | | VoteTRANSdiff (TextFooler) | 91.1 | 88.0 | | VoteTRANSdiff (Input-Reduction) | 93.8 | 97.6 | | VoteTRANSdiff (Pruthi) | 94.3 | 96.8 | | VoteTRANSdiff (TextBugger) | 94.6 | 97.2 | The adversarial text in both testing and training data targets the CNN model on AG News. Since VoteTRANS performs without training, we use the corresponding attack as an auxiliary. WDR is more compatible with *TextFooler* and IGA than BAE. In contrast, VoteTRANS achieves similar performance across three attacks. ## D Votetransdiff **Without Support** Besides BAE and *DeepWordBug* as mentioned in Table 2, we show other auxiliary attacks in Table 6. VoteTRANSdiff efficiently detects adversarial text except with *Checklist*, which generates independent adversarial text with any model and does not focus on the performance. ## E Votetranssame **With Other** Supports Besides LSTM and RoBERTa as mentioned in Table 2, we show all other available support models for AG News from TextAttack and their combination in Table 7. VoteTRANS achieves performances of at least 97.5 with individual supports and their combination. Similar to the results in Table 2, multiple supports reach more stable results than an individual support. ## F **Detecting Character-Based Attacks** We conduct experiments to detect adversarial text from all character-based attacks from TextAttack compatible with the CNN model as shown in Table 8. VoteTRANS achieves similar performances for AG News and IMDB. It demonstrates the resilience of VoteTRANS in detecting characterbased attacks from different extents and tasks (medium text with multiclass classification as in | Method | F1 | Recall | |-------------------------------------------------------------|------|----------| | VoteTRANSsame (PWWS) with BERT as support | 97.5 | 100.0 | | VoteTRANSsame (PWWS) with ALBERT as support | 97.6 | 99.6 | | VoteTRANSsame (PWWS) with DistilBERT as support | 97.5 | 99.6 | | VoteTRANSsame (PWWS) with BERT+ALBERT as support | 98.4 | 98.4 | | VoteTRANSsame (PWWS) with BERT+DistilBERT as support | 98.2 | 98.8 | | VoteTRANSsame (PWWS) with ALBERT+DistilBERT as support | 98.2 | 98.8 | | VoteTRANSsame (PWWS) with BERT+ALBERT+DistilBERT as support | 98.4 | 99.2 | Table 7: Other available support models from TextAttack for VoteTRANSsame. Table 8: Detecting all character-based attacks compatible with CNN model from TextAttack. 
| Dataset | Method | F1 | Recall | |-----------------------------------------------------|---------------------------------------------|------|----------| | VoteTRANSsame (DeepWordBug) without support | 94.3 | 89.2 | | | VoteTRANSsame (Pruthi) without support | 66.4 | 76.8 | | | VoteTRANSsame (TextBugger) without support | 92.7 | 93.6 | | | VoteTRANSsame (DeepWordBug) with RoBERTa as support | 95.0 | 98.8 | | | VoteTRANSsame (Pruthi) with RoBERTa as support | 80.3 | 98.0 | | | VoteTRANSsame (TextBugger) with RoBERTa as support | 96.1 | 98.8 | | | AG News | VoteTRANSsame (DeepWordBug) without support | 90.5 | 93.2 | | VoteTRANSsame (Pruthi) without support | 77.8 | 90.8 | | | VoteTRANSsame (TextBugger) without support | 92.1 | 95.2 | | | VoteTRANSsame (DeepWordBug) with RoBERTa as support | 93.3 | 99.6 | | | VoteTRANSsame (Pruthi) with RoBERTa as support | 82.0 | 98.4 | | | VoteTRANSsame (TextBugger) with RoBERTa as support | 97.6 | 99.6 | | | IMDB | | | | AG News and long text with binary classification as in IMDB). ## G Votetrans **Complexity** Let N, M, and K be the number of words, the number of transformations for each word, and the number of models used in Algorithm 1. The worst-case of VoteTRANS complexity is O(N × M × K), approximately with the number of predictions on the K models. For example, if VoteTRANS processes AG News using *PWWS*, CNN, and RoBERTa as auxiliary, target, and support; in this case, N, M, and K are 42.6, 10.7, and 2, respectively. N of IMDB is increased to 241.9 while other values are unchanged. Theoretically, the number of predictions is 910.6 (AG News) and 5165.6 (IMDB). However, this number is remarkably reduced by the constraint checking (line 12) and early stopping (line 21) in Algorithm 1. As a result, it is reduced to 216.3 (76.2%) and 1151.3 (77.7%) predictions as shown in Figures 6a and 6b, respectively. VoteTRANS can also adjust the number of predictions suitable for the resource capacity by using small α. For example, α at 12% and 5% needs 32.0 and 61.1 predictions while keeping higher per- ![11_image_0.png](11_image_0.png) formance on the existing works as mentioned in Section 4.3. ![12_image_0.png](12_image_0.png) ## H Detection With High Confidence Text We evaluate WDR and VoteTRANS on detecting high confidence of adversarial text, which is generated by *PWWS* targeting CNN models on AG News and IMDB. PWWS attacks the CNN model until overcoming minimum confidence. Since any confidence of AG News (4 classes) and IMDB (2 classes) is greater than 25% and 50%, respectively, we set minimum confidences starting at 30% and 60% with 10% step. While the minimum confidences at 80% and 90% on AG News have 71 and 19 adversarial texts, respectively, other confidences have sufficient 500 balanced samples. While WDR and VoteTRANS achieve similar F1 on AG News until minimum confidence at 60%, WDR suddenly drops down to 25.0% at confidence 90%. In contrast, VoteTRANS still keeps 79.2% at this confidence as shown in Figure 7a. For IMDB, the margins between WDR and VoteTRANS gradually increase from 4.2% to 18.4%. It demonstrates the resilience of VoteTRANS in detecting adversarial text with high confidence. ## I Error Analysis We analyze the errors of VoteTRANSsame for the short text (MR). Here we especially focus on the results when DistilBERT and RoBERTa are the target and support models and the adversarial text is generated with *PWWS*. MR is harder to detect than long text as shown in Table 1. 
40.7% of all the errors that VoteTRANSsame fails to detect are caused by susceptible original text, which is easily attacked. For example, although the original text "*your children will be occupied for 72 minute*" is correctly predicted as negative by DistilBERT, 29 out of 39 perturbations with one-word replacements change the prediction into positive. It is opposite to our hypothesis as mentioned in line 38 in Section 1 and thus bypasses VoteTRANS. Its adversarial text "your child will be occupied for 72 minutes" also bypasses our detector (1 out of 42 perturbations change the prediction). However, such text is a little harmful because it has unclear sentiment and is unpopular (4.4% and 1.2% of MR and IMDB testing text, respectively). ## Acl 2023 Responsible Nlp Checklist A For Every Submission: A1. Did you describe the limitations of your work? Left blank. A2. Did you discuss any potential risks of your work? Left blank. A3. Do the abstract and introduction summarize the paper's main claims? Left blank. A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Left blank. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. 
D **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
sun-etal-2023-fusion
Fusion or Defusion? Flexible Vision-and-Language Pre-Training
https://aclanthology.org/2023.findings-acl.316
Existing approaches in the vision-and-language pre-training (VLP) paradigm mainly deploy either fusion-based encoders or dual-encoders, failing to achieve both effectiveness and efficiency in downstream multimodal tasks. In this paper, we build a flexible VLP model by incorporating cross-modal fusions into a dual-encoder architecture, where the introduced fusion modules can be easily decoupled from the dual encoder so as to switch the model to a fusion-free one. To better absorb cross-modal features from the fusion modules, we design a cross-modal knowledge transfer strategy along with other comprehensive pre-training tasks to guide the training process, which can further strengthen both the fusion-based and fusion-free representation learning. Extensive experiments conducted on various downstream vision-language tasks show that our proposed model is well-equipped with effectiveness as well as efficiency, demonstrating a superior performance compared with other strong VLP models.
## Fusion Or Defusion? Flexible Vision-And-Language Pre-Training Rongyi Sun1∗ , Ziran Li2∗ , Yifeng Ding1**, Qifan Wang**3, Jingang Wang2† , Hai-Tao Zheng1,4† , Wei Wu2and **Yunsen Xian**2 1Tsinghua Shenzhen International Graduate School, Tsinghua University 2Meituan 3Meta AI 4Peng Cheng Laboratory, Shenzhen, China {sry20,dingyf20}@mails.tsinghua.edu.cn, [email protected] {liziran02,wangjingang02,xianyunsen}@meituan.com [email protected], [email protected] ## Abstract Existing approaches in the vision-and-language pre-training (VLP) paradigm mainly deploy either fusion-based encoders or dual-encoders, failing to achieve both effectiveness and efficiency in downstream multimodal tasks. In this paper, we build a flexible VLP model by incorporating cross-modal fusions into a dualencoder architecture, where the introduced fusion modules can be easily decoupled from the dual encoder so as to switch the model to a fusion-free one. To better absorb cross-modal features from the fusion modules, we design a cross-modal knowledge transfer strategy along with other comprehensive pre-training tasks to guide the training process, which can further strengthen both the fusion-based and fusionfree representation learning. Extensive experiments conducted on various downstream visionlanguage tasks show that our proposed model is well-equipped with effectiveness as well as efficiency, demonstrating a superior performance compared with other strong VLP models. ## 1 Introduction With the great development of self-supervised pretraining in both the community of natural language processing (Devlin et al., 2019; Raffel et al., 2020) and computer vision (Dosovitskiy et al., 2021; Bao et al., 2022a), recent researches have also witnessed the success of Vision-and-Language Pretraining (VLP). VLP learns generic multimodal representations from large-scale image-text pairs and can be further finetuned on various downstream Vision-Language (VL) tasks, including image-text retrieval (Lin et al., 2014), visual question answering (Goyal et al., 2017), visual reasoning (Suhr et al., 2019) and visual entailment (Xie et al., 2019). The core of VLP resides in modeling the interaction between image and text representations. Most of the mainstreams first represent the input image ∗Equal contribution. †Corresponding authors. ![0_image_0.png](0_image_0.png) Figure 1: Different designs for vision-language fusions based on multi-head attention, "I" and "T" are short for image and text respectively. (a) Lightweight: only a few or even no parameters are used for VL fusions. (b) Concatenation: multi-head cross-attentions are applied to fuse the concatenation of image and text. (c) Cascading: first uses self-attentions to fully encode the unimodal input, then fuses the encoded features via cross-attentions. (d) Parallel: self-attentions and cross-attentions are independently calculated. via pre-trained deep feature extractors, then feed the derived visual features along with the text embeddings into multi-layer Transformers(Vaswani et al., 2017), in which cross-modal attention is used to fuse multimodal representations. Despite demonstrating superior performances on downstream VL tasks, the fusion-based methods need to jointly encode image and text representations, significantly degrading the efficiency in retrieval tasks with massive candidates of image-text pairs. To make VLP models applicable in real-world scenarios, another line of methods independently encode text and image with dual encoders, shown in Fig. 
1(a), in which cross-modal fusion is conducted by lightweight modules such as dot production. Thanks to the dual-encoder architecture, encoded features of image and text can be precomputed offline for inference efficiency. Nevertheless, independent encoding with shallow interaction fails to fully exploit the cross-modal interaction, making the performance far from satisfactory in VL classification tasks that require a strong ability of multimodal reasoning. There are some recent works that attempt to keep ![1_image_0.png](1_image_0.png) both effectiveness and efficiency in downstream VL tasks. In particular, Wang et al. (2021b) empower a dual-encoder model by distilling knowledge from a fusion-based model. Although the distilled dualencoder learns useful knowledge from cross-modal fusions while keeping its efficiency, this kind of method needs to pre-train a fusion-based model as a teacher and the performance is severely limited by the ability of the teacher model. VLMo (Bao et al., 2022b) introduces mixture-of-experts to encode various modalities with a modality-agnostic Transformer, which can be used as either a fusion encoder or a dual encoder. However, to fully train such a sparse model with experts towards different modalities, not only the image-text pairs but also massive images and text are required. In this paper, we propose a unified and flexible VLP model named FOD, which incorporates cross-modal fusions into a dual-encoder architecture for achieving both efficacy and efficiency in multimodal scenarios. Specifically, we adopt a dual architecture with one image encoder and one text encoder, in which cross-modal fusions are placed in the text encoder side. Considering that conventional fusions are based on either concatenation (Kim et al., 2021; Singh et al., 2022) or cascading (Li et al., 2021a; Dou et al., 2022) that can't be directly decoupled from the boarding encoder, we employ a parallel-style fusion module to model cross-modal interactions, shown in Fig. 1. In this way, FOD can explicitly capture the complex interaction between modalities during training while switching the fusion-based text encoder to a fusionfree one by removing the fusion module. In order to retain more cross-modal knowledge in FOD when the fusion modules are removed, we further design a cross-modal knowledge transfer strategy that forces both the unimodal features of image and text to approximate the multimodal representation produced by the fusion-based encoder. Intuitively, since paired image and text describe the same object in different views, we can naturally associate a set of relevant images when given a caption (and vice versa). Thus, if the text feature learns to "associate" its related images and absorbs them to enhance itself, the enhanced text feature can become closer to the relevant image candidates (and also farther to the unrelated ones) in inference. A concrete example illustrating this intuition is shown in Fig. 2. We evaluate our model on both image-text retrieval tasks and vision-language understanding tasks. Experimental results show that our model outperforms other VLP methods on all downstream VL tasks, and even performs competitively with models that use a larger order of magnitude of data for pre-training. Thanks to the detachable fusion module and the strategy of knowledge transfer, our model can be flexibly switched to a fusion-free pattern to enjoy a much faster inference speed of retrieval while retaining most of the performance. 
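As a rough illustration of the cross-modal knowledge transfer idea described above, the snippet below pulls both unimodal [CLS] features toward the multimodal representation produced by the fusion-based text encoder. This is only one plausible instantiation (a cosine objective with a stop-gradient on the fused target); it should not be read as FOD's actual pre-training objective, which is defined by the paper's own training tasks rather than by this sketch.

```python
import torch
import torch.nn.functional as F

def knowledge_transfer_loss(text_cls, image_cls, fused_cls):
    """Pull the fusion-free text and image [CLS] features toward the fused
    multimodal feature. Detaching fused_cls (an assumption of this sketch) makes
    only the unimodal representations move toward the cross-modal target."""
    target = fused_cls.detach()
    t_loss = 1 - F.cosine_similarity(text_cls, target, dim=-1).mean()
    v_loss = 1 - F.cosine_similarity(image_cls, target, dim=-1).mean()
    return t_loss + v_loss
```

Under an objective of this kind, the fusion module can be dropped at retrieval time while the dual encoders keep part of the cross-modal knowledge, which matches the motivation given above.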
## 2 Related Work Without considering the ways of visual feature extraction, the approaches of vision-language pretraining can be divided into two categories based on the interaction form between image and text. The first category, fusion-based model, explicitly utilizes deep fusion layers with cross-modal attention to model the interaction of images and texts (Tan and Bansal, 2019; Lu et al., 2019; Su et al., 2019; Li et al., 2019; Chen et al., 2020; Li et al., ![2_image_0.png](2_image_0.png) 2020, 2021b; Gan et al., 2020; Zhang et al., 2021; Huang et al., 2020, 2021; Kim et al., 2021; Li et al., 2021a; Wang et al., 2021c; Li et al., 2022; Zeng et al., 2022; Wang et al., 2022). These models perform well on vision-language understanding tasks due to the ability of capturing deep cross-modal features. However, for vision-language retrieval tasks, the fusion-based methods need to encode all the possible image-text pairs to find the most relevant candidate, resulting in extremely high time cost. The second category, dual-based model, utilizes a visual encoder and a text encoder to separately encode images and text, while the interaction between images and text is modeled by cosine similarity or linear projection (Radford et al., 2021; Jia et al., 2021; Yao et al., 2021). Although dualbased models are effective for retrieval tasks since features can be pre-computed and cached offline, the shallow interaction is insufficient to tackle the vision-language understanding tasks that require complex VL reasoning. Besides, training a dualbased model often necessitates a large number of image-text pairs (e.g. 300M for Filip (Yao et al., 2021) and 1.8 B for ALIGN (Jia et al., 2021)). Recently, some researchers have devoted themselves to investigating a unified model that is wellperformed on vision-language understanding tasks while maintaining the efficiency towards retrieval tasks (Wang et al., 2021b; Liu et al., 2021; Wang et al., 2021a; Bao et al., 2022b; Dou et al., 2022). To achieve this, one line of the works leverage knowledge distillation, in which a fusion-encoder model is pre-trained as a teacher model to guide the training of a dual-encoder model (Wang et al., 2021b), but the performance is inevitably limited by the teacher model. Other efforts attempt to train a modality-agnostic encoder with shared parameters, which can be used as either a fusion encoder or a dual encoder (Wang et al., 2021a; Bao et al., 2022b). Despite the benefits of modeling all the modalities into a single encoder, it is hard to fully train such a huge model and a large number of training samples in different modalities are required. Different from these methods, we incorporate a detachable cross-modal fusion module into a dualencoder architecture, which can easily remove the fusion module in inference and switch to a fusionfree model. More importantly, our model does not rely on teacher models or massive data in other modalities. ## 3 Model Architecture As shown in Fig. 3, FOD is in a transformer-based dual-encoder architecture that includes a visual encoder and a text encoder. The text encoder can be flexibly switched between a fusion-based pattern and a fusion-free pattern. For the fusion-based pattern, cross-modal fusions are incorporated into the text encoder to model multimodal interactions. For the fusion-free pattern, the fusion module is decoupled from the text encoder so as to get rid of the cross-modal calculation. 
During training, both fusion-based and fusion-free patterns are involved in the learning process, while in inference, the text encoder will be switched to one of the two patterns according to the type of downstream tasks. In the following sections, we introduce the visual encoder and the two patterns of the text encoder, followed by the pre-training strategies. ## 3.1 Visual Encoder We utilize Vision Transformer (Dosovitskiy et al., 2021) to build the visual encoder. Given a 2D image I ∈ R C×H×W , we first reshape I into a sequence of 2D image patches V p ∈ R N×(P 2·C), where (*H, W*) is the original image resolution, C is the number of channels, (*P, P*) is the patch resolution, and N = *HW/P*2is the number of patches. $$V^{P}=[v_{1}^{P};\cdots;v_{N}^{P}].$$ Then we flatten the patches and embed them to V e ∈ R N×D with a trainable linear projection ω ∈ R (P 2·C)×D, where D is the hidden size. $$V^{e}=[v_{1}^{p}\omega;\cdots;v_{N}^{p}\omega].$$ N ω]. (2) We also prepend a learnable embedding V e cls ∈ R D to the patch embeddings V e. Besides, positional information is also important for path representations. Therefore, the embedded patches V¯ are obtained by summing [V e cls; V e] and learnable 1D position embeddings Vpos ∈ R (N+1)×D. Finally, we obtained visual features V by encoding V¯ with the visual encoder VE. $$\bar{V}=[V^{e}_{cls};v^{e}_{1};\cdots;v^{e}_{N}]+V_{pos},\tag{3}$$ $$V=\mbox{VE}(\bar{V}).$$ ## 3.2 Text Encoder As mentioned before, there are two patterns of the text encoder: fusion-free text encoder and fusionbased text encoder. These two patterns are both based on Transformers (Vaswani et al., 2017) and share all the fusion-free parameters except the output linear projection in the last encoding layer. Given the input text t = {tcls; w1; *· · ·* ; wS}, we first embed t to T 0 ∈ R S×D via a word embedding matrix and a position embedding matrix. Then the text embedding T 0can be fed into different patterns of the text encoder to produce different output features. ## 3.2.1 Fusion-Free Text Encoder In this pattern, the text encoder skips the crossmodal fusions and outputs text-only features. The text encoder is a L-layer Transformer, and the output of the l-th layer T lis computed as follows: $$T^{l}=\text{MSA}(T^{l-1},T^{l-1},T^{l-1}),$$ $$\hat{T}^{l}=\text{LN}(T^{l}_{s}+T^{l-1}),\tag{4}$$ $$T^{l}=\text{LN}(\text{MLP}(\hat{T}^{l})+\hat{T}^{l}),$$ $$T^{l}=T^{L},$$ where MSA, LN and MLP are shot for Multi-Head Self-Attention, layer normalization and multi-layer perceptron respectively, T is the final features of the fusion-free text encoder. ## 3.2.2 Fusion-Based Text Encoder $$(2)$$ To fully capture vision-and-language interactions, both self-attention and cross-modal attention are considered in the fusion-based encoder. Specifically, in the l-th layer, we separately compute the fusion-free self-attention and the image-fused cross-attention, and then sum them up to produce the multimodal features. The detailed process is shown as follows: $M^{0}=T^{0}$, $M^{l}_{s}=\text{MSA}(M^{l-1},M^{l-1},M^{l-1})$, $M^{l}_{c}=\text{MCA}(M^{l-1},V,V)$, $\hat{M}^{l}=\frac{1}{2}\times(M^{l}_{s}+M^{l}_{c})$, $\hat{M}^{l}=\text{LN}(\hat{M}^{l}+M^{l-1})$, $M^{l}=\text{LN}(\text{MLP}(\hat{M}^{l})+\hat{M}^{l})$, $M^{l}=M^{L}$, where MCA is Multi-Head Cross Attention, V is the final visual features produced by the visual encoder. The MCA, LN and MLP modules are reused from the fusion-free text encoder. 
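To make Eq. 4 and Eq. 5 concrete, the following is a minimal PyTorch sketch of a single text-encoder layer with the parallel, detachable cross-modal branch. It relies on standard `nn.MultiheadAttention` modules and omits the output-projection sharing mentioned above, so it should be read as an illustration under these assumptions rather than the exact released implementation.

```python
import torch
import torch.nn as nn

class ParallelFusionLayer(nn.Module):
    """One text-encoder layer with an optional, detachable cross-modal branch (Eq. 4-5)."""

    def __init__(self, dim=768, heads=12, mlp_ratio=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)   # MSA
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)  # MCA (detachable)
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio), nn.GELU(), nn.Linear(dim * mlp_ratio, dim)
        )

    def forward(self, t, v=None):
        s, _ = self.self_attn(t, t, t)          # fusion-free self-attention over text tokens
        if v is None:                           # fusion-free pattern (Eq. 4)
            h = self.ln1(s + t)
        else:                                   # fusion-based pattern (Eq. 5)
            c, _ = self.cross_attn(t, v, v)     # parallel cross-attention to visual features
            h = self.ln1(0.5 * (s + c) + t)     # average the two branches, then residual + LN
        return self.ln2(self.mlp(h) + h)
```

Calling the layer with `v=None` removes the fusion branch and recovers the plain Transformer layer of Eq. 4, which mirrors the switch to the fusion-free pattern used at inference time for retrieval.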
Notably, the cross-modal attention is introduced in a parallel manner, which is parameter-efficient and can be easily decoupled from the encoder. In addition, the cross-modal fusions can also be placed in the visual side to build a fusion-based visual encoder, or be placed in both sides for deeper interaction. We will discuss this in the experiment section. ## 4 Pre-Training Strategies FOD is jointly trained with three different strategies, namely fusion-free learning, fusion-based learning and cross-modal knowledge transfer, which are complementary to each other. ## 4.1 Fusion-Free Learning For this strategy, we utilize image-text contrastive learning to train the dual architecture with the ability of unimodal encoding, which is not only beneficial to other cross-modal learning strategies, but also the basis for applying the model to downstream retrieval tasks. ## 4.1.1 Image-Text Contrast We select Vcls and Tcls produced by visual encoder and fusion-free text encoder to compute the loss of contrastive learning. In order to have more negative examples here, we maintain two queues to store the most recent K image and text representations computed by momentum encoders like MoCo (He et al., 2020). For convenience, we denote these representations in queues as V k cls and T k cls, where k ∈ {1, · · · , K}. For each image representation V j cls and text representation T j cls in the current batch, the image-to-text similarities p i2t j and text-to-image similarities p t2i jare computed by: $$\begin{split}s_{j,k}^{i2t}&=g(f_{v}(V_{cls}^{j}))^{\top}g(f_{t}(T_{cls}^{k})),\\ s_{j,k}^{i2t}&=g(f_{t}(T_{cls}^{j}))^{\top}g(f_{v}(V_{cls}^{k})),\end{split}\tag{6}$$ $$p_{j}^{i2t}=\frac{\exp(s_{j,j}^{i2t}/\sigma)}{\sum_{k=1}^{K}\exp(s_{j,k}^{i2t}/\sigma)},\;p_{j}^{t2i}=\frac{\exp(s_{j,j}^{t2i}/\sigma)}{\sum_{k=1}^{K}\exp(s_{j,k}^{t2i}/\sigma)},\tag{7}$$ where fv and ft are linear projections, g is L2 normalization, and σ is a learnable temperature parameter. Let y i2tand y t2i denote the ground-truth ont-hot similarity, where positive pairs have a probability of 1 and negative pairs have a probability of 0. The image-text contrastive loss Litc is defined as the cross-entropy H between p and y: $${\cal L}_{\rm tc}=\frac{1}{2}\times\big{[}{\cal H}({y^{\rm2t}},{p^{\rm2t}})+{\cal H}({y^{\rm t}}^{\rm2i},{p^{\rm t}}^{\rm2i})\big{]}.\tag{8}$$ ## 4.2 Fusion-Based Learning For this strategy, we apply image-text matching (ITM) and mask language modeling (MLM) to the fusion-based text encoder for learning both coarsegrained and fine-grained cross-modal fusions. ## 4.2.1 Image-Text Matching ITM focuses on coarse-grained multimodal learning, which aims to predict whether a pair of image and text is matched or not. Since the imagetext pairs in a batch are all positive, we sample global hard negative image-text pairs from all input batches on all the GPUs based on the similarity scores calculated in Eq. 7. Then we feed the final hidden vector of the fusion-based encoder Mcls into a binary classifier to predict a two-class probability p itm. Given the ground-truth label y itm ∈ {0, 1}, the image-text matching loss Litm is defined as the cross-entropy H between y itm and p itm: Litm = H(y itm, p itm). (9) ## 4.2.2 Masked Language Modeling MLM predicts masked tokens on the image-fused text features, which serves as the fine-grained crossmodal learning. 
Formally, we randomly mask 15% of the tokens in the text sequence t with a whole word masking strategy (Cui et al., 2021) and denote the input embedding of the masked text as T¯0. Then the model is trained to predict the masked tokens based on the final outputs M¯ by feeding T¯0 into the fusion-based encoder. The detailed process is similar to Eq. 5. Let y mask denote the groundtruth label of the masked tokens, and p mask denote the models' prediction for the masked tokens, then the masked language modeling loss is defined as the cross-entropy H between y mask and p mask: $${\mathcal{L}}_{\mathrm{mlm}}={\mathcal{H}}(y^{\mathrm{mask}},p^{\mathrm{mask}}).$$ $$(10)$$ mask). (10) ## 4.3 Cross-Modal Knowledge Transfer In our preliminary experiments, we observe that if the ITM loss is removed from the training process, the performance in retrieval tasks would dramatically degrade. From the perspective of feature distributions, we believe that ITM can better close the spatial distance between the unimodal features of image and text, which encourages us to explicitly utilize ITM to enhance unimodal representations. To achieve this, we further design the strategy of cross-modal knowledge transfer (CKT). Given an image-text pair, we can first extract its image Vcls, text Tcls and multimodal representations Mcls. Obviously, Mcls is the most comprehensive feature that describes the image-text pair among them, but only Vcls and Tcls are used to compute similarity score in retrieval tasks. In this case, if we enhance the text feature to actively associate its related images by transferring knowledge from Mcls to Tcls, it will be easier to find the relevant image candidates based on the enhanced text feature in inference (and similar for Vcls). Thus, we force both Vcls and Tcls to approximate Mcls via meansquared loss in the last layer, which are calculated as follows: $$\begin{array}{l}\mathcal{L}_{\rm12m}=MSE(f_{v}(V_{cls}),f_{t}(M_{cls})),\\ \mathcal{L}_{\rm12m}=MSE(f_{t}(T_{cls}),f_{t}(M_{cls})),\end{array}\tag{11}$$ 5109 | MSCOCO (5K) | Flickr30k (1K) | | | | | | | | | | | | | |-----------------------------------------------------------|------------------|----------------|-----------------|----------------|-----------------|------|-------|-------|-------|-------|------|------|------| | Model | # Pretrain | Text Retrieval | Image Retrieval | Text Retrieval | Image Retrieval | | | | | | | | | | Images | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | | | UNITER-B | 4M | 64.4 | 87.4 | 93.1 | 50.3 | 78.5 | 87.2 | 85.9 | 97.1 | 98.8 | 72.5 | 92.4 | 96.1 | | OSCAR-B | 4M | 70.0 | 91.1 | 95.5 | 54.0 | 80.8 | 88.5 | - | - | - | - | - | - | | ViLT-B | 4M | 61.5 | 86.3 | 92.7 | 42.7 | 72.9 | 83.1 | 83.5 | 96.7 | 98.6 | 64.4 | 88.7 | 93.8 | | Inference based on "Dual" setting ALIGN 1.2B 77.0 | 93.5 | 96.9 | 59.9 | 83.3 | 89.8 | 95.3 | 99.8 | 100.0 | 84.9 | 97.4 | 98.6 | | | | Distill | 4M | - | - | - | - | - | - | 82.2 | 96.7 | 98.5 | 68.2 | 89.8 | 94.2 | | ALBEF | 4M | 65.9 | 88.5 | 93.8 | 49.1 | 76.4 | 84.9 | 89.7 | 98.5 | 99.7 | 74.5 | 93.2 | 96.3 | | X-VLM | 4M | 71.4 | 91.9 | 96.4 | 54.5 | 81.6 | 88.9 | 90.2 | 99.1 | 99.7 | 78.4 | 95.2 | 97.8 | | VLMo-B | 4M | 74.8 | 93.1 | 96.9 | 57.2 | 82.6 | 89.8 | 92.3 | 99.4 | 99.9 | 79.3 | 95.7 | 97.8 | | Ours | 3M | 77.3 | 94.3 | 96.9 | 58.9 | 83.2 | 90.0 | 94.6 | 99.7 | 99.9 | 83.5 | 96.4 | 98.1 | | Inference based on "Re-Rank" setting BLIP† 129M 81.2 95.7 | 97.9 | 64.1 | 85.8 | 91.6 | 97.2 | 99.9 | 100.0 | 87.5 | 97.7 | 98.9 | | | | | 
ALBEF† | 4M | 73.1 | 91.4 | 96.0 | 56.8 | 81.5 | 89.2 | 94.3 | 99.4 | 99.8 | 82.8 | 96.7 | 98.4 | | X-VLM† | 4M | 80.4 | 95.5 | 98.2 | 63.1 | 85.7 | 91.6 | 96.8 | 99.8 | 100.0 | 86.1 | 97.4 | 98.7 | | Ours† | 3M | 82.2 | 95.8 | 97.9 | 65.2 | 86.4 | 91.9 | 97.4 | 100.0 | 100.0 | 87.3 | 97.7 | 98.9 | | Flickr30k (1K) | | | | | | | | | | | | |---------------------------------------------------------------|-------------------|----------------|-----------------|-------|-------|------|------|-------|-------------------|-------|-------| | Model | # Pretrain Images | Text Retrieval | Image Retrieval | | | | | | | | | | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | | | | | | | | UNITER | 4M | 83.6 | 95.7 | 97.7 | 68.7 | 89.2 | 93.9 | | | | | | ViLT | 4M | 69.7 | 91.0 | 96.0 | 51.3 | 79.9 | 87.9 | | | | | | Inference based on "Dual" setting CLIP 400M 88.0 98.7 | 99.4 | 68.7 | 90.6 | 95.2 | | | | | | | | | ALIGN | 1.2B | 88.6 | 98.7 | 99.7 | 75.7 | 93.8 | 96.8 | | | | | | ALBEF | 4M | 81.3 | 96.4 | 98.3 | 67.9 | 89.2 | 93.8 | | | | | | X-VLM | 4M | 84.7 | 97.9 | 99.3 | 72.5 | 92.7 | 96.3 | | | | | | VLMo | 4M | 88.2 | 97.9 | 99.4 | 73.4 | 92.9 | 96.6 | | | | | | Ours | 3M | 89.8 | 98.8 | 99.7 | 77.5 | 94.5 | 97.1 | | | | | | Inference based on "Re-Rank" setting ALBEF† 4M 90.5 98.8 99.7 | 76.8 | 93.7 | 96.7 | | | | | | | | | | X-VLM† | 4M | 94.1 | 99.3 | 99.9 | 82.3 | 96.1 | 98.0 | | | | | | Ours† | 3M | 95.5 | 99.8 | 100.0 | 84.5 | 96.2 | 98.2 | Model | # Pretrain Images | VQAv2 | NLVR2 | | test-dev | test-std | dev | test-P | | | | | | | | | | SimVLM | 1.8B | 77.87 | 78.14 | 81.72 | 81.77 | | | | | | | | BLIP | 129M | 78.25 | 78.32 | 82.15 | 82.24 | | | | | | | | UNITER | 4M | 72.70 | 72.91 | 77.18 | 77.85 | | | | | | | | OSCAR | 4M | 73.16 | 73.44 | 78.07 | 78.36 | | | | | | | | ViLT | 4M | 71.26 | - | 75.70 | 76.13 | | | | | | | | Distill | 4M | 68.05 | - | 74.16 | 74.30 | | | | | | | | ALBEF | 4M | 74.54 | 74.70 | 80.24 | 80.50 | | | | | | | | VLMo | 4M | 76.64 | 76.89 | 82.77 | 83.34 | | | | | | | | X-VLM | 4M | 78.07 | 78.09 | 84.16 | 84.21 | | | | | | | | Ours | 3M | 78.91 | 78.91 | 84.75 | 85.29 | | | | | | | | Table 3: Results on vision-language understanding tasks, including visual question answering (VQAv2) and visual reasoning (NLVR2). | | | | | | | | | | | | where fv and ft are the linear projections used in Eq. 6. We do not freeze Mcls in knowledge transfer so that multimodal and unimodal features can be jointly trained. ## 5 Experiment 5.1 Pre-Training Settings 5.1.1 Datasets Following previous works (Chen et al., 2020; Kim et al., 2021), we use four well-known image captioning datasets for pre-training: SBU Captions (Ordonez et al., 2011), Microsoft COCO (Lin et al., 2014), Visual Genome (Krishna et al., 2017) and Google Conceptual Captions (GCC) (Sharma et al., 2018). Since images in GCC and SBU are provided in url format and some of them are inaccessible, we only collected **3.4M** images, which is around **600K** less than the original settings. In the experiments, we term the setting of 3.4M images as 3M. ## 5.1.2 Implementation Details For model settings, the visual encoder adopts the same architecture as ViT-Base (Dosovitskiy et al., 2021) and we initialize it with pre-trained weights of Beit (Bao et al., 2022b). The text encoder is modified on Bert-Base (Devlin et al., 2019) by adding a multi-head cross attention and we initialize it with pre-trained weights of uncased-bert-base. 
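For clarity, the sketch below shows how the four objectives of Section 4 (image-text contrast, image-text matching, masked language modeling and cross-modal knowledge transfer, Eq. 8-11) can be combined into a single training loss. The equal weighting of the terms and the helper names are assumptions made for illustration; only Eq. 11 is written out explicitly.

```python
import torch.nn.functional as F

def training_step_loss(v_cls, t_cls, m_cls, f_v, f_t, itc_loss, itm_loss, mlm_loss):
    """Total pre-training objective for one batch.

    v_cls, t_cls : unimodal [CLS] features from the visual / fusion-free text encoder
    m_cls        : multimodal [CLS] feature from the fusion-based text encoder
    f_v, f_t     : the linear projections of Eq. 6
    itc_loss, itm_loss, mlm_loss : already-computed losses of Eq. 8, 9 and 10
    """
    # Cross-modal knowledge transfer (Eq. 11): both unimodal features are pulled
    # towards the multimodal representation; m_cls is not detached, so multimodal
    # and unimodal features are trained jointly.
    l_i2m = F.mse_loss(f_v(v_cls), f_t(m_cls))
    l_t2m = F.mse_loss(f_t(t_cls), f_t(m_cls))
    # Equal weighting is an assumption; the paper does not specify loss weights.
    return itc_loss + itm_loss + mlm_loss + l_i2m + l_t2m
```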
For hyper-parameter settings during pre-training, the resolution of input images is 256 × 256 and the patch size is 16 × 16. RandAugment (Cubuk et al., 2020) is applied to the input images. We use AdamW optimizer (Loshchilov and Hutter, 2017) with weight decay of 1e-2 and the learning rate is | Fusion | MSCOCO (5K) | Flickr30k (1K) | | | |---------------|---------------|------------------|------|------| | Methods | TR@1 | IR@1 | TR@1 | IR@1 | | Concatenation | 72.5 | 54.2 | 92.6 | 80.5 | | Cascading | 73.0 | 54.5 | 91.7 | 81.2 | | Parallel | 73.5 | 55.4 | 93.1 | 81.6 | | Objectives | MSCOCO (5K) | VQAv2 | NLVR2 | | | |--------------|---------------|---------|---------|----------|--------| | I2M | T2M | TR | IR | test-dev | test-P | | % | % | 86.3 | 73.3 | 77.56 | 83.45 | | % | ! | 86.6 | 74.1 | - | - | | ! | % | 86.8 | 73.2 | - | - | | ! | ! | 87.2 | 74.3 | 77.57 | 83.37 | warmed up to 1e-4 over the first 1k steps. We pretrain for 300K steps on 32 NVIDIA A100 GPUs with a batch size of 2048. ## 5.2 Downstream Vision-Language Tasks 5.2.1 Image-Text Retrieval Tasks The vision-language retrieval tasks include imageto-text retrieval and text-to-image retrieval. We evaluate our model on the Karpathy and Fei-Fei (2015) split of MSCOCO (Lin et al., 2014) and Flickr30K. During fine-tuning, we preserve the loss of image-text contrastive learning, image-text matching and cross-modal knowledge transfer. For a better comparison with various methods, we have two settings in the inference phase, namely "Dual" and "Re-Rank". For the "Dual" setting, we use Eq. 6 to precompute images and text representations separately, and compute the similarity scores of all possible image-text pairs by dot production. For the "ReRank" setting, we first utilize the similarity scores derived from Eq. 6 to select the top-k candidates, and then predict the final results by calculating their ITM scores (p itm). ## 5.2.2 Visual Question Answering The VQAv2 (Goyal et al., 2017) task requires to predict answers based on the given pair of an image and a question. Following Cho et al. (2021) and Li et al. (2021a), we treat VQA as an answer generation problem. In order to compare fairly with other methods, we restrict the answer generation space ![6_image_0.png](6_image_0.png) ## 5.2.3 Natural Language For Visual Reasoning The NLVR2 (Suhr et al., 2019) task asks the model to predict whether a text correctly describes a pair of images. We follow previous work (Li et al., 2021a; Zeng et al., 2022) to extend the fusion-based encoder to enable reasoning over image pairs and feed the encoded vector of the input pair into a classification layer to predict answer. ## 5.3 Main Results 5.3.1 Image-Text Retrieval Results Table 1 and Table 2 show the results of fine-tuned and zero-shot image-text retrieval on MSCOCO and Flickr30K. For a fair comparison, only basesize models pre-trained on the standard 4M data are selected as the compared models. In this setting, our model achieves state-of-the-art performance on both datasets, and even performs competitively with CLIP, ALIGN and BLIP that are pre-trained on a larger order of magnitude of data. Furthermore, thanks to the designed parallel-style fusions and cross-modal knowledge transfer strategy, more cross-modal knowledge is retrained when the fusion module is decoupled in inference, narrowing the gap between "Dual" and "Re-Rank" settings. Detailed analysis of performances between "Dual" and "Re-Rank" settings are given in Appendix. 
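The difference between the two inference settings of Section 5.2.1 can be summarised in a few lines. Below is a minimal sketch for text retrieval given one image, where `itm_score` is a placeholder callable (not part of the paper) that runs the fusion-based encoder and the ITM head on one candidate pair and returns the "matched" probability as a float.

```python
import torch

@torch.no_grad()
def rank_texts_for_image(img_feat, txt_feats, k=128, itm_score=None):
    """Rank candidate texts for one image under the "Dual" or "Re-Rank" setting.

    img_feat  : (D,)  L2-normalised projected image feature, precomputable offline
    txt_feats : (N, D) L2-normalised projected text features, precomputable offline
    itm_score : optional callable idx -> float p_itm("matched"); None gives "Dual"
    """
    sims = txt_feats @ img_feat                    # "Dual": one dot product per candidate
    top_k = sims.topk(min(k, len(sims))).indices
    if itm_score is None:
        return top_k                               # fusion-free ranking

    # "Re-Rank": only the top-k dual-encoder candidates are re-scored with the
    # fusion-based encoder, avoiding a full pass over all N image-text pairs.
    itm = torch.tensor([itm_score(i) for i in top_k.tolist()])
    return top_k[itm.argsort(descending=True)]
```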
## 5.3.2 **Vision-Language Understanding Results** The VQA2 and NLVR2 are categorized as understanding tasks since they both require the ability of VL reasoning, and the results are shown in Table 3. Our model achieves the best performances on both tasks among all the competitors that are also A Boston Terrier is **running** on lush green **grass** in front of a white **fence**. ![7_image_0.png](7_image_0.png) in base-size and pre-trained with the standard 4M data, and even outperforms models pre-trained on more data like SimVLM and BLIP, demonstrating the effectiveness and efficiency of our model. ## 5.4 Ablation Studies 5.4.1 Different Designs Of Fusions We incorporate cross-modal fusions into a dual architecture to better model vision-language interactions. Conventional fusions are based on two kinds of methods, namely concatenation and cascading. (1) Concatenation jointly encodes the concatenation of image and text, which is quadratic in time complexity and twice in memory consumption. (2) Cascading first uses self-attention to encode the text input and then fuses it with image via crossattentions, which has a strong dependency between cross-attention and self-attention. Table 4 reports the ablation results of different fusions, our design that incorporates cross-modal fusions in a parallel manner outperforms other methods on retrieval tasks, showing that parallel-style fusion can switch our model into the "Dual" setting more flexibly. ## 5.4.2 Cross-Modal Knowledge Transfer We conduct the ablation experiments towards the strategy of cross-modal knowledge transfer, which is shown in Table 5. The objectives of I2M and T2M are defined in Eq. 11. From the results we can observe that: (1) I2M specifically improves the performance on image-text retrieval (TR) while T2M is beneficial for the text-image (IR) side, which are consistent with their intuitions; (2) I2M and T2M are complementary to each other. Adding both I2M and T2M during training can further bring improvements for retrieval tasks while keeping the performances on VL understanding tasks. ## 5.4.3 Fusions On Both Sides Intuitively, in addition to placing cross-modal fusions in the text encoder, we can also add the fusion modules into the visual side in a similar way. In this setting, ITM and downstream classifications are based on the concatenation of the multimodal features produced by both text and visual encoders. Fig. 4 shows the results of placing fusions in different sides, from which we find that when fusions are placed on both sides, the performance unexpectedly drops on all downstream tasks. We analyze that one possible reason comes to the difference between text and vision in self-supervised learning. It is obvious that BERT naturally works better in selfsupervision than ViT, and thus we can utilize the MLM task from BERT to learn fine-grained crossmodal interaction. When it comes to the visual side, self-supervised tasks are much more complex than MLM, inevitably making it more difficult to train such a VLP model. ## 6 Qualitative Analysis We further provide a qualitative analysis by using Grad-CAM (Selvaraju et al., 2017) to illustrate the per-word visualizations of the cross-modal attention maps of the fusion-based encoder. As shown in Fig. 5, from the visualizations we observe that when conducting image-text matching tasks, our model can focus on specific regions in an image according to different words in each sentence, including objects, actions, attributes and background. More examples are given in Appendix. 
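As a rough indication of how such per-word maps can be obtained, the snippet below computes a Grad-CAM-style relevance of image patches for each text token from one cross-attention map. The choice of layer and head, and the use of the ITM "matched" logit as the target, are assumptions; the paper only states that Grad-CAM (Selvaraju et al., 2017) is applied to the fusion-based encoder.

```python
import torch

def per_word_gradcam(attn_map, itm_logit):
    """Per-word relevance over image patches from one cross-attention map.

    attn_map  : (n_text_tokens, n_patches) attention weights of one MCA head,
                captured during the forward pass (must require gradients)
    itm_logit : scalar logit of the "matched" class for this image-text pair
    """
    grad, = torch.autograd.grad(itm_logit, attn_map, retain_graph=True)
    cam = grad.clamp(min=0) * attn_map                       # keep positively contributing attention
    return cam / (cam.amax(dim=-1, keepdim=True) + 1e-6)     # normalise per text token
```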
## 7 Conclusion In this work, we propose a flexible VLP model that incorporates cross-modal fusions into a dualencoder architecture. The fusion module can be easily decoupled in inference, enabling the model to be switched between fusion-based and a fusionfree patterns according to different scenarios. Extensive experiments conducted on both image-text retrieval and vision-language understanding tasks show that our model is well-equipped with effectiveness and efficiency compared with existing VLP models. ## Limitations The findings of this study have to be seen in light of some limitations. (1) It is non-trivial to extend our model for generation tasks. Since the main focus of this work is to improve both effectiveness and efficiency of the dual-encoders, text-decoder is not considered in model design. In the future, autoregressive mechanisms will be consider to applied in model architecture so that the model can be directly used for generation tasks like image captioning. (2) There may be disadvantages of the model in region-level VL tasks such as Object Detection. The reason is that these tasks require images in high resolution and fine-grained annotations of bounding boxes, which are non-trivial in generic VLP settings. To solve this problem, exploring different levels of granularity between image-text pairs is a promising direction and will be considered as the future work. ## Acknowledgements This research is supported by National Natural Science Foundation of China (Grant No.62276154), Research Center for Computer Network (Shenzhen) Ministry of Education, Beijing Academy of Artificial Intelligence (BAAI), the Natural Science Foundation of Guangdong Province (Grant No. 2023A1515012914), Basic Research Fund of Shenzhen City (Grant No.JCYJ20210324120012033 and JSGG20210802154402007), the Major Key Project of PCL for Experiments and Applications (PCL2021A06), and Overseas Cooperation Research Fund of Tsinghua Shenzhen International Graduate School (HW2021008). Jingang Wang is funded by Beijing Nova Program (Grant No.20220484098). ## References Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. 2022a. BEiT: BERT pre-training of image transformers. In *International Conference on Learning* Representations. Hangbo Bao, Wenhui Wang, Li Dong, Qiang Liu, Owais Khan Mohammed, Kriti Aggarwal, Subhojit Som, Songhao Piao, and Furu Wei. 2022b. VLMo: Unified vision-language pre-training with mixture-ofmodality-experts. In *Advances in Neural Information* Processing Systems. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Uniter: Universal image-text representation learning. In *European conference on* computer vision, pages 104–120. Springer. Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. 2021. Unifying vision-and-language tasks via text generation. In *International Conference on Machine Learning*, pages 1931–1942. PMLR. Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. 2020. Randaugment: Practical automated data augmentation with a reduced search space. In *Proceedings of the IEEE/CVF conference* on computer vision and pattern recognition workshops, pages 702–703. Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, and Ziqing Yang. 2021. Pre-training with whole word masking for chinese bert. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, 29:3504–3514. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171– 4186. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. *ICLR*. Zi-Yi Dou, Aishwarya Kamath, Zhe Gan, Pengchuan Zhang, Jianfeng Wang, Linjie Li, Zicheng Liu, Ce Liu, Yann LeCun, Nanyun Peng, Jianfeng Gao, and Lijuan Wang. 2022. Coarse-to-fine visionlanguage pre-training with fusion in the backbone. In Advances in Neural Information Processing Systems. Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu. 2020. Large-scale adversarial training for vision-and-language representation learning. *Advances in Neural Information Processing Systems*, 33:6616–6628. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904–6913. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In *Proceedings of the IEEE/CVF conference on computer vision* and pattern recognition, pages 9729–9738. Zhicheng Huang, Zhaoyang Zeng, Yupan Huang, Bei Liu, Dongmei Fu, and Jianlong Fu. 2021. Seeing out of the box: End-to-end pre-training for visionlanguage representation learning. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12976–12985. Zhicheng Huang, Zhaoyang Zeng, Bei Liu, Dongmei Fu, and Jianlong Fu. 2020. Pixel-bert: Aligning image pixels with text by deep multi-modal transformers. arXiv preprint arXiv:2004.00849. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904–4916. PMLR. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In *Proceedings of the IEEE conference on* computer vision and pattern recognition, pages 3128– 3137. Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. In *Proceedings of the* 38th International Conference on Machine Learning, volume 139 of *Proceedings of Machine Learning* Research, pages 5583–5594. PMLR. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123(1):32– 73. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation. *arXiv preprint arXiv:2201.12086*. Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. 2021a. Align before fuse: Vision and language representation learning with momentum distillation. 
Advances in neural information processing systems, 34:9694–9705. Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: Asimple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557. Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, and Haifeng Wang. 2021b. Unimo: Towards unified-modal understanding and generation via cross-modal contrastive learning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2592– 2607. Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. 2020. Oscar: Objectsemantics aligned pre-training for vision-language tasks. In *European Conference on Computer Vision*, pages 121–137. Springer. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In *European conference on computer vision*, pages 740–755. Springer. Haoliang Liu, Tan Yu, and Ping Li. 2021. Inflate and shrink: Enriching and reducing interactions for fast text-image retrieval. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language* Processing, pages 9796–9809. Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. *Advances in neural information processing systems*, 32. Vicente Ordonez, Girish Kulkarni, and Tamara Berg. 2011. Im2text: Describing images using 1 million captioned photographs. In *Advances in Neural Information Processing Systems*, volume 24. Curran Associates, Inc. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *International* Conference on Machine Learning, pages 8748–8763. PMLR. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. Grad-cam: Visual explanations from deep networks via gradient-based localization. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 618–626. IEEE Computer Society. Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In *Proceedings of the 56th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556–2565. Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 2022. FLAVA: A foundational language and vision alignment model. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 15617–15629. IEEE. 
Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2019. Vl-bert: Pre-training of generic visual-linguistic representations. In *International Conference on Learning Representations*. Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in photographs. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 6418–6428. Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5100–5111. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30. Jianfeng Wang, Xiaowei Hu, Zhe Gan, Zhengyuan Yang, Xiyang Dai, Zicheng Liu, Yumao Lu, and Lijuan Wang. 2021a. Ufo: A unified transformer for visionlanguage representation learning. arXiv preprint arXiv:2111.10023. Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022. OFA: unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 23318–23340. PMLR. Zekun Wang, Wenhui Wang, Haichao Zhu, Ming Liu, Bing Qin, and Furu Wei. 2021b. Distilled dualencoder model for vision-language understanding. arXiv preprint arXiv:2112.08723. Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. 2021c. Simvlm: Simple visual language model pretraining with weak supervision. *arXiv preprint arXiv:2108.10904*. Ning Xie, Farley Lai, Derek Doran, and Asim Kadav. 2019. Visual entailment: A novel task for fine-grained image understanding. arXiv preprint arXiv:1901.06706. Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, and Chunjing Xu. 2021. Filip: Fine-grained interactive language-image pre-training. arXiv preprint arXiv:2111.07783. Yan Zeng, Xinsong Zhang, and Hang Li. 2022. Multigrained vision language pre-training: Aligning texts with visual concepts. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 25994–26009. PMLR. Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. 2021. Vinvl: Revisiting visual representations in vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5579–5588. ## A Appendix A.1 Statistics Of Pre-Training Datasets In the expriments, we use four widely-used datasets for pre-training: SBU Captions (Ordonez et al., 2011), Microsoft COCO (Lin et al., 2014), Visual Genome (Krishna et al., 2017) and Google Conceptual Captions (Sharma et al., 2018). Due to the inaccessible problem of url, we can only collected 3.4M images, which is around 600K less than the original settings (Kim et al., 2021; Li et al., 2021a; Bao et al., 2022b). Details are shown in Table 6. 
Intuitively, if we can access to the full 4M data, our model could perform better. | MSCOCO | VG | SBU | GCC | Sum | | |----------|------|-------|-------|-------|------| | Original | 113K | 108K | 867K | 3.01M | 4M | | Ours | 113K | 108K | 853K | 2.36M | 3.4M | Table 6: Comparison of \# Images used in Pre-training between official settings and ours. ## A.2 Implementation Details Image-Text Retrieval. Different from pretraining, we set the resolution of images to 384 × 384 and use the tasks of ITC, ITM and CKT in image-text retrieval. The batch size is 256 and the initial learning rate is 1e-5. For Flickr30K and MSCOCO, we finetune 10 epochs and 5 epochs respectively. For the "Re-Rank" setting that selects top-k candidates, k is set to 128 for Flickr30K and 256 for MSCOCO following Li et al. (2021a) and Zeng et al. (2022). Visual Question Answering. For visual question answering, most methods convert VQAv2 to a classification task by preserving the most frequent 3192 answers in datasets. However, this will prevent some data from being used for fine-tuning because their answers are not in the candidate set. Thus, we follow previous work (Cho et al., 2021; Li et al., 2021a; Zeng et al., 2022) and treat VQA as an answer generation problem. More specifically, we predict the probability distribution on the vocabulary of the first token, and select the top-k candidates with the highest probability from the distribution. Finally, we use language-modeling loss to predict the final answer from the top-k candidates. For a fair comparison, we restrict the answer generation space to the same candidate set (Kim et al., 2021; Bao et al., 2022b) during inference. We finetune our model for 8 epochs with 256 batch size and the learning rate is 2e-5. The resolution of images is set to 576 × 576 (Dou et al., 2022) and k is set to 128. Natural Language for Visual Reasoning. For NLVR2, we follow previous work (Li et al., 2021a; Zeng et al., 2022) and extend the fusion-based encoder to enable reasoning over image pairs, in which an additional pre-training step is applied for training model to reason the relations among text and images. Then, we fine-tune the model for 15 epochs. The batch size is 128, learning rate is 2e5 and the resolution of the input image is set to 384 × 384. ## A.3 Performance Retaining For VL retrieval tasks that involve massive candidates of image-text pairs, it is crucial for a VLP model to have the ability of acting as a dual-encoder for efficient inference. Table 7 reports the comparisons between "Re-Rank" and "Dual" settings on retrieval tasks. Our model performs best in terms of performance retraining when switched from "ReRank" to "Dual" setting, showing the effectiveness of the designed parallel-style fusions and crossmodal knowledge transfer strategy. | Model | Flickr30k | MSCOCO | | | | | |---------|-------------|----------|------------|------|-------|------------| | R | D | drop↓ | R | D | drop↓ | | | ALBEF | 95.2 | 92.0 | 3.2 (3.4%) | 81.3 | 76.4 | 4.9 (6.0%) | | X-VLM | 96.5 | 93.4 | 3.1 (3.2%) | 85.8 | 80.8 | 5.0 (5.8%) | | Ours | 96.9 | 95.4 | 1.5 (1.5%) | 86.6 | 83.4 | 3.2 (3.7%) | Table 7: Results of different retrieval settings on MSCOCO (5K) and Flickr30k (1K). "R" and "D" are short for "Re-Rank" and "Dual" settings. We report the average of TR and IR. ## A.4 Inference Speed We further evaluate the inference time of our models and other compared methods on MSCOCO dataset. All the models are evaluated on a single A100 GPU. 
From the results reported in Table 8, we can observe that our model is well-equipped both efficacy and efficiency in retrieval tasks. Notably, when our model is switched to the fusionfree (dual) pattern, it can still achieve a comparable performance compared with other methods while enjoy a much faster inference speed. ![12_image_0.png](12_image_0.png) ![12_image_1.png](12_image_1.png) | Time | Speedup | MSCOCO | | | |----------|-----------|----------|------|------| | TR@1 | IR@1 | | | | | OSCAR-B‡ | - | ≪ 1.0× | 70.0 | 54.0 | | ViLT-B | ∼ 10h | 1.0× | 61.5 | 42.7 | | ALBEF† | ∼ 900s | 40× | 73.1 | 56.8 | | VLMo-B | ∼ 30s | 1, 200× | 74.8 | 57.2 | | Ours† | ∼ 900s | 40× | 82.2 | 65.2 | | Ours | ∼ 30s | 1, 200× | 77.3 | 58.9 | ## A.5 Visualization We provide more examples of per-word visualizations of our fusion-based encoder finetuned on VL retrieval tasks, as shown in Fig. 6. The visualizations suggest that our model can focus on specific regions of the image according to different words in text when conducts image-text matching. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? The 'limitation' section on page 9. ✗ A2. Did you discuss any potential risks of your work? This study is a fondamental work based on public datasets, and does not involve privacy, ethics and other dangerous related content. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 5.2/5.2/Appendix A.1 B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The datasets used in this paper are widely used in corresponding fields. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 5.2/5.2/Appendix A.1 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 5.2/5.2/Appendix A.1 ## C ✓ **Did You Run Computational Experiments?** 5.3/5.4 Appendix A.3/A.4/A.5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 
5.1.2 / Appendix A.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 5.2 / Appendix A.2 ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? We report the average results of many experiments ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 5.1.2/ 5.2 / Appendix A.2 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
jourdan-etal-2023-cockatiel
COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP
https://aclanthology.org/2023.findings-acl.317
Transformer architectures are complex and their use in NLP, while it has engendered many successes, makes their interpretability or explainability challenging. Recent debates have shown that attention maps and attribution methods are unreliable (Pruthi et al., 2019; Brunner et al., 2019). In this paper, we present some of their limitations and introduce COCKATIEL, which successfully addresses some of them. COCKATIEL is a novel, post-hoc, concept-based, model-agnostic XAI technique that generates meaningful explanations from the last layer of a neural net model trained on an NLP classification task by using Non-Negative Matrix Factorization (NMF) to discover the concepts the model leverages to make predictions and by exploiting a Sensitivity Analysis to estimate accurately the importance of each of these concepts for the model. It does so without compromising the accuracy of the underlying model or requiring a new one to be trained. We conduct experiments in single and multi-aspect sentiment analysis tasks and we show COCKATIEL's superior ability to discover concepts that align with humans' on Transformer models without any supervision, we objectively verify the faithfulness of its explanations through fidelity metrics, and we showcase its ability to provide meaningful explanations in two different datasets. Our code is freely available: https://github.com/fanny-jourdan/cockatiel
# Cockatiel: Continuous Concept Ranked Attribution With Interpretable Elements For Explaining Neural Net Classifiers On Nlp Tasks Fanny Jourdan* IRIT, Université Paul-Sabatier Toulouse, France [email protected] Agustin Picard*† IRT Saint-Exupéry Toulouse, France [email protected] Thomas Fel Brown University, USA SNCF, Toulouse, France Laurent Risser IMT, Université Paul-Sabatier Toulouse, France Jean-Michel Loubes IMT, Université Paul-Sabatier Toulouse, France Nicholas Asher ![0_image_0.png](0_image_0.png) IRIT, Université Paul-Sabatier Toulouse, France ## Abstract Transformer architectures are complex and their use in NLP, while it has engendered many successes, makes their interpretability or explainability challenging. Recent debates have shown that attention maps and attribution methods are unreliable (Pruthi et al., 2019; Brunner et al., 2019). In this paper, we present some of their limitations and introduce COCKATIEL, which successfully addresses some of them. COCKATIEL is a novel, post-hoc, conceptbased, model-agnostic XAI technique that generates meaningful explanations from the last layer of a neural net model trained on an NLP classification task by using Non-Negative Matrix Factorization (NMF) to discover the concepts the model leverages to make predictions and by exploiting a Sensitivity Analysis to estimate accurately the importance of each of these concepts for the model. It does so without compromising the accuracy of the underlying model or requiring a new one to be trained. We conduct experiments in single and multiaspect sentiment analysis tasks and we show COCKATIEL's superior ability to discover concepts that align with humans' on Transformer models without any supervision, we objectively verify the faithfulness of its explanations through fidelity metrics, and we showcase its ability to provide meaningful explanations in two different datasets. Our code is freely available: https://github. com/fanny-jourdan/cockatiel ## 1 Introduction NLP models have undeniably gotten increasingly more complex since the introduction of the transformer architecture (Vaswani et al., 2017; Devlin * Denotes equal contribution †Work done as a Scalian employee, before April 2023 and joining IRT Saint-Exupéry. Figure 1: **An illustration of COCKATIEL**. Given some sentences of IMDB reviews, COCKATIEL (i) identifies concepts for prediction, *(ii)* ranks them, and (iii) gives the most important elements for each concept (to help us interpret the concept). et al., 2018; Liu et al., 2019a). This trend, which is also occurring in the domain of Computer Vision, has brought about a need for understanding how these models make their predictions. The presence of bias in these models could indeed be prejudicial in applications where the user's lives are at stake (De-Arteaga et al., 2019). Humans should be able to comprehend the reasons behind the model's decisions if these models are to gain general acceptance. Also, companies need to ensure that they are deploying algorithms which are free of harmful biases and that the explanations that they are obligated to issue are easily understandable by employees and end-users alike (Kop, 2021). Intelligibility by humans has then become a key topic in explainable AI systems. As AI systems become more sophisticated and are deployed in increasingly complex environments, the ability to provide clear and concise explanations of their decisions becomes more pressing. Researchers have proposed multiple solutions to address this challenge. 
The most straightforward approach analyzes how each part of the input influences the model's prediction. There are different ways of doing this, through perturbation (Ribeiro et al., 2016; Zeiler and Fergus, 2014) or by leveraging the gradients inside the neural network (Sundararajan et al., 2017a). However, these approaches suffer from being vulnerable to adversarial manipulation (Wang et al., 2020), from only performing partial input recovery (Adebayo et al., 2018) and from a general lack of stability with respect to the input (Ghorbani et al., 2019a). Another research path for transformer models harnesses the information in the attention maps of the transformers' layers to understand how the elements in the input relate to the output, implying that the attention mechanism is inherently interpretable. In spite of a number of supporters initially for this approach, there has been a recent wave of detractors of attentionbased explanations (Jain and Wallace, 2019; Pruthi et al., 2019; Serrano and Smith, 2019). More in line with our proposal work, researchers in the field of rationalization have proposed specific architectures to extract excerpts from whole inputs and predict a model's output based on these rationales (Lei et al., 2016; Jain et al., 2020; Chang et al., 2020; Yu et al., 2019; Bastings et al., 2019; Paranjape et al., 2020). These rationales can be seen as explanations that are sufficiently high-level to be easily understood by humans. However, they require to train an entirely new model. Only one rationale can also be found per input text, when there might intuitively be several predictions for a given prediction. Finally, these approaches use architectures that have mostly been left behind since the introduction of the transformer architecture, due to their inferior predictive capabilities. In line with the project of generating explanations that are meaningful to humans, concept-based explainable AI (XAI) has lately advanced the sate of the art. The pioneer method TCAV (Kim et al., 2018) goes beyond widespread attribution methods to create high-level explanations based on handpicked concepts. More recently, Fel et al. (2022) has extended this technique to discover automatically pertinent concepts inside the network's activation space and to find the parts of the input space that most align with each concept. Still, it has only been applied to convolutional architectures for image classification tasks. In this paper, we present COCKATIEL, a novel technique for generating reliable and meaningful explanations for NLP neural architectures for classification problems. It extends CRAFT (Fel et al., 2022) and our contributions can be summarized as follows: - We introduce a post-hoc explainability technique that is applicable to any neural network architecture containing non-negative activation functions. The technique is capable of explaining predictions of individual instances as well as providing insights of the model's general behavior. - We measure COCKATIEL's ability to discover concepts that align with those that Humans would employ in a sentiment analysis application. Although we did not train the model on data annotated with these human concepts, COCKATIEL's explanations find them with high accuracy. 
- We demonstrate that in addition to generating meaningful concepts for Humans, these explanations are faithful to the models: An explanation X provided by method C is faithful to a model M just in case if X is returned as a putative explanation of M's behavior by C, the X plays a causal role in M's behavior. - We provide examples of explanations on finetuned RoBERTa models (Liu et al., 2019a) and bidirectional LSTMs trained from scratch to show how the concept decomposition can be used to understand the inner workings of complex models. ## 2 Related Work 2.1 Explaining Through Rationalization Finding rationales in text refers to the process of identifying expressions that provide the key reasons or justifications that are provided for a particular claim or decision about that text. Lei et al. (2016) defined rationales as "a minimal set of text spans that are sufficient to support a given claim or decision". They should satisfy two desiderata: they should be interpretable, and they should reach nearly the same prediction as the original output. To do so, they use a generator network that finds interesting excerpts and an encoder network that generates predictions based on them. However, their scheme requires the use of reinforcement learning (Williams, 1992) for the optimization procedure. Bastings et al. (2019) proposed to include a reparametrization trick to allow for better gradient estimations without the need for reinforcement learning techniques, and a sparsity constraint to encourage the retrieval of minimal excerpts. Yu et al. (2019) and Paranjape et al. (2020) studied the problem of producing adequate rationales from a game-theoretic point of view. However, these models can be quite complex to train, as they either require a reparametrization trick or a reinforcement learning procedure. Jain et al. (2020) proposed to solve this problem by introducing a support model capable of producing continuous importance scores for instances of the input text, that the rationale extractor can use to decide whether an excerpt will make a good rationale or not. All these rationales will serve as an explanation for single instances, but won't explain how models predict whole classes. Chang et al. (2019) introduced a rationalization technique that allows for the retrieval of rationales for factual and counterfactual scenarios using three players. However, all these techniques are not model agnostic and require specific architectures, in particular rather simple architectures or LSTMs, and training procedures. But these architectures have been shown to not produce optimal results. ## 2.2 Concept-Based Explanations Concept-based explainability is a growing area of research in AI, focused on generating humanunderstandable explanations for the decisions made by machine learning models. One popular approach for generating concepts is TCAV (Kim et al., 2018). It uses gradient-based techniques to identify the important features of a model. However, TCAV relies on Human inputs, as it requires the user to manually specify the concepts to be tested. This can be time-consuming and may not always produce the most comprehensive explanations (Ghorbani et al., 2019b). Another approach, ACE (Ghorbani et al., 2019b), aims to automate the concept extraction process. ACE uses a clustering algorithm to identify interpretable concepts in the model's activations, without the need for Human input. 
While this approach has the potential to greatly reduce the time and effort required for concept extraction, the authors criticize their own reliance on pre-defined clustering algorithms, which may not always produce the most relevant or useful concepts. An alternative uses matrix factorization techniques, such as non-negative matrix factorization (NMF) (Lee and Seung, 1999), to identify interpretable factors in the data (Zhang et al., 2021; Fel et al., 2022). As presented in Section 3, our strategy is inspired by Fel et al. (2022) and is therefore a concept-based XAI method. In (Fel et al., 2022), the authors developed a framework for generating global and local explanations. They successfully tested the meaningfulness of these explanations and their capacity to help humans understand the model's behavior through psychological experiments. However, this approach has only been applied to convolutional neural networks for image classification tasks so far. For NLP applications, Bouchacourt and Denoyer (2019) proposed a self-interpretable neural architecture capable of simultaneously generating a prediction on classification tasks and its concept-based explanation. These concepts are learned without supervision from excerpts using a bidirectional LSTM during the training phase of the model, and the predictions are only based on the presence or absence of the individual concepts in the input sentences. Despite its capacity to generate interesting concepts, its low prediction accuracy on the classification task is a serious limitation (see Table 1). Going further, Antognini and Faltings (2021) introduced ConRAT, a technique that includes orthogonality, cosine similarity and knowledge distillation constraints, as well as a concept pruning procedure, to improve both the quality of the extracted concepts and the model's accuracy.

## 3 Cockatiel

In this section, we describe COCKATIEL, our concept-based XAI technique for generating human-understandable explanations for NLP models. It has three main components: (i) it uses Non-Negative Matrix Factorization (NMF) to discover the concepts that the neural network under study leverages to make predictions; *(ii)* it exploits Sensitivity Analysis to accurately estimate the importance of each of these concepts for the model; and *(iii)* it uses a black-box explainability technique to generate instance-wise explanations at a per-word and per-clause level. Fig. 2 presents a schematic outline of COCKATIEL.

![3_image_0.png](3_image_0.png)

Notation. In a supervised learning framework, we assume that a neural network model f : X^n → Y^n has already been trained for some classification task. We denote by (x_1, ..., x_n) ∈ X^n a set of n input texts and (y_1, ..., y_n) ∈ Y^n their associated labels. We consider f to be a composition of h, the last embedding of x (*i.e.* the last layer of the feature extractor model), and c, the classification function, f(x) = c ◦ h(x) with h(x) ∈ R^p. COCKATIEL will factorize h through NMF, so we require h to be non-negative, i.e. h(x) ≥ 0 ∀x ∈ X. This constraint is typically verified when the last layer has an activation function such that σ(x) ≥ 0, which is the case for (but not limited to) layers or blocks using *ReLU*.

## 3.1 Unsupervised Concept Discovery - "Concept Part"

COCKATIEL discovers concepts without supervision by factorizing the neural network's intermediary activations with an NMF algorithm.
Because we are factorizing h, we can generate explanations on embeddings without needing to deal with the complexities of attention layers (Pruthi et al., 2019); nor do we have to deal with the non-identifiability of transformer models (Brunner et al., 2019). Thus, the concept extraction phase of our method does not depend on the specificities of attention. We will address this later on in Section 3.3 to be able to generate our instance-level explanations.

NMF algorithm: We choose an excerpt-extraction function τ1 to generate a database of excerpts coming from texts that the model places in the desired class dc - i.e. Xi = τ1(xi) such that f(xi) = dc. Then, we place ourselves at the model's last layer and we extract the activations A = h(Xi) for each of the excerpts Xi in the database. With this information, we solve the constrained optimization problem engendered by the NMF algorithm:

$$(\mathbf{U},\mathbf{W})=\operatorname*{arg\,min}_{\mathbf{U}\geq0,\mathbf{W}\geq0}\ {\frac{1}{2}}\|\mathbf{A}-\mathbf{U}\mathbf{W}^{T}\|_{F}^{2},\quad(1)$$

where ∥ · ∥_F is the Frobenius norm. This allows us to decompose the high-rank matrix containing all activations A ∈ R^{n×p} into two low-rank matrices U ∈ R^{n×r} and W ∈ R^{p×r}. Intuitively, this corresponds to W being a matrix whose columns represent the concepts that we will use to generate explanations, and U being a matrix containing the coefficients quantifying the presence of each concept. These matrices are built so as to minimize the reconstruction error ½∥A − UW^T∥²_F, enforcing the relevance of the concepts, and with a non-negativity constraint on each matrix, thus encouraging sparsity in their elements. It is important to note that these coefficients u_{ij} ∈ R_+, so the presence of a concept can be determined by where its value stands in the concept's coefficient distribution. In practice, we have found that fixing a threshold at the quantile representing the 10% highest values leads to accurate and easy-to-interpret explanations.

![4_image_0.png](4_image_0.png)

Choice of τ1: As we want the concepts to be descriptive enough to convey an abstraction but short enough to only contain one, we work with excerpts chosen by an excerpt-extraction function τ1. The choice of τ1, which should depend on the dataset and the text's format, heavily impacts the type of explanations that we are able to generate. We have identified 3 possible τ1 functions: (i) take the full text; *(ii)* split the text into sentences (of at least 6 words); *(iii)* split the text into clauses. Linguistically, it doesn't make sense to take smaller units such as one or two words, since their meaning is typically too unfocused to provide a real explanation. We therefore chose τ1 to respond specifically to each use-case. If we want to capture the mood of whole inputs, we can designate the inputs as the excerpts, and then interpret them by leveraging the local part of our method. If we instead wish to extract more simple but structured concepts, we can choose τ1 to pick sentences of at least 6 words ending in a full stop. The first condition is necessary in the case of the *beer review* dataset, which is composed of short sentences containing very simple descriptions. For this dataset, using only very short excerpts would fail to convey the complexity of the ideas conveyed by the concepts. In this paper, we present results using these two excerpt-extraction functions.
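To make the concept part concrete, the following is a minimal sketch of how the factorization of Eq. (1) could be run with scikit-learn's NMF. It is not the authors' implementation: the `embed` wrapper (returning the non-negative last-layer activations h) and the hyper-parameters are assumptions made for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF

def discover_concepts(excerpts, embed, n_concepts=10):
    """Factorize non-negative activations A ~= U W^T, as in Eq. (1).

    excerpts : list of text excerpts produced by tau_1 for one class.
    embed    : callable mapping a list of texts to an (n, p) array of
               non-negative last-layer activations h(X).
    Returns U (n, r) concept coefficients and W (p, r) concept directions.
    """
    A = np.asarray(embed(excerpts))          # (n, p), non-negative thanks to ReLU
    assert (A >= 0).all(), "h(x) must be non-negative for the NMF"
    nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=500)
    U = nmf.fit_transform(A)                 # coefficients, shape (n, r)
    W = nmf.components_.T                    # concept directions, shape (p, r)
    return U, W

def concept_presence(U, quantile=0.90):
    """Flag an excerpt as expressing concept k when its coefficient lies in
    the top 10% of that concept's coefficient distribution."""
    thresholds = np.quantile(U, quantile, axis=0)   # one threshold per concept
    return U >= thresholds                          # boolean (n, r) matrix
```

The `concept_presence` helper mirrors the top-decile rule from the paragraph above: an excerpt is considered to contain concept k when its coefficient u_{ik} falls among the 10% highest values for that concept.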
## 3.2 Concept Importance Estimation - "Ranking Part" A common issue when utilizing concept extraction methods is the discrepancy between concepts deemed relevant by humans and those utilized by the model for classification. To mitigate the potential for confirmation bias during the concept analysis phase, we estimate the overall importance of the extracted concepts. To determine which concept has the most significant impact on the model output, we use a counterfactual reasoning (Peters et al., 2017; Pearl et al., 2016), and then use sensitivity analysis (Cukier et al., 1973; Iooss and Lemaître, 2015). A classic strategy in this area is the use of total Sobol indices (Sobol, 1993). This method captures the importance of a concept, along with its interactions with other concepts, on the model output by calculating the expected variance that would remain if all the indices of the masks except Mi were fixed. Definition 3.1 (**Total Sobol indices**). *The total* Sobol index STi*, which measures the contribution* of a concept Ui *as well as its interactions of any* order with any other concepts to the model output variance, is given by: _variance_, is given by. $$\mathcal{S}_{T_{i}}=\frac{\mathbb{E}_{M\sim i}\left(\mathbb{V}_{M_{i}}(\mathbf{Y}|\mathbf{M}_{\sim i})\right)}{\mathbb{V}(\mathbf{Y})}\tag{2}$$ $$=\frac{\mathbb{E}_{M\sim i}\left(\mathbb{V}_{M_{i}}(c((\mathbf{U}\odot\mathbf{M})\mathbf{W}^{T})|\mathbf{M}_{\sim i})\right)}{\mathbb{V}(c((\mathbf{U}\odot\mathbf{M})\mathbf{W}^{T}))}.\tag{3}$$ To estimate the importance of a concept Ui, we measure the fluctuations of the model output c(UWT) in response to perturbations of the concept coefficient Ui. Specifically, we use a sequence of random variables M to introduce concept fluctuations and reconstruct a perturbed activation A˜ = (U ⊙ M)WT. We then propagate this perturbed activation to the model output Y = c(A˜). An important concept will have a large variance in the model output, while an unused concept will barely change it. The method for calculating (2) and (3) exploits the Sobol-Hoeffding decomposition and is in the supplementary materials (appendix A). There are already a plethora of different techniques that allow us to compute this index efficiently (Saltelli et al., 2010; Marrel et al., 2009; Janon et al., 2014; Owen, 2013; Tarantola et al., 2006). But concretely, we estimate the total Sobol indices using the Jansen estimator (Janon et al., 2014), a widely recognized efficient method (Puy et al., 2022). The Jansen estimator is commonly utilized in conjunction with a Monte Carlo sampling strategy, but we improve over Monte Carlo by using a Quasi-Monte Carlo sampling strategy. This technique generates sample sequences with low discrepancy, resulting in a more rapid and stable convergence rate (Gerber, 2015). ## 3.3 Instance-Level Explanation Generation - "Interpretable Elements Part" In this part, we interpret the concepts found previously. To do this, we find which words and clauses are associated with each concept. We adapt Occlusion (Zeiler and Fergus, 2014): a black-box attribution method that works by masking each word looking at the impact on the model output. In this case, to get an idea of the importance of each word for a given concept, we mask words in a sentence and measure the effect of the new sentence (without the words) on the concept. This operation can be performed at word or clause level - i.e. mask words or whole clauses - to obtain explanations that are more or less fine-grained depending on the application. 
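To illustrate the ranking part of §3.2 above, here is a minimal Jansen-style estimator of total Sobol indices over concept masks. It is a simplified sketch rather than the authors' procedure: it draws plain Monte Carlo masks instead of the Quasi-Monte Carlo sequences used in the paper, and `classifier` (the classification head c applied to reconstructed activations) is an assumed callable.

```python
import numpy as np

def total_sobol_indices(U, W, classifier, n_designs=64, seed=0):
    """Jansen-style estimate of the total Sobol index of each concept.

    U, W       : NMF factors of shape (n, r) and (p, r) for one class.
    classifier : callable mapping reconstructed activations (n, p) to a
                 per-excerpt score for the class of interest.
    Returns an array of r importance scores (larger = more influential).
    """
    rng = np.random.default_rng(seed)
    n, r = U.shape

    def output(mask):
        # apply one mask to the concept coefficients and average the scores
        return classifier((U * mask) @ W.T).mean()

    A = rng.random((n_designs, r))            # first mask design
    B = rng.random((n_designs, r))            # independent second design
    f_A = np.array([output(m) for m in A])
    variance = f_A.var() + 1e-12
    S_T = np.zeros(r)
    for i in range(r):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]                  # resample only concept i's mask
        f_AB = np.array([output(m) for m in AB_i])
        S_T[i] = 0.5 * np.mean((f_A - f_AB) ** 2) / variance   # Jansen estimator
    return S_T
```

In practice, the returned scores can simply be sorted to obtain the concept ranking used in the rest of the method.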
Motivations: This choice has been shown to perform particularly well on NLP models (Fel et al., 2021a) and doesn't suffer from the inefficiency of having to sample a considerable number of masks for each explanation. Indeed, in (Fel et al., 2021a), the authors compared Occlusion to other explainability techniques that are commonly used in NLP, and they showed that it is more faithful to the model than Saliency (Simonyan et al., 2014), Grad-Input, SmoothGrad (Smilkov et al., 2017), Integrated Gradients (Sundararajan et al., 2017b), and their own Sobol method on both LSTM and BERT models. In addition, in the case of transformer models, using a black-box method such as Occlusion avoids manipulating the attention layers between the input and the activation matrix A, where our concepts are located. In doing so, we avoid the non-identifiability problem of transformer models (Pruthi et al., 2019).

Application: Empirically, we perform the following operations. For a sentence Xi, let Ai = h(Xi). We have a fixed W, calculated with the NMF, and Wk, the k-th concept of W. As before, we get the importance of the sentence Xi for the concept k:

$$U_{i}^{k}=\operatorname*{arg\,min}_{U\geq0}\ \frac{1}{2}\|A_{i}-U W_{k}^{T}\|_{F}^{2}.$$

Then, we remove the element j from the sentence i, obtaining X̃_{i−j} (i.e. we replace the tokenized feature by a zero). We thus have Ã_{i−j} = h(X̃_{i−j}), and:

$$\tilde{U}_{i-j}^{k}=\operatorname*{arg\,min}_{U\geq0}\ \frac{1}{2}\|\tilde{A}_{i-j}-U W_{k}^{T}\|_{F}^{2}.$$

Then, ϕ(k, i, j) quantifies the influence of the element j in the sentence i for the concept k:

$$\phi(k,i,j)=U_{i}^{k}-\tilde{U}_{i-j}^{k}.$$

For the visualisations (see e.g. Fig. 6), we color each element with the color of the concept for which it is most important. In addition, the darker the color, the more important the element is for the concept.

Choice of τ2: Just like in the case of the NMF, the choice of the form of the elements of the input to occlude will have an impact on the understandability of the explanations. This can be generalized via another excerpt-extraction function τ2, whose optimal shape will depend on the dataset, the text's format and the learned concepts (i.e. Occlusion shouldn't be applied at a per-clause level if the concepts were learned using a τ1 providing single words, so this first excerpt-extraction function must be taken into consideration). There is a certain trade-off between the granularity and the interpretability of the explanations, as illustrated in Figure 11 in the appendix, which contains some examples with different choices of τ2. In general, we advise trying different combinations of τi to find the desired level of granularity in the explanations for each use-case.

## 4 Experimental Evaluation

For all of our results, we fine-tuned RoBERTa-based (Liu et al., 2019a) models on each dataset. We ensured the non-negativity of at least one layer of the model by adding a ReLU activation after the first layer of the 1-hidden-layer, dense MLP of the classification head. For the qualitative analysis, we also tested COCKATIEL's performance on bidirectional LSTM models trained from scratch. More details about the implementations are left in appendix B.
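Returning to the instance-level part of §3.3, below is a minimal sketch of the per-word Occlusion computation of φ(k, i, j). The `embed` callable and the use of a placeholder mask token are assumptions made for illustration; the paper instead replaces the tokenized feature with a zero.

```python
import numpy as np
from scipy.optimize import nnls

def concept_coefficient(activation, w_k):
    """Solve min_{u >= 0} 1/2 ||a - u w_k^T||^2 for a single concept direction."""
    u, _ = nnls(w_k.reshape(-1, 1), activation)   # one-variable non-negative LS
    return u[0]

def occlusion_word_importance(tokens, embed, W, k, mask_token="<mask>"):
    """Compute phi(k, i, j) = U_i^k - U~_{i-j}^k for every token j of a sentence."""
    a_full = embed(" ".join(tokens))              # h(X_i), shape (p,)
    u_full = concept_coefficient(a_full, W[:, k])
    phi = []
    for j in range(len(tokens)):
        occluded = tokens[:j] + [mask_token] + tokens[j + 1:]
        a_occ = embed(" ".join(occluded))         # h(X~_{i-j})
        u_occ = concept_coefficient(a_occ, W[:, k])
        phi.append(u_full - u_occ)                # influence of token j on concept k
    return np.array(phi)
```

The same loop can be run over clauses instead of tokens to obtain the coarser-grained explanations discussed under the choice of τ2.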
Table 1: Comparison with rationalization baselines on the *beer reviews* dataset: accuracy, and per-aspect precision (P), recall (R) and F1 (F) of the concept/rationale that best matches each human-annotated aspect, for l = 20 and l = 10 concepts.

| Setting | Model | Acc. | Avg. P | Avg. R | Avg. F | App. P | App. R | App. F | Aroma P | Aroma R | Aroma F | Palate P | Palate R | Palate F | Taste P | Taste R | Taste F |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| l = 20 | RNP | 81.1 | 24.7 | 21.3 | 24.9 | 28.6 | 23.2 | 26.5 | 22.1 | 21.0 | 21.5 | 17.7 | 24.1 | 20.4 | 28.1 | 16.7 | 20.9 |
| l = 20 | RNP-3P | 80.5 | 26 | 21.8 | 23.3 | 30.4 | 25.6 | 27.8 | 19.3 | 20.4 | 19.8 | 10.3 | 12.0 | 11.1 | 43.9 | 28.4 | 34.5 |
| l = 20 | Intro-3P | 85.6 | 21 | 18.0 | 19.1 | 28.7 | 24.8 | 26.6 | 14.3 | 14.4 | 14.3 | 16.6 | 19.3 | 17.9 | 24.2 | 13.6 | 17.4 |
| l = 20 | InvRAT | 82.9 | 37.5 | 31.6 | 33.8 | 54.5 | 45.5 | 49.6 | 26.1 | 27.6 | 26.9 | 22.6 | 25.9 | 24.1 | 46.6 | 27.4 | 34.5 |
| l = 20 | ConRAT | 91.4 | 43.8 | 39.7 | 40.9 | 57.8 | 53.0 | 55.3 | 31.9 | 35.5 | 33.6 | 29.0 | 36.3 | 32.3 | 56.5 | 33.9 | 42.4 |
| l = 20 | Ours | 95.2 | 40.6 | 58.4 | 47 | 67.5 | 71.4 | 69.4 | 34.1 | 42.3 | 37.7 | 24.8 | 46.7 | 32.4 | 36.1 | 73.3 | 48.4 |
| l = 10 | RNP | 84.4 | 32.7 | 14.5 | 19.5 | 40.1 | 12.0 | 18.5 | 33.3 | 18.7 | 24.0 | 25.1 | 17.4 | 20.6 | 32.3 | 9.8 | 15.07 |
| l = 10 | RNP-3P | 83.1 | 28.4 | 13.2 | 17.8 | 41.8 | 19.2 | 26.3 | 22.2 | 12.4 | 15.9 | 16.5 | 10.4 | 12.7 | 33.2 | 10.6 | 16.1 |
| l = 10 | Intro-3P | 80.9 | 24 | 12.2 | 16.1 | 51.0 | 26.0 | 34.4 | 18.8 | 9.7 | 12.8 | 16.5 | 10.6 | 12.9 | 9.7 | 2.6 | 4.1 |
| l = 10 | InvRAT | 81.9 | 36.6 | 15.7 | 21.8 | 59.4 | 26.1 | 36.3 | 31.3 | 15.5 | 20.8 | 16.4 | 9.6 | 12.1 | 39.1 | 11.6 | 17.9 |
| l = 10 | ConRAT | 91.3 | 38.2 | 17.6 | 23.8 | 51.7 | 26.2 | 34.8 | 32.6 | 17.4 | 22.7 | 23.0 | 13.8 | 17.3 | 45.3 | 13.1 | 20.3 |
| l = 10 | Ours | 95.2 | 39.5 | 58.4 | 45.5 | 63.3 | 56.4 | 59.7 | 27.3 | 67.4 | 38.9 | 26 | 43.5 | 32.5 | 41.4 | 66.1 | 50.9 |

We will first analyze the meaningfulness of the discovered concepts by measuring their alignment with human annotations on the different aspects of a multi or single-aspect sentiment analysis task. Then, we will ensure that our explanations are faithful to the model through an adaptation of the insertion and deletion metrics to concept-based XAI. Finally, we will showcase some examples of explanations and of applications for our method.

## 4.1 Alignment With Human Concepts

![6_image_0.png](6_image_0.png)

Following the human-alignment evaluation in (Antognini and Faltings, 2021), we perform the following evaluation.

Beer Task: We will measure the extent to which our concepts overlap with the human annotations for the 4 different aspects of the multi-aspect *beer reviews* dataset (McAuley et al., 2012). This dataset contains reviews for beers with commentary and marks (from 0 to 5) on 5 different aspects: Appearance, Aroma, Palate, Taste and Overall. The model will be trained to predict whether the overall score is greater than 3 - i.e. a positive review of the beer - and will not have access to the labels for the other aspects. Additionally, the dataset includes 994 reviews with annotations indicating the position of these aspects in the text. The objective of this evaluation is to look for concepts that align with these annotations and measure their capacity to predict the location of each different aspect. In particular, we searched across the whole annotated dataset for the concepts whose F1 score for the prediction of each aspect was maximal.
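As a rough illustration of this search, the sketch below scores every concept against every annotated aspect and keeps, per aspect, the concept with maximal F1. It works with excerpt-level boolean presence vectors (e.g., from the top-decile rule of §3.1), which is a simplification of the span-level annotations actually provided in the dataset.

```python
from sklearn.metrics import precision_recall_fscore_support

def best_concept_per_aspect(concept_presence, aspect_labels):
    """For every human-annotated aspect, find the concept with maximal F1.

    concept_presence : (n_excerpts, r) boolean matrix of concept presence.
    aspect_labels    : dict mapping an aspect name to an (n_excerpts,) boolean
                       vector indicating whether the aspect is annotated there.
    """
    results = {}
    for aspect, y_true in aspect_labels.items():
        best = None
        for k in range(concept_presence.shape[1]):
            p, r, f, _ = precision_recall_fscore_support(
                y_true, concept_presence[:, k],
                average="binary", zero_division=0)
            if best is None or f > best["f1"]:
                best = {"concept": k, "precision": p, "recall": r, "f1": f}
        results[aspect] = best
    return results
```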
It is important to note that this does not take into account the extent to which these concepts are important for the model's prediction; it only serves as an automated test for determining whether the explainability technique is capable of generating understandable concepts. We calculate the precision, recall and F1 scores for each aspect, and we do so with l = 10 and l = 20 concepts. We remind the reader that, unlike the baselines, our method is a post-hoc technique: the model does not need to be re-trained, and changing the number of concepts takes only a few minutes of compute on GPU. In Table 1, we present a comparison of our results to those obtained with some rationalization techniques: RNP (Lei et al., 2016), RNP-3P (Yu et al., 2019), InvRAT (Chang et al., 2020) and ConRAT (Antognini and Faltings, 2021) for the task on *Beer*. We demonstrate that not only does our model achieve the highest accuracy, but it also outperforms all the other methods in its ability to accurately recognize the human annotations, whether measured by precision, recall or F1 score.

![7_image_0.png](7_image_0.png)

## 4.2 Evaluation Of Explanation Faithfulness

We have demonstrated that we can generate concepts that greatly align with humans', but to legitimately serve as an explainability technique, we must also guarantee its faithfulness. This element is key, as the concepts leveraged by the model may not perfectly align with humans in every task, but we still want the explanation to reflect what the model is doing. An XAI method is said to be faithful if its explanations faithfully convey the information that the model is using to generate its predictions. In (Ghorbani et al., 2019b; Zhang et al., 2021), the authors proposed an adaptation of the deletion and insertion explainability metrics to concept-based methods. In essence, they proposed to gradually mask/add the concepts (following their importance) and to observe the impact on the logits. If the concepts are indeed important for the model's prediction, the logits should drastically decrease/increase as vital information for the prediction is progressively erased/added (a minimal sketch of this procedure is given below).

To evaluate the explanation faithfulness and present qualitative results, we used the IMDB dataset (Maas et al., 2011). The IMDB dataset is a collection of 50K movie reviews from the Internet Movie Database (IMDB) website. For each review, IMDB specifies whether it is positive or negative (the label). The dataset is balanced, with 25K positive and 25K negative reviews. We used a RoBERTa model to predict the label from the reviews. In Fig. 5, we showcase the plots for these two fidelity metrics on the *IMDB Reviews* dataset. We observe that the concepts are indeed important for the model's predictions. In both plots, the curve corresponding to the concepts ranked in order of importance according to our Sobol method is better than a random ranking of these concepts, and much better than if we had taken the order of Sobol importance in reverse. In particular, to obtain statistically significant results, we took 10 sets of 10k reviews and computed the mean and standard deviation values for both of the metrics.

## 4.3 Qualitative Evaluation

A model with good accuracy, like RoBERTa, yields very good explanations. Others, like the LSTM (see appendix C), do not do as well and do not yield good explanations. This is not a surprise: if the model predicts poorly, the concepts it uses to predict will necessarily be poor. Similarly, if the model is very basic, it uses simple concepts to predict.
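The deletion metric referenced in §4.2 can be sketched as follows: concepts are zeroed out in order of importance and the average class score is tracked. Here `classifier` is again an assumed callable mapping reconstructed activations to class scores, and the insertion curve is obtained symmetrically by starting from all-zero coefficients and adding concepts back in.

```python
import numpy as np

def concept_deletion_curve(U, W, classifier, importance_order):
    """Zero out concepts one by one, most important first, and record the
    average class score; a faithful ranking makes this curve drop quickly."""
    U_masked = U.copy()
    scores = [classifier(U_masked @ W.T).mean()]      # no concept deleted yet
    for k in importance_order:                        # e.g. np.argsort(-S_T)
        U_masked[:, k] = 0.0                          # delete concept k
        scores.append(classifier(U_masked @ W.T).mean())
    return np.array(scores)
```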
The reviews in IMDB are also well written, so it is more comfortable to analyse sentences and words to properly call the concepts found by the NMF. In Fig. 6, we can see the 3 most important concepts for each label class. Each of its concepts "*the favorite movie*", "technically good/interesting movie", "*good comedie of family movie*" for the positive class or "*the worst movie*", "*middling movie*", "*boring/stupid movie*" for the negative class are ideas that seem natural and which structures our vision of why a film would be positive or negative. ## 5 Conclusion In this paper, we revisited concept-based explainability techniques and presented COCKATIEL, a post-hoc, model agnostic method capable of generating meaningful and faithful explanations for NLP models trained on classification tasks. The method has three parts: *(i) a concept part*, using Non-Negative Matrix Factorization to discover the ![8_image_0.png](8_image_0.png) concept, *(ii) a ranking part*, using Total Sobol indices to measure the influence of each concept, and (iii) an interpretable elements part, using a blackbox attribution method to quantify the impact of each element out of each concept. We measured COCKATIEL's ability to discover concepts that align with those humans and obtained better scores than state-of-the-art methods. We demonstrated that in addition to generating meaningful concepts for humans, these explanations are faithful to the models. Finally, we gave some qualitative examples of explanations for different models to understand the method "in practice". ## Limitations We have demonstrated that COCKATIEL is capable of generating meaningful explanations that align with human concepts, and that they tend to explain rather faithfully the model. The concepts extracted of NMF are abstract and we interpret them using part 3 of the method. However, for the interpretation, we rely on our own understanding of the concept linked to the examples of words or clauses associated with the concept. This part therefore requires human supervision and will not be identical depending on who is looking. One way to add some objectivity to this concept labeling task would be to leverage topic modeling models to find a common theme to each concept. In addition, τ1 and τ2 were chosen empirically to allow for an adequate concept complexity/human understandability trade-off in our examples. We recognize that this choice might not be optimal in every situation, as more complex concept may be advantageous in some cases, and more easily understandable ones, in others. We surmise that this choice might also depend on the amount of concepts and on the model's expressivity. Finally, we have studied the meaningfulness and fidelity of our generated concepts, but ideally, the simulatability should also be tested. This property measures the explanation's capacity to help humans predict the model's behavior, and has recently caught the attention of the XAI community (Fel et al., 2021b; Shen and Huang, 2020; Nguyen, 2018; Hase and Bansal, 2020). We leave this analysis for future works. ## Ethics Statement This work contributes to the field of explainability. This field has strong links with the field of fairness, because explaining a model makes it possible to understand its biases. Transformers are a type of model that are little studied in explainability and yet it is widely used. COCKATIEL is a tool to explain transformers and therefore avoid using biased models against the minority. 
It is important to remark that this need for understanding automatic decisions start being enforced by Law, as for instance by the so-called *AI act*1 of the European Union. As a consequence, companies need to ensure that they are deploying algorithms which are free of harmful biases and that the explanations that they're obligated to issue are easily understandable by employees and end-users alike. ## Acknowledgements We thank the ANR-3IA Artificial and Natural Intelligence Toulouse Institute (ANITI) funded by the ANR-19-PI3A-0004 grant for research support. We also thank the reviewers for their insightful comments. This work was conducted as part of the DEEL2 project. ## References Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. 2018. Sanity checks for saliency maps. Advances in neural information processing systems, 31. Diego Antognini and Boi Faltings. 2021. Rationalization through concepts. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 761–775, Online. Association for Computational Linguistics. Jasmijn Bastings, Wilker Aziz, and Ivan Titov. 2019. Interpretable neural predictions with differentiable binary variables. *arXiv preprint arXiv:1905.08160*. Diane Bouchacourt and Ludovic Denoyer. 2019. Educe: Explaining model decisions through unsupervised concepts extraction. arXiv preprint arXiv:1905.11852. Gino Brunner, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, and Roger Wattenhofer. 2019. On identifiability in transformers. arXiv preprint arXiv:1908.04211. Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola. 2019. A game theoretic approach to class-wise selective rationalization. Advances in neural information processing systems, 32. Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola. 2020. Invariant rationalization. In *International* Conference on Machine Learning, pages 1448–1458. PMLR. RI Cukier, CM Fortuin, Kurt E Shuler, AG Petschek, and JH Schaibly. 1973. Study of the sensitivity of coupled reaction systems to uncertainties in rate coefficients. i theory. *The Journal of chemical physics*, 59(8):3873–3878. Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. In *proceedings of the Conference on Fairness,* Accountability, and Transparency, pages 120–128. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Thomas Fel, Rémi Cadène, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, and Thomas Serre. 2021a. Look at the variance! efficient black-box explanations with sobol-based sensitivity analysis. Advances in Neural Information Processing Systems, 34. Thomas Fel, Julien Colin, Rémi Cadène, and Thomas Serre. 2021b. What i cannot predict, i do not understand: A human-centered evaluation framework for explainability methods. arXiv preprint arXiv:2112.04417. Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, and Thomas Serre. 2022. Craft: Concept recursive activation factorization for explainability. arXiv preprint arXiv:2211.10154. Mathieu Gerber. 2015. On integration methods based on scrambled nets of arbitrary size. *Journal of Complexity*, 31(6):798–816. 
Amirata Ghorbani, Abubakar Abid, and James Zou. 2019a. Interpretation of neural networks is fragile. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 3681–3688. Amirata Ghorbani, James Wexler, James Y Zou, and Been Kim. 2019b. Towards automatic concept-based explanations. *Advances in Neural Information Processing Systems*, 32. Peter Hase and Mohit Bansal. 2020. Evaluating explainable ai: Which algorithmic explanations help users predict model behavior? arXiv preprint arXiv:2005.01831. Bertrand Iooss and Paul Lemaître. 2015. A review on global sensitivity analysis methods. *Uncertainty* management in simulation-optimization of complex systems, pages 101–122. Sarthak Jain and Byron C Wallace. 2019. Attention is not explanation. *arXiv preprint arXiv:1902.10186*. Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, and Byron C Wallace. 2020. Learning to faithfully rationalize by construction. arXiv preprint arXiv:2005.00115. Alexandre Janon, Thierry Klein, Agnes Lagnoux, Maëlle Nodet, and Clémentine Prieur. 2014. Asymptotic normality and efficiency of two sobol index estimators. *ESAIM: Probability and Statistics*, 18:342– 364. Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. 2018. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In *International conference on machine learning*, pages 2668–2677. PMLR. Mauritz Kop. 2021. Eu artificial intelligence act: The european approach to ai. Stanford-Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust . . . . Daniel D Lee and H Sebastian Seung. 1999. Learning the parts of objects by non-negative matrix factorization. *Nature*, 401(6755):788–791. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. arXiv preprint arXiv:1606.04155. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019a. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In *Proceedings of the 49th Annual Meeting of the* Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics. Joel Mackenzie, Rodger Benham, Matthias Petri, Johanne R. Trippas, J. Shane Culpepper, and Alistair Moffat. 2020. Cc-news-en: A large english news corpus. In Proceedings of the 29th ACM International Conference on Information amp; Knowledge Management, CIKM '20, page 3077–3084, New York, NY, USA. Association for Computing Machinery. Amandine Marrel, Bertrand Iooss, Beatrice Laurent, and Olivier Roustant. 2009. Calculations of sobol indices for the gaussian process metamodel. *Reliability* Engineering & System Safety, 94(3):742–751. Julian McAuley, Jure Leskovec, and Dan Jurafsky. 2012. Learning attitudes and attributes from multi-aspect reviews. In 2012 IEEE 12th International Conference on Data Mining, pages 1020–1025. IEEE. Dong Nguyen. 2018. Comparing automatic and human evaluation of local explanations for text classification. 
In *Proceedings of the 2018 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1069–1078. Art B Owen. 2013. Better estimation of small sobol'sensitivity indices. *ACM Transactions on Modeling and Computer Simulation (TOMACS)*, 23(2):1– 17. Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. An information bottleneck approach for controlling conciseness in rationale extraction. arXiv preprint arXiv:2005.00652. Judea Pearl, Madelyn Glymour, and Nicholas P Jewell. 2016. *Causal inference in statistics: A primer*. John Wiley & Sons. Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. 2017. *Elements of causal inference: foundations and* learning algorithms. MIT press. Danish Pruthi, Mansi Gupta, Bhuwan Dhingra, Graham Neubig, and Zachary C Lipton. 2019. Learning to deceive with attention-based explanations. *arXiv* preprint arXiv:1909.07913. Arnald Puy, William Becker, Samuele Lo Piano, and Andrea Saltelli. 2022. A comprehensive comparison of total-order estimators for global sensitivity analysis. International Journal for Uncertainty Quantification, 12(2). Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Model-agnostic interpretability of machine learning. *arXiv preprint arXiv:1606.05386*. Andrea Saltelli, Paola Annoni, Ivano Azzini, Francesca Campolongo, Marco Ratto, and Stefano Tarantola. 2010. Variance based sensitivity analysis of model output. design and estimator for the total sensitivity index. *Computer physics communications*, 181(2):259–270. Sofia Serrano and Noah A Smith. 2019. Is attention interpretable? *arXiv preprint arXiv:1906.03731*. Hua Shen and Ting-Hao Huang. 2020. How useful are the machine-generated interpretations to general users? a human evaluation on guessing the incorrectly predicted labels. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, volume 8, pages 168–172. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Deep inside convolutional networks: Visualising image classification models and saliency maps. In In Workshop at International Conference on Learning Representations. Citeseer. Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. 2017. Smoothgrad: removing noise by adding noise. *arXiv preprint* arXiv:1706.03825. Ilya M Sobol. 1993. Sensitivity analysis for non-linear mathematical models. *Mathematical modelling and* computational experiment, 1:407–414. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017a. Axiomatic attribution for deep networks. In *International conference on machine learning*, pages 3319– 3328. PMLR. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017b. Axiomatic attribution for deep networks. In International conference on machine learning, pages 3319–3328. PMLR. Stefano Tarantola, Debora Gatelli, and Thierry Alex Mara. 2006. Random balance designs for the estimation of first order global sensitivity indices. *Reliability Engineering & System Safety*, 91(6):717–727. Trieu H Trinh and Quoc V Le. 2018. A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
*Advances in neural information processing* systems, 30. Junlin Wang, Jens Tuyls, Eric Wallace, and Sameer Singh. 2020. Gradient-based analysis of nlp models is manipulable. *arXiv preprint arXiv:2010.05419*. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. *Machine learning*, 8(3):229–256. Mo Yu, Shiyu Chang, Yang Zhang, and Tommi S Jaakkola. 2019. Rethinking cooperative rationalization: Introspective extraction and complement control. *arXiv preprint arXiv:1910.13294*. Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In *European conference on computer vision*, pages 818–833. Springer. Ruihan Zhang, Prashan Madumal, Tim Miller, Krista A Ehinger, and Benjamin IP Rubinstein. 2021. Invertible concept-based explanations for cnn models with non-negative concept activation vectors. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 11682–11690. Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In arXiv preprint arXiv:1506.06724. ## A Sobol Technique In Details Let (Ω, A, P) be a probability space of possible concept perturbations. To build these concept perturbations, we use M = (M1, . . . , Mr) *∈ M ⊆* [0, 1]r, i.i.d. stochastic masks on the original vector of concept coefficients Ub ∈ R r. We define concept perturbation U = π(Ub ,M) with the perturbation operator π(U˜ ,M) = U˜ ⊙M +(1−M)µ with ⊙ the Hadamard product and µ ∈ R a baseline value, here zero. We denote the set U = {1*, . . . , r*}, u a subset of U, its complementary ∼ u and E(·) the expectation over the perturbation space. We define c : A → R, the classification function and we assume that c ∈ L 2(A, P) i.e. |E(c(U))| < +∞. The Hoeffding decomposition gives c in function of summands of increasing dimension, denoting cu the partial contribution of the concepts Uu = (Ui)i∈u to the score c(U) : $$\mathbf{c}(\mathbf{U})=\mathbf{c}_{\varnothing}$$ $$+\sum_{i}^{r}\mathbf{c}_{i}\left(U_{i}\right)$$ $$+\sum_{1\leq i<j\leq r}\mathbf{c}_{i,j}\left(U_{i},U_{j}\right)+\cdots+\mathbf{c}_{1,...,r}\left(U_{1},...,U_{r}\right)$$ $$=\sum_{\mathbf{u}\subseteq\mathcal{U}}\mathbf{c}_{\mathbf{u}}\left(U_{\mathbf{u}}\right)$$ Eq. 4 consists of 2 rterms and is unique under the orthogonality constraint: $$\mathbb{E}\left(\mathbf{c_{u}}\left(\mathbf{U_{u}}\right)\mathbf{c_{v}}\left(\mathbf{U_{v}}\right)\right)=0,$$ $$\forall(\mathbf{u},\mathbf{v})\subseteq{\mathcal{U}}^{2}{\mathrm{~s.t.~}}\mathbf{u}\neq\mathbf{v}$$ Moreover, thanks to orthogonality, we have cu (Uu) = E (c(U) | Uu) −Pv⊂u cv (Uv) and we can write model variance as: $$\begin{split}\mathbb{V}(\mathbf{c}(\mathbf{U}))&=\sum_{i}^{r}\mathbb{V}\left(\mathbf{c}_{i}\left(U_{i}\right)\right)\\ &\quad+\sum_{1\leq i<j\leq r}\mathbb{V}\left(\mathbf{c}_{i,j}\left(U_{i},U_{j}\right)\right)\\ &\quad+\ldots+\mathbb{V}\left(\mathbf{c}_{1,\ldots,r}\left(U_{1},\ldots,U_{r}\right)\right)\\ &=\sum_{\mathbf{u}\subseteq\mathbf{U}}\mathbb{V}\left(\mathbf{c}_{\mathbf{u}}\left(\mathbf{U}_{\mathbf{u}}\right)\right)\end{split}\tag{5}$$ Eq. 5 allows us to write the influence of any subset of concepts u as its own variance. This yields, after normalization by V(c(U)), the general definition of Sobol' indices. Definition A.1. 
Sobol indices (Sobol, 1993). *The sensitivity index* S_u, *which measures the contribution of the concept set* U_u *to the model response* f(U) *in terms of fluctuation, is given by:*

$$\mathcal{S}_{\mathbf{u}}=\frac{\mathbb{V}(\mathbf{c}_{\mathbf{u}}(\mathbf{U}_{\mathbf{u}}))}{\mathbb{V}(\mathbf{c}(\mathbf{U}))}=\frac{\mathbb{V}(\mathbb{E}(\mathbf{c}(\mathbf{U})\mid\mathbf{U}_{\mathbf{u}}))-\sum_{\mathbf{v}\subset\mathbf{u}}\mathbb{V}(\mathbb{E}(\mathbf{c}(\mathbf{U})\mid\mathbf{U}_{\mathbf{v}}))}{\mathbb{V}(\mathbf{c}(\mathbf{U}))}\quad(6)$$

Sobol indices provide a numerical assessment of the importance of various subsets of concepts in relation to the model's decision-making process. Thus, we have $\sum_{\mathbf{u}\subseteq\mathcal{U}}\mathcal{S}_{\mathbf{u}}=1$. Additionally, the use of Sobol' indices allows for the efficient identification of higher-order interactions between features. Thus, we can view the total Sobol indices defined in Eq. 2 as the sum of all the Sobol indices containing the concept i: $\mathcal{S}_{T_i}=\sum_{\mathbf{u}\subseteq\mathcal{U},\,i\in\mathbf{u}}\mathcal{S}_{\mathbf{u}}$.

## B Implementation Details

We trained 3 different models. For each model, we performed a single run, and we split the datasets into 70% for training, 10% for validation and 20% for testing.

## B.1 Trained RoBERTa On Beer Dataset

We used a RoBERTa base model pretrained by Liu et al. (2019b) and available on Hugging Face (all the information on the pretraining can be found in the paper). The model was pretrained on the union of five datasets:

- BookCorpus (Zhu et al., 2015), a dataset containing 11,038 unpublished books;
- English Wikipedia (excluding lists, tables and headers);
- CC-News (Mackenzie et al., 2020), a dataset containing 63 million English news articles crawled between September 2016 and February 2019;
- OpenWebText (Radford et al., 2019), an open-source recreation of the WebText dataset used to train GPT-2;
- Stories (Trinh and Le, 2018), a dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas.

We then trained the model on the Beer dataset. The model was trained on 2 GPUs for 10 epochs with a batch size of 32 and a sequence length of 512. The optimizer was AdamW with a learning rate of 1e-5, β1 = 0.9, β2 = 0.98, and ϵ = 1e-6.

## B.2 Trained RoBERTa On IMDB Dataset

We used a RoBERTa model already fine-tuned on IMDB from Hugging Face. This model used the pretraining presented above; we fine-tuned it for 2 epochs, with a batch size of 16 and an Adam optimizer with a learning rate of 2e-5, β1 = 0.9, β2 = 0.999 and ϵ = 1e-8.

## B.3 Trained LSTM On IMDB Dataset

We created our LSTM with:

    SentimentRNN(
      (embedding): Embedding(1001, 512)
      (lstm): LSTM(512, 128, num_layers=4, batch_first=True, bidirectional=True)
      (dropout): Dropout(p=0.3, inplace=False)
      (fc_1): Linear(in_features=128, out_features=128, bias=True)
      (relu): ReLU()
      (fc_2): Linear(in_features=128, out_features=2, bias=True)
      (sig): Softmax(dim=1)
    )

Then, we trained it on the IMDB dataset. The model was trained on 2 GPUs for 5 epochs with a batch size of 128 and a sequence length of 512. The optimizer was Adam with a learning rate of 1e-4.

## C LSTM Example

LSTMs are much less complex than RoBERTa, and as such, we can expect them to leverage fewer and much simpler concepts for their predictions. In particular, COCKATIEL identified 3 concepts that monopolized the importance score for each class on the RoBERTa model. For the positive class, we had "*the favorite movie*", "*technically good/interesting movie*" and "*good comedy or family movie*". For the negative class, we also had "*the worst movie*", "*middling movie*" and "*boring movie*".
In contrast, in the case of the LSTM (see figure 8), COCKATIEL detected a single important concept per predicted class. For the positive class, this concept encompasses the positive language elements mostly, and for the negative class, *the negative elements*. This is a much more basic view of the review classification problem, and COCKATIEL allows us to confirm our intuitions about the richness of the embedding learned by the LSTM. ## D Other Examples Of Cockatiel Explanations For Roberta ![13_image_0.png](13_image_0.png) ![13_image_3.png](13_image_3.png) ![13_image_1.png](13_image_1.png) ![13_image_2.png](13_image_2.png) ![14_image_0.png](14_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations section, after conclusion. ✓ A2. Did you discuss any potential risks of your work? In limitations section. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** We Did Use Available And Standard Datasets. ✓ B1. Did you cite the creators of artifacts you used? Section 4 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? The two datasets used are well known and public domain. Their intended use is known. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The two datasets used are well known and public domain. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix 2 ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix 2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix 2 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Appendix 2 ✓ C4. 
If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix 2 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** We use human annotations but they are only in a used dataset. We did not collect human annotations. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
hsu-etal-2023-code
Code-Switched Text Synthesis in Unseen Language Pairs
https://aclanthology.org/2023.findings-acl.318
Existing efforts on text synthesis for code-switching mostly require training on code-switched texts in the target language pairs, limiting the deployment of the models to cases lacking code-switched data. In this work, we study the problem of synthesizing code-switched texts for language pairs absent from the training data. We introduce GLOSS, a model built on top of a pre-trained multilingual machine translation model (PMMTM) with an additional code-switching module. This module, either an adapter or extra prefixes, learns code-switching patterns from code-switched data during training, while the primary component of GLOSS, i.e., the PMMTM, is frozen. The design of only adjusting the code-switching module prevents our model from overfitting to the constrained training data for code-switching. Hence, GLOSS exhibits the ability to generalize and synthesize code-switched texts across a broader spectrum of language pairs. Additionally, we develop a self-training algorithm on target language pairs to further enhance the reliability of GLOSS. Automatic evaluations on four language pairs show that GLOSS achieves at least 55% relative BLEU and METEOR score improvements compared to strong baselines. Human evaluations on two language pairs further validate the success of GLOSS.
# Code-Switched Text Synthesis In Unseen Language Pairs I-Hung Hsu1∗ Avik Ray2 Shubham Grag2 **Nanyun Peng**2,3 **Jing Huang**2 1Information Science Institute, University of Southern California 2 Amazon Alexa AI 3 University of California, Los Angeles [email protected] {avikray, gargshu}@amazon.com [email protected] [email protected] ## Abstract Existing efforts on text synthesis for codeswitching mostly require training on codeswitched texts in the target language pairs, limiting the deployment of the models to cases lacking code-switched data. In this work, we study the problem of synthesizing codeswitched texts for language pairs absent from the training data. We introduce GLOSS, a model built on top of a pre-trained multilingual machine translation model (PMMTM) with an additional code-switching module. This module, either an adapter or extra prefixes, learns code-switching patterns from codeswitched data during training, while the primary component of GLOSS, i.e., the PMMTM, is frozen. The design of only adjusting the code-switching module prevents our model from overfitting to the constrained training data for code-switching. Hence, GLOSS exhibits the ability to generalize and synthesize codeswitched texts across a broader spectrum of language pairs. Additionally, we develop a self-training algorithm on target language pairs further to enhance the reliability of GLOSS. Automatic evaluations on four language pairs show that GLOSS achieves at least 55% relative BLEU and METEOR scores improvements compared to strong baselines. Human evaluations on two language pairs further validate the success of GLOSS. ## 1 Introduction Code-switching, the linguistic phenomenon of using more than one language within a single utterance or conversation,1is a common expression of multilingualism in informal text and speech (Auer and Wei, 2008; Gumperz, 1982). To accommodate the needs of multicultural and multilingual societies and individuals, there is a growing interest in investigating models dedicated to code-switching ∗Work was done when the author interned at Amazon. 1In this paper, we mainly focus on the sentence-level codeswitching involving only two languages. ![0_image_0.png](0_image_0.png) within the realm of conversational AI (FitzGerald et al., 2022; Khanuja et al., 2020; Winata et al., 2022; Sitaram et al., 2019). However, a notable obstacle in code-switching modeling is the scarcity of large-scale code-switched text datasets for different applications in diverse language pairs (Gupta et al., 2020; Tarunesh et al., 2021). This necessitates generative models capable of synthesizing code-switched texts, facilitating subsequent studies for code-switching. Most prior work on text synthesis for codeswitching assumes the availability of training data for all language pairs being tested. Early trials concentrate on individual language pair (Samanta et al., 2019; Chang et al., 2019; Tarunesh et al., 2021). For example, Bhat et al. (2016) develop a code-switched text synthesizer for Hindi-English based on linguistic rules (Poplack, 1980; Belazi et al., 1994; Myers-Scotton, 1997), while Lee et al. (2019); Winata et al. (2019); Garg et al. (2018) explore neural generative models for Chinese-English code-switched text synthesis. More recently, Gupta et al. (2020) presents pioneering efforts in developing a generic method for producing high-quality and fluent code-switched sentences across diverse language pairs. This is achieved through the collection of code-switched texts in multiple languages. 
However, the requirement of training on codeswitched texts for target language pairs hinders the scalability of existing models to cover a broader range of language pairs. Many real-world codeswitching scenarios, such as Swahili-English in Tanzania (Kanijo, 2018), Shona-English in Zimbabwe (Mashiri, 2002) suffer from limited or nonexistent curated datasets. Recognizing this resource limitation, in this work, our study focuses on synthesizing code-switched text in multiple language pairs, including those language pairs that are *unseen* during training (*zero-shot transfer* setting (Huang et al., 2021, 2022)). In this setting, models must learn code-switched patterns from limited code-switched training data in some language pairs and generalize to other language pairs, as shown in Fig. 1. The setting enables a more flexible process of code-switched text synthesis by using existing resources to assist resource-limited language pairs. Yet, it also introduces new challenges: (1) models must possess the ability to generate tokens in multiple languages; (2) models need to acquire a transferable ability for code-switching such that they can generate code-switched text in unseen language pairs. To overcome the challenges, we propose GLOSS, a GeneraLized cOde-Switched text Synthesizer that introduces an additional codeswitching module to a pre-trained multilingual machine translation model (PMMTM). The codeswitching module, implemented either through an adapter (Houlsby et al., 2019) or extra prefixes (Li and Liang, 2021), offers a parameter-efficient approach to transfer learning from machine translation to code-switched text synthesis. Inheriting the ability of PMMTM, GLOSS can generate text across multiple languages. The incorporation of an additional code-switching module, instead of directly fine-tuning the PMMTM, serves as an effective method to prevent models from overfitting to the specific training code-switched language pairs. Furthermore, we develop a self-training algorithm on the target language pairs to improve GLOSS further. Specifically, our preliminary study shows that although GLOSS can successfully generate reasonable code-switched sentences, when performing zero-shot transfer to unseen language pairs, it may still generate non-code-switched sentences (around 11% to 13% of cases). The proposed self-training framework aims to introduce weakly-supervised signals to help GLOSS more stably generate target domain cases when the target language pair is known.2 To achieve this, we iteratively fine-tune GLOSS on a *filtered* dataset that is generated by GLOSS itself in the target domain case. The filter incorporates a language identification model to remove low-quality instances.3 Being fine-tuned on filtered data, GLOSS learns to generate texts that satisfy the filtering rules and become more stable. Our contribution is three-fold. First, we present GLOSS, a code-switched text synthesizer that can generate code-switched sentences across multiple language pairs, even those not in the training data. To the best of our knowledge, we are the first to study this setting. Second, we introduce a selftraining framework to further improve GLOSS under the setting where the target language pair is known. Third, extensive experiments, including automatic evaluations on four languages and human evaluations on two languages, showcase GLOSS's strong performance. GLOSS achieves at least 55% relative BLEU and METEOR score improvements compared to strong baselines. 
## 2 Problem Formulation

Our goal is to synthesize code-switched (CS) texts for language pairs whose CS examples are never provided during training. Given a monolingual input sentence x^e in language l^e and an assigned language l^m (l^m ≠ l^e), we aim to generate a sentence x^{m,e} that mixes l^m and l^e, while preserving the semantic meaning of x^e.4 We consider the setting where the assigned language l^m at testing time is different from those at training time. More formally, as illustrated in Figure 1, the training set consists of N language pairs (l^e_n, l^m_n) (n ∈ {1, 2, ..., N}), while the testing set includes target language pairs where l^m_t ∉ {l^m_1, ..., l^m_N}, ∀t. This scenario reflects real-world situations where code-switched data is more readily available for certain language pairs, such as Spanish-English and Hindi-English, while it is less accessible for others, such as Bengali-English and Swahili-English.

2 For example, we know the target scenario is to synthesize Bengali-English code-switched text, despite no Bengali-English code-switched training data being available.
3 The language identification model is trained without using any code-switched data. More details are given in §3.3.
4 Following the matrix language frame theory (Myers-Scotton, 1997; Joshi, 1982), l^m is called the matrix language and l^e is the embedded language.

![2_image_0.png](2_image_0.png)

## 3 Method

We introduce GLOSS, a GeneraLized cOde-Switched text Synthesizer that tackles the two specific challenges raised by our problem setting: (1) the model needs to generate texts across many languages, some of which are not even in the CS training data; (2) the model needs to learn a transferable CS ability such that it generates reasonable CS sentences in unseen language pairs. Fig. 2 provides an overview.

To address the first challenge, we begin by obtaining a Pre-trained Multilingual Machine Translation Model (PMMTM) using multilingual machine translation data, which covers all languages that would be used for final CS text synthesis (§3.1).5 The remaining challenge is how to make the PMMTM a code-switched text synthesizer with only limited language coverage of the training data. We propose to augment an additional code-switching module onto the PMMTM, thereby creating GLOSS (§3.2). This additional code-switching module is trained on our limited CS data while keeping the PMMTM parameters fixed. Instead of fine-tuning the entire PMMTM, this modularized design improves systematic generalization (Bahdanau et al., 2019; Ruis and Lake, 2022), where the PMMTM focuses on generating translated sentences and the code-switching module concentrates on *"mixing"* languages. This approach allows GLOSS to be more adaptable and less prone to overfitting during the fine-tuning process on CS data. Finally, we present a self-training framework that enables GLOSS to more stably generate CS texts in target language pairs (§3.3).

## 3.1 PMMTM

Multilingual machine translation models (Ha et al., 2016; Johnson et al., 2017; Baziotis et al., 2022; Tang et al., 2020) enable simple deployment and parameter-efficient support of machine translation for a large number of language pairs by using a shared representation space. To train a PMMTM, we follow the strategy of mBART-50 (Tang et al., 2020) to notify the model of the source language and the target language to be translated into. Specifically, a language-specific special token is
Hence, during decoding, the first token fed to the decoder is the target language's special token that guides the translation. This is illustrated in Fig. 2. ## 3.2 The Gloss **Model** After obtaining a PMMTM, which can comprehend and generate phrases across multiple languages, our next step is to transform a PMMTM into a CS text synthesizer. A commonly used way is to directly fine-tune the PMMTM on CS training data (Tarunesh et al., 2021; Gupta et al., 2020). However, models directly fine-tuned on new data could easily overfit to the fine-tuning scenario. Thus it is hard to adapt the ability to perform codeswitching to unseen language pairs. Therefore, instead of directly fine-tuning the whole PMMTM, we propose to use an *additional* code-switching module paired with the PMMTM. The module is specifically learned to mix languages for a given translation pair generated by PMMTM. To implement the design and enable end-to-end training, we employ either an *adapter* (Houlsby et al., 2019) or extra *prefixes* (Li and Liang, 2021) as the code-switching module. These approaches are parameter-efficient methods to introduce control into pre-trained models and guide the final generation (He et al., 2022): Adapter is an additional layer (and parameters) that is introduced inside each Transformer block (Vaswani et al., 2017), and it was shown to be an effective way to conduct transfer learning for NLP tasks (Houlsby et al., 2019). This layer is appended after each feed-forward layer (in a Transformer block). It projects the original feature size to a smaller dimension and then projects them back to the original size, ensuring that the number of parameters stays substantially small. Prefix is another parameter-efficient way to conduct transfer learning for NLP tasks (Li and Liang, 2021). *"Prefix"* are the new key and value matrices used when calculating attention in Transformer. More specifically, trainable prefixes are a set of vectors that will be concatenated with the original key and value matrices when calculating dot-product attention. Hence, in each layer, inputs will be influenced by these additional keys and values after attention is applied. During fine-tuning using CS training data, we keep the parameters of PMMTM frozen and solely train the adapter or prefixes. This allows the codeswitching module to learn how to blend a *translated* distribution with the input sentence. When GLOSS is tested and tasked with generating a codeswitched sentence in an unseen target language pair, the frozen PMMTM, having been trained to produce translations for this specific pair, can still generate reliable translations. With reliable translations, our code-switching module continues to perform a similar function during training by blending languages. As a result, GLOSS exhibits improved generalization capabilities. ## 3.3 Gloss **With Self-Training** Although GLOSS has the ability to generalize to synthesize CS text to languages that the PMMTM supports, the generation could still be unstable. As we will show in §5, GLOSS still has around 11% to 13% of cases that will generate non-CS sentences when performing zero-shot transfer to unseen language pairs. Hence, we aim to improve this stability issue if more information about the test case is provided. We assume a common scenario in real practice - the target language pair l m and l eis known, and we can update GLOSS for fitting this specific target language pair. 
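Before turning to the self-training details, the modular design of §3.2 can be summarized in a short sketch: the PMMTM is frozen and only a small bottleneck adapter per Transformer layer is trained on the CS data. This is a simplified illustration under our own assumptions (the adapter dimensions are arbitrary, and wiring the adapters into each block's forward pass is handled by AdapterHub in the actual implementation, see §4.1); it is not the authors' code.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project, apply a non-linearity, up-project, and add residually,
    in the spirit of the adapter design of Houlsby et al. (2019)."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))

def prepare_code_switching_module(pmmtm: nn.Module,
                                  num_layers: int,
                                  hidden_size: int) -> nn.ModuleList:
    """Freeze all PMMTM parameters and create one trainable adapter per layer.
    Only the returned adapters are optimized on the code-switched data."""
    for param in pmmtm.parameters():
        param.requires_grad = False
    return nn.ModuleList(BottleneckAdapter(hidden_size) for _ in range(num_layers))
```

During training, each adapter would be applied to the hidden states right after the corresponding feed-forward sub-layer, while gradients flow only through the adapter parameters; the prefix-tuning variant replaces the adapters with trainable key and value prefixes but keeps the same frozen-PMMTM design.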
We design a self-training procedure to incorporate off-the-shelf language identification models to help GLOSS synthesize target CS sentences more stably. The procedure is illustrated in Fig. 3. To be more specific, we first use the input sentence written in l ein the CS training data as the input query and ask GLOSS to make a prediction on the target language l m, forming potential CS sentences x m,e. Then, we use language identification models to perform sentence filtering based on the following constraints: - The synthesized sentence should at least cover one token from l m. - The synthesized sentence should at least cover tokens from l e. - The synthesized sentence cannot cover tokens from other languages except l m and l e. We use CLD3 6as the language identification model, which extracts character n-grams from the input text and computes an embedding based on the fraction of times each n-gram character appears. Notably, CLD3's training does not rely on codeswitched text. We leverage CLD3's predicted language distribution for each token to determine if each generated sentence meets the aforementioned constraints. We filter out low-quality instances and collect the remaining sentences as a synthetic codeswitching corpus specific to the target domain. This corpus is subsequently used for further fine-tuning of GLOSS. The procedure can be executed repeatedly in R rounds, where R is a hyper-parameter. Notice that other advanced filtering can be easily included in our proposed procedure and we leave the exploration as a future work. Different from the classic self-training algorithm in semi-supervised research (Fei et al., 2023), in our procedure, the initial model is a zero-shot transfer model. Additionally, we apply a filtering process to further improve the quality of the synthetic codeswitching corpus. 6www.github.com/bsolomon1124/pycld3 ## 3.4 Discussion Utilizing pre-trained models that are initially trained on machine translation data as a foundation for constructing code-switched (CS) text synthesizers has gained significant attention recently due to the resemblance between machine translation and CS text synthesis tasks (Tarunesh et al., 2021; Gupta et al., 2020). However, our work differs from theirs in that we train a *single* model capable of consuming all the machine translation data, thereby supporting translation across multiple language pairs. In contrast, prior works rely on selecting data based on the target language pair (l m and l e) as a priori. Our approach enables a unified model that possesses the ability to generate phrases in multiple languages, thereby facilitating CS text synthesis across various language pairs. Conversely, constraining the training of the PMMTM to a limited number of languages, such as a few specific pairs, would result in GLOSS losing its ability to generalize to a broader range of CS language pairs. ## 4 Automatic Evaluation 4.1 Experimental Settings Dataset and Evaluation Metrics. We use the data provided by Gupta et al. (2020), which covers eight language pairs, including Bengali-English (Bn-En), German-English (De-En), Spanish (EsEn), French-English (Fr-En), Hindi-English (HiEn), Malayalam-English (Ml-En), Tamil-English (Ta-En), and Telugu-English (Te-En). Note that in this dataset, the input language sentence is always English. Hence, the target code-switched (CS) language pair is X-English, where X is the different languages that the dataset covers. 
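Returning briefly to the self-training loop of §3.3, its filtering step can be sketched as follows. The token-level language identifier is passed in as a function standing in for CLD3's per-token predictions, and the language codes are illustrative assumptions rather than the paper's exact configuration.

```python
from typing import Callable, List

def satisfies_constraints(sentence: str,
                          matrix_lang: str,      # l^m, e.g. "hi"
                          embedded_lang: str,    # l^e, e.g. "en"
                          token_language: Callable[[str], str]) -> bool:
    """Check the three filtering constraints of Section 3.3: the sentence must
    contain at least one matrix-language token, at least one embedded-language
    token, and no tokens from any other language."""
    langs = [token_language(token) for token in sentence.split()]
    has_matrix = matrix_lang in langs
    has_embedded = embedded_lang in langs
    only_target_pair = all(lang in (matrix_lang, embedded_lang) for lang in langs)
    return has_matrix and has_embedded and only_target_pair

def build_synthetic_corpus(generated: List[str], matrix_lang: str,
                           embedded_lang: str,
                           token_language: Callable[[str], str]) -> List[str]:
    """Keep only candidates that pass the filter; the surviving sentences are
    used to fine-tune GLOSS in the next self-training round."""
    return [s for s in generated
            if satisfies_constraints(s, matrix_lang, embedded_lang, token_language)]
```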
In the original paper, they used English-X to call the language pair in their dataset, but we changed the naming to present the dominant language first. The dataset statistics are listed in Appendix §A. In our setting, we conduct leave-one-out experiments, i.e., seven CS language pairs are selected as the CS training data, and the remaining is the test language pair. We select Bn-En, De-En, EsEn, and Hi-En as the four test scenarios based on the language resource levels defined in Tang et al. (2020), such that our selection covers high-resource (German, Spanish), medium-resource (Hindi), and low-resource (Bengali) languages. We evaluate the synthesized text using BLEU (Papineni et al., 2002) | Model | Type | Bn-En | De-En | Es-En | Hi-En | | | | | |----------------------------------------------------------|--------|-------------------------------------------------|-------------------------------------------|---------|---------|------|-------|------|-------| | B | M | B | M | B | M | B | M | | | | Gupta et al. (2020)* | Sup. | 21.49 27.32 24.15 30.47 22.47 29.45 21.55 28.37 | | | | | | | | | UB. Fine-tuned PMMTM on all language pairs (mBART50-MMT) | Sup. | 12.49 38.67 32.24 59.75 37.82 62.54 27.93 54.81 | | | | | | | | | Fine-tuned PMMTM on all language pairs (augment-MMT) | Sup. | 13.08 38.69 32.65 59.96 38.59 63.36 28.88 55.10 | | | | | | | | | Copy Input | Unsup. | 2.66 | 19.28 | 3.29 | 22.76 | 3.28 | 22.31 | 5.22 | 24.20 | | Machine Translation | Unsup. | 4.78 | 16.82 | 6.30 | 30.28 | 9.63 | 32.97 | 9.87 | 24.26 | | Translate, Align, then Swap | Unsup. | 1.91 | 16.06 | 5.53 | 27.30 | 7.80 | 30.11 | 6.61 | 24.90 | | Fine-tuned PMMTM on available language pairs | Zst. | 3.05 | 18.57 | 9.09 | 32.34 | 8.77 | 30.41 | 3.93 | 22.22 | | GLOSS (mBART50-MMT + adapter) | Zst. | 2.31 | 22.07 18.63 48.28 23.04 49.75 | 4.09 | 22.02 | | | | | | Proposed. GLOSS (augment-MMT + prefix) | Zst. | 9.65 | 32.63 21.88 50.33 24.85 51.88 12.16 36.94 | | | | | | | | GLOSS (mBART50-MMT + prefix) | Zst. | 5.21 | 26.83 20.49 48.49 23.47 50.52 | 7.51 | 29.82 | | | | | | GLOSS (augment-MMT + adapter) | Zst. | 2.16 | 18.60 14.58 40.75 16.62 42.31 | 8.61 | 30.39 | | | | | and METEOR (Banerjee and Lavie, 2005) scores following Gupta et al. (2020). Implementation Details. We use two different PMMTM for GLOSS. The first one directly adapts the pre-trained mBART50-many-to-many-MMT model (**mBART50-MMT**) from (Tang et al., 2020), which is a machine translation model trained on 50 language pairs using the ML50 benchmark. The other one is to further fine-tune mBART50-MMT on the machine translation data collected by Gupta et al. (2020) to make an "augmented mBART50- MMT" (**augment-MMT**). The second setting is considered since machine translation data in the ML50 benchmark are limited for Indic languages. Hence, we further fine-tune mBART50-MMT on the machine translation data provided in (Gupta et al., 2020) for three epochs. Notice that the machine translation data in (Gupta et al., 2020) only covers eight language pairs, making augmentMMT a more restricted machine translation model in terms of supported languages. All GLOSS (mBART50-MMT/augment-MMT paired with adapter/prefix) are implemented using the Huggingface package (Wolf et al., 2020) as the backbone. To implement the adapter and prefix, we leverage AdatperHub (Pfeiffer et al., 2020). 
We use their default setting to set prefix length as 30 and use all prefixes in the self-attention block in the Transformer encoder, and cross-attention block as well as the self-attention block in the Transformer decoder. We train GLOSS with a machine equipped with 4 NVIDIA Tesla V100 GPUs. We train GLOSS using 1 GPU at a time with around 30 ## Hrs Of Training. We consider AdamW optimizer (Loshchilov and Hutter, 2019) with learning rate set to 10−5and the weight decay set to 10−5. We set the batch size to 12 and the number of training epochs to 15. For GLOSS with self-training, we experiment with R ∈ {1, 2, 5} rounds with heuristics. Hyperparameter determination, except for R, is based on the available CS data in the development set without considering the leave-out language pair. Due to the computational resource restriction, our experiment results from a single seed. We note the gradual performance improvement as R increased in §4.3. However, determining the optimal stopping point for R presented a challenge since no development data exist under the zero-shot scenario. As a result, we decide not to increase R further in our experiments. Compared baselines. Three types of baselines ## Are Considered: - Unsupervised baselines - (1) **Copy Input**: directly copy the input sentence as the prediction, (2) **Machine Translation**: augment-MMT's machine translation results, (3) **Translate, Align,** then Swap: we use advanced unsupervised word-alignment tool (Dou and Neubig, 2021) to extract potential word alignment between the input sentence and the Machine Translation's prediction. Then, we generate the final output by having a probability p to swap words in Machine Translation's prediction with the aligned input word, where p = 0.35 is based on the statistics from the training data. | Model | Bn-En | De-En | Es-En | Hi-En | | | | |-----------------------------------------------------|---------|-------------|-------------|-------------|-------|-------|----| | B | M | B | M | B | M | B | M | | GLOSS (mBART50-MMT + prefix) | 5.21 | 26.83 20.49 | 48.49 23.47 | 50.52 | 7.51 | 29.82 | | | GLOSS (mBART50-MMT + prefix) + self-training(R = 1) | 5.84 | 28.31 20.55 | 48.83 24.00 | 51.12 | 8.22 | 31.55 | | | GLOSS (mBART50-MMT + prefix) + self-training(R = 2) | 6.66 | 29.00 20.97 | 49.12 24.12 | 51.47 | 9.27 | 33.16 | | | GLOSS (mBART50-MMT + prefix) + self-training(R = 5) | 6.26 | 29.75 21.49 | 49.71 24.58 | 51.53 10.31 | 35.84 | | | | GLOSS (augment-MMT + prefix) | 9.65 | 32.63 21.88 | 50.33 24.85 | 51.88 12.16 | 36.94 | | | | GLOSS (augment-MMT + prefix) + self-training(R = 1) | 9.80 | 33.63 21.78 | 49.73 25.96 | 52.68 12.99 | 38.59 | | | | GLOSS (augment-MMT + prefix) + self-training(R = 2) | 10.19 | 34.70 22.36 | 50.59 26.22 | 52.88 13.70 | 40.09 | | | | GLOSS (augment-MMT + prefix) + self-training(R = 5) | 10.32 | 35.46 22.45 | 50.63 26.31 | 53.13 13.63 | 40.05 | | | - Supervised baselines - (1) Gupta et al. **(2020)**: a sequence-to-sequence model that leverages XLM (Conneau and Lample, 2019) features and utilizes the transfer learning signal from machine translation to warm-up the model, (2) Fine-tuned PMMTM on all language pairs: we fine-tune mBART50-MMT on CS data in all eight language pairs. - Zero-shot transfer baselines - (1) **Fine-tuned** PMMTM on available language pairs: finetune whole mBART50-MMT on *available* CS training data only (excluding test language pair). 
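As a concrete reading of the Translate, Align, then Swap baseline described above, the sketch below swaps aligned translation words back to their source-side counterparts with probability p = 0.35. The alignment format (word-index pairs, as produced by tools such as the aligner of Dou and Neubig, 2021) and the toy example are our own illustrative assumptions.

```python
import random
from typing import List, Tuple

def translate_align_swap(source_tokens: List[str],
                         translation_tokens: List[str],
                         alignment: List[Tuple[int, int]],
                         p: float = 0.35,
                         seed: int = 0) -> str:
    """Build a pseudo code-switched sentence by replacing aligned words in the
    machine-translated sentence with the corresponding source-language words."""
    rng = random.Random(seed)
    output = list(translation_tokens)
    for src_idx, tgt_idx in alignment:
        if rng.random() < p:
            output[tgt_idx] = source_tokens[src_idx]
    return " ".join(output)

# Toy example with made-up alignment indices (English source, Hindi translation):
print(translate_align_swap(
    source_tokens=["I", "like", "green", "tea"],
    translation_tokens=["mujhe", "hari", "chai", "pasand", "hai"],
    alignment=[(0, 0), (2, 1), (3, 2), (1, 3)],
))
```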
Note that the training of supervised baselines contains CS data in target language pairs; hence, it can be viewed as an upper bound for GLOSS. Zeroshot transfer baselines are trained only using CS data from other language pairs but not the target language pair. Unsupervised baseline training does not use any CS training data. ## 4.2 Main Results Tab. 1 shows the results. From the table, we can observe that the unsupervised baselines generate very unreliable CS sentences in general. Additionally, naively fine-tuning the whole PMMTM could perform even worse than the unsupervised methods. GLOSS improves unsupervised baselines and zero-shot transfer baselines by at least 55% relative scores across the board, and every variation of GLOSS could outperform these baselines. By comparing different variations of GLOSS, we can observe that GLOSS with prefixes is more robust than using an adapter, especially in the cases where the PMMTM model has worse performance (Bengali & Hindi due to limited training machine translation data used in mBART50-MMT). Furthermore, by comparing GLOSS equipped with augment-MMT and GLOSS equipped with mBART50-MMT, we highlight the PMMTM's impact on our model. ## 4.3 Results Given Known Target Language When the target language pair is known, we can then apply our self-training procedure to GLOSS. We experiment on GLOSS using prefixes and present results in Tab. 2. From the table, we can observe the consistent improvement when adopting self-training to GLOSS, and the improvement is especially significant for Hindi-English. Additionally, by conducting self-training with more rounds, we can observe the gradual improvements in both of the cases for GLOSS with mBART50-MMT and augment-MMT. ## 5 Human Evaluation To further verify the quality of our method, we conduct the human evaluation for Hindi-English and Chinese-English code-switched (CS) text using sentences in English as the source language. ## 5.1 Evaluator Selection Considering the expertise of the annotation task requires people familiar with both English and Chinese (or English and Hindi), we have a highstandard selection process to recruit 3 professionals for the human evaluation. For Hindi-English annotation, We engaged the services of a team of expert professionals who were contracted to provide labels for various Hindi and English-related tasks. They're all native Hindi speakers and highly skilled in speaking Hindi-English code-switching. Conversely, our Chinese-English annotators are native Chinese NLP researchers with over three | Model | Type | Hi-En | Zh-En | | | | | | | |----------------------------------------------|--------|---------|-----------|----------|------|-------|-----------|------|------| | CS Rate. | F | S | Geo. Mean | CS Rate. | F | S | Geo. Mean | | | | Translate, Align, then Swap | Unsup. | 98.6% | 2.69 | 2.83 | 2.75 | 91.3% | 3.19 | 3.62 | 3.33 | | Fine-tuned PMMTM on available language pairs | Zst. | 4.0% | 1.00 | 1.00 | 1.00 | 2.0% | 1.0 | 1.0 | 1.0 | | Ours. GLOSS (prefix + self-training) | Zst. | 93.3% | 3.84 | 3.96 | 3.90 | 99.3% | 3.73 | 4.01 | 3.85 | | GLOSS (prefix) | Zst. | 87.3% | 3.06 | 3.10 | 3.08 | 89.3% | 3.67 | 3.84 | 3.73 | | Fine-tuned PMMTM on all language pairs | Sup. | 98.0% | 4.09 | 4.21 | 4.15 | −− | −− | −− | −− | | UB. Ground truth | −− | 96.0% | 4.40 | 4.39 | 4.84 | 94.0% | 4.18 | 4.42 | 4.28 | years of experience, residing in the US for at least four years, and proficient in Chinese, English, and Chinese-English code-switching. 
We offer a competitive hourly payment that meets regional legal standards, though it's difficult to determine the average payment for them on this single task. ## 5.2 Experimental Settings Dataset. To avoid the evaluation being biased in the domain we trained on, we collect testing English instances by selecting sentences from the following CS dataset. We sample 50 sentences for each language pair. - Hindi-English: We use the data released from Tarunesh et al. (2021), who collected the dataset via crowd-sourcing in India. Every data point in this dataset is a pair of an English sentence and its corresponding Hindi-English CS sentence. - Chinese-English: We use the transcript data of the SEAME dataset (Lyu et al., 2010), which is a Chinese-English CS speech recognition dataset. To get the English counterpart of the ChineseEnglish sentences, we ask experts to translate the CS sentence back to their English version. Compared models. We compare six methods: (1) **Translate, Align, then Swap**, which serves as a representative of unsupervised methods, (2) **Finetuned PMMTM on available language pairs**, which serves as a baseline for zero-shot transfer, (3) GLOSS **+ prefix**, we use augment-MMT as the backbone for Hindi-English, while using mBART50-MMT as the base model for ChineseEnglish, (4) GLOSS **+ prefix + self-training**, we apply self-training (R = 5) to GLOSS + prefix, (5) **Fine-tuned PMMTM on all language pairs**, which serves a strong supervised baseline. Notice that since the training dataset in Gupta et al. (2020) does not contain the Chinese-English pair. Hence, when evaluating on Chinese-English, this baseline is not applicable, (6) **Ground truth**, the original CS sentences we sampled from the dataset. Evaluation Procedure We ask each expert annotator to evaluate all the output of 50 testing instances from all models (i.e., 300 sentences for Hindi-English and 250 for Chinese-English). Our questionnaire covers the following three questions when using Hindi-English as an example. - Code-switching Correctness: We measure whether the present sentence is correct CS (binary score). Specifically, we define a sentence as a correct CS sentence if it satisfies the constraints: (a) It's not fully Hindi or English, (b) It should be mainly in Hindi, and (c) There's no other language except English and Hindi. - Fluency: Measuring the fluency of the prediction presented to humans with scores from 1 to 5, with 5 as the best. - Semantic Correctness: Measuring whether the predicted sentence correctly reveals the meaning of the corresponding input sentence with scores from 1 to 5, with 5 as a fully correct translation. ## 5.3 Results Tab. 3 presents the results. First, we can observe that the code-switching correctness rate is extremely low for the zero-shot baseline - Finetuned PMMTM on available language pairs. Second, although the unsupervised baseline - Translate, Align, then Swap gets a high code-switching success rate, the low fluency reveals that deciding a suitable position to switch languages is a task beyond random. Third, we can observe that self-training can successfully improve the codeswitching quality across all metrics in both lan- ![8_image_0.png](8_image_0.png) ## 5.4 Output Examples Lastly, we present real examples generated by our models in Fig. 4. For these examples, we can see that directly fine-tuning the whole PMMTM on CS training data will generate unnatural or even predictions containing tokens in other languages. 
In contrast, GLOSS can generate more stable results, and our self-training algorithm can even help GLOSS to generate high-quality CS sentences. ## 6 Related Work Early approaches (Pratapa et al., 2018; Bhat et al., 2016; Pratapa and Choudhury, 2021; Li and Fung, 2014) on code-switched (CS) text synthesis were built based on various linguistic theories, such functional head constraints (Belazi et al., 1994), MatrixLanguage theory (Myers-Scotton, 1997; Joshi, 1982), and Equivalence-Constraint theory (Poplack, 1980; Sankoff, 1998). To turn linguistic theories into computational models, Bhat et al. (2016); Pratapa and Choudhury (2021) leverage trained constituency parser to extract parses of translation pairs and create CS sentences by mixing translation pairs following the syntactic constraints derived from the theories. However, constraints cannot be postulated as a universal rule for all CS scenarios, especially for languages that are syntactically divergent (Berk-Seligson, 1986), such as English and Chinese, since they have word alignments with an inverted order (Winata et al., 2019). Owing to the limitation, more and more recent works start to build CS text synthesizers in a data-driven way. Garg et al. (2018) train a sequence generative adversarial model on real CS text to generate ChineseEnglish CS sentences. Chang et al. (2019) build a CS text synthesizer using the generative adversarial network, while several follow-up works (Samanta et al., 2019; Winata et al., 2019; Gonen and Goldberg, 2019) using different generative model techniques are also presented. More studies have been introduced to improve the synthesis quality such that we cannot exhaust them in this short summary. We refer readers to the recent survey (Winata et al., 2022; Sitaram et al., 2019). Although many of these efforts had some success, the above-mentioned methods can only generate CS text in the same language pair sets used in training. Given the difficulties of acquiring CS data, this requirement hinders the scalability of these models to support more language pairs. Hence, in this paper, we take a step forward to explore the possibility of zero-shot transfer generalization in CS text synthesis and present GLOSS that can generate reasonable outputs. ## 7 Conclusion In this paper, we develop a novel generalized codeswitched text synthesizer, which can even generate code-switched sentences where the corresponding code-switched training data is unavailable. We introduce GLOSS that is built on top of a pre-trained multilingual machine translation model and augmented with an adapter or prefixes. The modularized design of learning specific parameters for mixing languages from a translated distribution helps the overall system generalization, hence, fulfilling our goal. Extensive experiments verify our methods' effectiveness qualitatively and quantitatively. In the future, we plan to investigate how our synthesizer performs on downstream tasks such as conversational understanding under a code-switched scenario. ## Limitation Our paper presents a pilot exploration of investigating a new setting in code-switched text synthesis - we allow the target language pair selection not limited to those for which we already have training data. Although we have shown the strength of GLOSS qualitatively and quantitatively, our experimental setting is still confined due to the dataset restriction - all the input text is in English. 
It would be an even harder challenge if the source languages are more diverse and we leave such exploration for future work. Additionally, due to the computational restriction, in GLOSS, we only explore mBART50-MMT and an augment-MMT as our PMMTM. From the experimental results, we do observe the benefit of having a more stable PMMTM in GLOSS. We anticipate the models' performance can be further improved by leveraging more stronger PMMTM, and the exploration is left for the future. ## Broader Impacts Our proposed models are based on a model that is pre-trained on a large scale of multilingual machine translation data. It is known that the machine translation model could capture the bias reflecting the training data (Wang et al., 2022). Therefore, our models can potentially generate code-switched text containing offensive or biased content. We suggest that for deploying our model in any real-world applications, careful examination of the potential bias is an essential step. ## Acknowledgements The authors would like to thank Chris Hench, Chenyang Tao, Mingyu Derek Ma, Che-Ping Tsai, and Tanmay Parekh for their feedback and help regarding human evaluation. We also thank anonymous reviewers for their helpful feedback on the paper. ## References Peter Auer and Li Wei. 2008. *Handbook of multilingualism and multilingual communication*. Walter de Gruyter. Dzmitry Bahdanau, Shikhar Murty, Michael Noukhovitch, Thien Huu Nguyen, Harm de Vries, and Aaron C. Courville. 2019. Systematic generalization: What is required and can it be learned? In *7th International Conference on Learning* Representations, ICLR. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In *Proceedings of* the Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization@ACL. Christos Baziotis, Mikel Artetxe, James Cross, and Shruti Bhosale. 2022. Multilingual machine translation with hyper-adapters. arXiv preprint arXiv:2205.10835. Hedi M Belazi, Edward J Rubin, and Almeida Jacqueline Toribio. 1994. Code switching and x-bar theory: The functional head constraint. *Linguistic inquiry*. Susan Berk-Seligson. 1986. Linguistic constraints on intrasentential code-switching: A study of spanish/hebrew bilingualism. *Language in society*. Gayatri Bhat, Monojit Choudhury, and Kalika Bali. 2016. Grammatical constraints on intra-sentential code-switching: From theories to working models. arXiv preprint arXiv:1612.04538. Ching-Ting Chang, Shun-Po Chuang, and Hung-yi Lee. 2019. Code-switching sentence generation by generative adversarial networks and its application to data augmentation. In Interspeech 2019, 20th Annual Conference of the International Speech Communication Association. Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In *Advances* in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS. Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL. Ben Fei, Weidong Yang, Liwen Liu, Tianyue Luo, Rui Zhang, Yixuan Li, and Ying He. 2023. Selfsupervised learning for pre-training 3d point clouds: A survey. *arXiv preprint arXiv:2305.04691*. 
Jack FitzGerald, Christopher Hench, Charith Peris, Scott Mackie, Kay Rottmann, Ana Sanchez, Aaron Nash, Liam Urbach, Vishesh Kakarala, Richa Singh, Swetha Ranganath, Laurie Crist, Misha Britan, Wouter Leeuwis, Gökhan Tür, and Prem Natarajan. 2022. MASSIVE: A 1m-example multilingual natural language understanding dataset with 51 typologically-diverse languages. arXiv preprint arXiv:2204.08582. Saurabh Garg, Tanmay Parekh, and Preethi Jyothi. 2018. Code-switched language models using dual rnns and same-source pretraining. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018. Hila Gonen and Yoav Goldberg. 2019. Language modeling for code-switching: Evaluation, integration of monolingual data, and discriminative training. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019. John J Gumperz. 1982. *Discourse strategies*. Cambridge University Press. Deepak Gupta, Asif Ekbal, and Pushpak Bhattacharyya. 2020. A semi-supervised approach to generate the code-mixed text using pre-trained encoder and transfer learning. In *Findings of the Association for Computational Linguistics: EMNLP*. Thanh-Le Ha, Jan Niehues, and Alex Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder. In *Proceedings of the* 13th International Conference on Spoken Language Translation, IWSLT. Junxian He, Chunting Zhou, Xuezhe Ma, Taylor BergKirkpatrick, and Graham Neubig. 2022. Towards a unified view of parameter-efficient transfer learning. In The Tenth International Conference on Learning Representations, ICLR. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, ICML. Kuan-Hao Huang, Wasi Uddin Ahmad, Nanyun Peng, and Kai-Wei Chang. 2021. Improving zero-shot cross-lingual transfer learning via robust training. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP). Kuan-Hao Huang, I-Hung Hsu, Premkumar Natarajan, Kai-Wei Chang, and Nanyun Peng. 2022. Multilingual generative language models for zero-shot crosslingual event argument extraction. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (ACL). Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Trans. Assoc. Comput. Linguistics. Aravind K. Joshi. 1982. Processing of sentences with intra-sentential code-switching. In Proceedings of the 9th International Conference on Computational Linguistics, COLING. Ponsiano Kanijo. 2018. Code-switching and codemixing errors among swahili-english bilinguals in tanzania. *Kiswahili*. Simran Khanuja, Sandipan Dandapat, Anirudh Srinivasan, Sunayana Sitaram, and Monojit Choudhury. 2020. Gluecos: An evaluation benchmark for codeswitched NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL. Grandee Lee, Xianghu Yue, and Haizhou Li. 2019. 
Linguistically motivated parallel data augmentation for code-switch language modeling. In Interspeech 2019, 20th Annual Conference of the International Speech Communication Association. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP. Ying Li and Pascale Fung. 2014. Language modeling with functional head constraint for code switching speech recognition. In *Proceedings of the 2014* Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR. Dau-Cheng Lyu, Tien-Ping Tan, Eng Siong Chng, and Haizhou Li. 2010. Seame: a mandarin-english code-switching speech corpus in south-east asia. In Eleventh Annual Conference of the International Speech Communication Association. Pedzisai Mashiri. 2002. Shona-english code-mixing in the speech of students at the university of zimbabwe. Southern African Linguistics and Applied Language Studies. Carol Myers-Scotton. 1997. *Duelling languages: Grammatical structure in codeswitching*. Oxford University Press. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics ACL. Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulic, Sebastian Ruder, Kyunghyun ´ Cho, and Iryna Gurevych. 2020. Adapterhub: A framework for adapting transformers. In *Proceedings of the 2020 Conference on Empirical Methods in* Natural Language Processing (EMNLP 2020): Systems Demonstrations. Shana Poplack. 1980. Sometimes i'll start a sentence in spanish y termino en espanol: toward a typology of code-switching. Adithya Pratapa, Gayatri Bhat, Monojit Choudhury, Sunayana Sitaram, Sandipan Dandapat, and Kalika Bali. 2018. Language modeling for code-mixing: The role of linguistic theory based synthetic data. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL. Adithya Pratapa and Monojit Choudhury. 2021. Comparing grammatical theories of code-mixing. In *Proceedings of the Seventh Workshop on Noisy Usergenerated Text, W-NUT*. Laura Ruis and Brenden M. Lake. 2022. Improving systematic generalization through modularity and augmentation. *arXiv preprint arXiv:2202.10745*. Bidisha Samanta, Sharmila Reddy, Hussain Jagirdar, Niloy Ganguly, and Soumen Chakrabarti. 2019. A deep generative model for code switched text. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI. David Sankoff. 1998. A formal production-based explanation of the facts of code-switching. Bilingualism: language and cognition. Sunayana Sitaram, Khyathi Raghavi Chandu, Sai Krishna Rallabandi, and Alan W. Black. 2019. A survey of code-switched speech and language processing. arXiv preprint arXiv:1904.00784. Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning. arXiv preprint arXiv:2008.00401. Ishan Tarunesh, Syamantak Kumar, and Preethi Jyothi. 2021. 
From machine translation to code-switching: Generating high-quality code-switched text. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems 30: Annual Conference on Neural* Information Processing Systems NeurIPS. Jun Wang, Benjamin I. P. Rubinstein, and Trevor Cohn. 2022. Measuring and mitigating name biases in neural machine translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022. Genta Indra Winata, Alham Fikri Aji, Zheng Xin Yong, and Thamar Solorio. 2022. The decades progress on code-switching research in NLP: A systematic survey on trends and challenges. *arXiv preprint* arXiv:2212.09660. Genta Indra Winata, Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2019. Code-switched language models using neural based synthetic data from parallel sentences. In Proceedings of the 23rd Conference on Computational Natural Language Learning, CoNLL. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Huggingface's transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing: System Demonstrations. ## A Dataset Details Tab. 4 presents the dataset statistics for our automatic evaluation. The dataset is created by Gupta et al. (2020) and under a Creative Commons Attribution-NoDerivatives 4.0 International License.7 | Language Pairs | Train | Dev. | Test | |------------------|---------|--------|--------| | Es-En | 196,725 | 2,000 | 2,000 | | De-En | 188,131 | 2,000 | 2,000 | | Fr-En | 193,922 | 2,000 | 2,000 | | Hi-En | 248,330 | 2,000 | 2,000 | | Bn-En | 163,893 | 2,000 | 2,000 | | Ml-En | 178,453 | 2,000 | 2,000 | | Ta-En | 11,380 | 2,000 | 2,000 | | Te-En | 9,105 | 2,000 | 2,000 | Table 4: Dataset statistics of the dataset provided by Gupta et al. (2020). ## B Inter Annotator Agreement We measure the mutual agreement rate among our human annotators by calculating the average absolute differences between the scores they give for the same instance. For example, if the semantic correctness score is given with score (2, 2, 3). Then, the average absolute difference is 0.66. We then take a micro average across all our human-annotated instances. We get a score of 0.50 and 0.52 for the fluency and semantic correctness score for ChineseEnglish, respectively. As for Hindi-English, we get a score of 0.59 and 0.55 for the fluency and semantic correctness score. This indicates our experts agree with each other with only a little disagreement. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitation Section ✓ A2. Did you discuss any potential risks of your work? Broader Impacts Section ✓ A3. Do the abstract and introduction summarize the paper's main claims? Introduction Section and Abstract Section ✗ A4. 
Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 ✓ B1. Did you cite the creators of artifacts you used? Section 4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix A ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix A ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We did simple checking of the potential harmful information in the dataset, but since the date size is large, we couldn't perform manual check on every instance. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Appendix A ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix A ## C ✓ **Did You Run Computational Experiments?** Section 4 & 5 ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4.1 & Appendix B ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Appendix B ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 5 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix C ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Appendix C ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Appendix C ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Broader Impacts Section ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Appendix C
erker-etal-2023-imagination
Imagination is All You Need! Curved Contrastive Learning for Abstract Sequence Modeling Utilized on Long Short-Term Dialogue Planning
https://aclanthology.org/2023.findings-acl.319
Inspired by the curvature of space-time, we introduce Curved Contrastive Learning (CCL), a novel representation learning technique for learning the relative turn distance between utterance pairs in multi-turn dialogues. The resulting bi-encoder models can guide transformers as a response ranking model towards a goal in a zero-shot fashion by projecting the goal utterance and the corresponding reply candidates into a latent space. Here the cosine similarity indicates the distance/reachability of a candidate utterance toward the corresponding goal. Furthermore, we explore how these forward-entailing language representations can be utilized for assessing the likelihood of sequences by the entailment strength i.e. through the cosine similarity of its individual members (encoded separately) as an emergent property in the curved space. These non-local properties allow us to imagine the likelihood of future patterns in dialogues, specifically by ordering/identifying future goal utterances that are multiple turns away, given a dialogue context. As part of our analysis, we investigate characteristics that make conversations (un)plannable and find strong evidence of planning capability over multiple turns (in 61.56{\%} over 3 turns) in conversations from the DailyDialog dataset. Finally, we show how we achieve higher efficiency in sequence modeling tasks compared to previous work thanks to our relativistic approach, where only the last utterance needs to be encoded and computed during inference.
# Imagination Is All You Need! Curved Contrastive Learning For Abstract Sequence Modeling Utilized On Long Short-Term Dialogue Planning Justus-Jonas Erker DFKI ∗ Lab Berlin & Maastricht University j.erker@student. † ## Abstract Stefan Schaffer DFKI ∗ Lab Berlin [email protected] Gerasimos Spanakis Maastricht University jerry.spanakis@† Inspired by the curvature of space-time (Einstein, 1921), we introduce Curved Contrastive Learning (CCL), a novel representation learning technique for learning the **relative** turn distance between utterance pairs in multi-turn dialogues. The resulting bi-encoder models can guide transformers as a response ranking model towards a goal in a zero-shot fashion by projecting the goal utterance and the corresponding reply candidates into a latent space. Here the cosine similarity indicates the distance/reachability of a candidate utterance toward the corresponding goal. Furthermore, we explore how these forward-entailing language representations can be utilized for assessing the likelihood of sequences by the entailment strength i.e. through the cosine similarity of its individual members (encoded separately) as an emergent property in the curved space. These non-local properties allow us to imagine the likelihood of future patterns in dialogues, specifically by ordering/identifying future goal utterances that are multiple turns away, given a dialogue context. As part of our analysis, we investigate characteristics that make conversations (un)plannable and find strong evidence of planning capability over multiple turns (in 61.56% over 3 turns) in conversations from the DailyDialog (Li et al., 2017) dataset. Finally, we show how we achieve higher efficiency in sequence modeling tasks compared to previous work thanks to our relativistic approach, where only the last utterance needs to be encoded and computed during inference. ## 1 Introduction Large Scale Transformers are becoming more and more popular in dialogue systems (Zhang et al. (2019), Peng et al. (2022)). Though these models are very effective in generating human-like responses in a given context, based on their learning ∗German Research Center for Artificial Intelligence †maastrichtuniversity.nl objective to minimize perplexity, they tend to have trouble generating engaging dialogues (Gao et al., 2020). Meister et al. (2022) have shown that human conversations usually do not sample from the most likelihood of words like transformers do. We argue that one reason for this is that natural conversations can be (always) considered goal-oriented (even chitchat) and motivate this claim based on literature from psychology. These have shown that "Conversation is a goal-directed process" (Myllyniemi, 1986) as humans shift conversation topics based on the social connection/audience and use it to shape social relations (Dunbar et al., 1997). The psychological literature also elaborates on how humans are able to plan and simulate dialogues by utilizing inner speech as part of verbal working memory (Grandchamp et al., 2019). "Key to most of such models is that inner speech is posited as part of a speech production system involving predictive simulations or "forward models" of linguistic representations" (Alderson-Day and Fernyhough, 2015) Keeping this in mind, we investigated dialogues under the aspect of "forward" entailing language representations by projecting them into a simple semantic sentence transformer (Reimers and Gurevych, 2019) latent space. 
We place a fixed position in the DailyDialog (Li et al., 2017) dataset as a goal utterance and measure the cosine similarity of the goal to every other utterance within the dialogue. Our own preliminary work revealed, as shown in figure 1, that the similarity of previous utterances to the goal utterance increases as they get closer to the goal utterance. However, fluctuations between the speaker at the goal turn (saying the utterance later on) and their dialogue partner can be observed. As we see on the blue & red highlighted turns, the goal turn speaker has a greater similarity to the goal utterance than ![1_image_0.png](1_image_0.png) the dialogue partner. We filtered all samples causing these fluctuations and find that these transitive entailing properties are essential for guiding the conversation toward the given goal. Regardless of whether the person had the intent to reach the target goal.We demonstrate in this paper how we can build upon this phenomenon to learn the relative distance between utterance pairs. In particular, by mixing the training objective of Natural Language Inference (NLI) for the semantic embedding space with a distance proportional and directional aware (through two special tokens [BEFORE] & [AFTER]) cosine similarity-based loss of utterance pairs. The resulting Curved Contrastive Learning (CCL) is presented on three tasks: (1) short-term planning, (2) next utterance selection, and (3) longterm planning. (1) Short-term planning: CCL allows us to imagine the likelihood of a candidate utterance leading to a given goal utterance by projecting them together into one latent space (imaginary space). The cosine similarity indicates the distance/reachability of a candidate utterance towards the corresponding goal as illustrated in a transformer guidance example in figure 2. Thanks to the transitive property we can select the utterances at each turn greedily. (2) Next utterance selection: The embeddings can be utilized for sequence modeling by only using the cosine similarity between the separately encoded sequence members. It is evaluated by the ranking performance of the human vs random utterances task given a dialogue context. (3) Long-term planning: Since these embeddings do not require entire sequences for sequence modeling, we can assess the likelihood of following patterns (of multiple goal utterances that are multiple turns apart) by using the entailment strength between these and the context in the curved space. We evaluate this approach based on the ordering/identifying of future goal utterances. Furthermore, we investigate two research questions: - Do chit-chat conversations have planning capability? **(RQ1)** - What characteristics make dialogue planning possible? **(RQ2)** The paper is structured as follows: In §2 we discuss the related work. Following in §3 where we present the methodology, baselines as well as basic components for the advanced architectures. In §4 the short-term planning approaches, followed by the next utterance selection in §5 and the longterm planning approaches for ordering goals in §6. We wrap up the paper with the experiments & discussion in §7 followed by the conclusion in §8. ## 2 Related Work Our work builds upon two major concepts, dialogue planning, and entailment. Related publications from the stated fields are discussed below. 
## Dialogue Planning While previously introduced planning techniques used several abstraction approaches (Teixeira and Dragoni, 2022), none of them exploited the characteristics of curved conversation embedding latent spaces. We argue that generating a complete dialogue path is unnecessary as we can simply choose the utterance in the transformer's search space that gets us closest to the goal at every turn. Ramakrishnan et al. (2022) proposed a similar idea on word level by applying constrained decoding to the dialogue response generation to increase the likelihood of a target word not only in the current utterance but also utterances in the future. Furthermore, DialogRPT (Gao et al., 2020) has been introduced as a dialogue response ranking model for depth, width, and upvotes prediction for utterance candidates. We utilize DialogRPT as a baseline for our next utterance selection experiments based on the dialogue history. ## Entailment Entailment-based approaches have a long history in NLP and have been utilized for a lot of tasks as zero-shot classification tasks like relation extraction (Obamuyide and Vlachos, 2018) or zero-shot text classification (Yin et al., 2019). The idea of ![2_image_1.png](2_image_1.png) entailment graphs and making use of transitivity has been previously explored by Kotlerman et al. (2015) & (Chen et al., 2022). Textual entailment has also been applied to Dialogue Systems as an evaluation technique (Dziri et al., 2019) or for improving response quality through backward reasoning (Li et al., 2021). Contrastive learning with positional information has been previously applied to image segmentation (Zeng et al., 2021). While You et al. (2020) utilized contrastive learning with augmentations for graph neural networks (GNNs). Natural Language Inference (NLI) based transformers have been increasingly used for semantic textual similarity (STS) since the introduction of Sentence Transformers, thanks to bi-encoders (Reimers and Gurevych, 2019) that can compare sentence pairs with cosine similarity and therefore reduce computation time by a 234000 * fold. This trend has especially been supported by GPU Search (Johnson et al., 2017). These sentence transformers have successfully been applied to learn utterance representations for retrieving utterance replies in dialogue systems (Liu et al., 2021) or ConvRT (Henderson et al., 2020) that we use as a baseline. However, without utilizing the curved property of conversations which we argue, as motivated in §1, is essential for forward representations. ## 3 Methods In this section, we formally define the research questions (problem definition), our baselines for the evaluation, and the core of Imaginary Embeddings based on which advanced architectures are built in the following sections. ## 3.1 Problem Definition Planning ![2_Image_0.Png](2_Image_0.Png) As part of this paper, we investigate two planning problems, short- and long-term planning. Shortterm planning aims at guiding the conversation from the current position towards a given goal utterance g (which we define as a semantic utterance) over multiple turns. Long-term planning, on the other hand, targets the ordering/scheduling of a set of goals G (utterances that are multiple turns apart) within a conversation. ## 3.2 Long-Short Term Planning Evaluation As part of this paper, we introduce a new evaluation technique, Long-Short Term Planning Evaluation (**LSTPE**). LSTPE is split into Short- as well as Long-Term planning. 
## 3.2.1 Short-Term Planning Evaluation

As part of the short-term planning evaluation, we evaluate the guidance capability of imaginary embeddings towards a given goal utterance. For this purpose, we split all dialogues within a given corpus d ∈ C into subsets of d[: hl], which represents the history of utterances (or context) with a fixed length hl, d[hl] as the "correct" following utterance, and d[hl + gd] as the goal utterance with a goal distance gd. We then let a dialogue transformer generate 100 candidate utterances given the context d[: hl] for every dialogue d ∈ C, which we project together with the goal utterance into the imaginary embedding space. Next, we compare the ranking score of the original utterance to the artificially generated utterances. As metrics, we report the Hits@K ratio (X%) and the average rank.

## 3.2.2 Long-Term Planning Evaluation

Similar to the short-term planning evaluation, we take a corpus of dialogue data d ∈ C and split it at fixed positions x into the dialogue history and three goal utterances |G| = 3. Given a dialogue history of length
As equation (1) shows, the target cosine similarity for a positive sample pair decays with their positional distance in the dialogue (see the illustration in figure 3). This lets us learn semantic properties between [B] & [B] and [A] & [A] representations, as well as the curvature as a **relative time dimension** between utterance pairs in the space between [B] & [A] representations. Three hard negatives are introduced: the first ensures the directional property by swapping the [BEFORE] and [AFTER] tokens; the following two are selected from a special dataset of random utterances.

![3_image_0.png](3_image_0.png)

Figure 3 unveils the widespread utility of imaginary embeddings. As shown, we can simply pick the best candidate utterance for reaching a given goal by **imagining** the closeness of the candidate utterance to the goal in the curved space, without requiring the **real** representations between the utterance pairs. Similar to an object in our universe that always moves on a straight line but is curved by space-time (Einstein, 1921), we can follow a line to our goal utterance by greedily selecting the best utterance on a turn-to-turn basis. We illustrate this transitive property by the light red in-between nodes in figure 3. Thanks to the relative time dimension between utterance pairs and their resulting **non-locality**, we are able to encode all sequence members (utterances) independently into one latent space and accumulate the likelihood of a sequence by comparing only with cosine similarity, in particular by imagining the closeness between every context utterance (encoded with [B]) and the future utterance (encoded with [A]), i.e., imagination is all you need! Not only can we assess the likelihood of sequences, which we explore in the next utterance selection (§5), but we can also utilize these self-organizing properties for mapping sequential representations that are multiple turns apart onto the conversational surface. We explore this as the ordering of goals in long-term planning (§6).

## 3.4.1 Adding Speaker Tokens

Furthermore, we can modify imaginary embeddings with additional speaker tokens. Given a multi-turn dialogue with two participants, the tokens [O] and [E] are added to the [BEFORE] utterance at the encoding step (for even and odd distances to the target utterance [AFTER]). Accordingly, the learning objective for the curved property (see equation 5) is slightly modified by adding hard negatives for false speaker matches (see appendix D).

## 4 Short-Term Planning Approach (Transformer Guidance)

As described in section 3.2.1, we utilize imaginary embeddings as a re-ranking model. Specifically, we let a task-specific dialogue transformer generate 100 candidate utterances given the context d[: hl] of a fixed length hl for every sample dialogue d ∈ C. To get a diverse distribution of utterances, we choose nucleus sampling with p = 0.8 and a temperature of t = 0.8. The generated utterances from the transformer are then projected into the imaginary embedding space, and their goal similarity to d[hl + gd] is measured. We then check the rank of the true utterance from the test set that leads to the goal utterance. The average rank and the distribution of ranks within the dialogue are evaluated with respect to different history lengths hl and different goal distances gd.

## 5 Next Utterance Selection With Curving

Motivated by the curved property, the most suitable next utterance uf ∈ UF for a dialogue sequence his should on average be closest to the individual utterances of the sequence.
We can assess a relative likelihood between all future utterances by measuring the entailment strength PE (i.e. imagining the closeness) of every uf to the history of utterances based on the cosine similarity as follows: $$P_{E}(u_{f}|h i s)=\sum_{u_{i}\in h i s}{\frac{[\mathbf{B}]\ \mathbf{u_{i}}\ \ [\mathbf{A}]\ \mathbf{u_{f}}}{\|[\mathbf{B}]\ \mathbf{u_{i}}\|\ \|[\mathbf{A}]\ \mathbf{u_{f}}\|}}\quad(2)$$ In the ranking evaluation, we sort the results of ∀uf ∈ UF : PE(uf |his) to determine the rank of the true utterance. Notably, we can observe the entailment strength (or activation) of individual utterances to a future one, which enables many other applications. During inference, while the dialogue partner is still speaking, we can precompute the entire context (apart from the new incoming utterance). Furthermore, we can utilize the curved context for greedily selecting the next goal max g∈G PE(g|his) in our long-term planning experiments. We refer to this as greedy curving. ## 6 Long-Term Planning Approaches In this section, we describe how Imaginary Embeddings can be used to order goals (a set of utterances) within dialogues for long-term planning. The models are evaluated with **LSTPE**, a given set of goals G with |G| = 3, and an equal distance between each node. ## 6.1 Imaginary Embedding Chains ![4_Image_0.Png](4_Image_0.Png) Imaginary Embeddings are perfectly suited for this task as they can be concatenated into cosine similarity chains by using the ([B] before and [A] after token) as illustrated in figure 4. We mathematically define it as: $$s(o)=\left(\sum_{i\in o}{\frac{[\mathbf{B}]\ \mathbf{g_{i}}\ [\mathbf{A}]\ \mathbf{g_{i+1}}}{\|[\mathbf{B}]\ \mathbf{g_{i}}\|\ \|[\mathbf{A}]\ \mathbf{g_{i+1}}\|}}\right)\quad(3)$$ where we choose the order of goals o ∈ O by the highest similarity score s with max o∈O (s(o)) (strongest entailment strength) of a given sequence o =< g1*, ..., g*n > of goals gi ∈ G. While this chain can be arbitrarily long and, thanks to GPU tensor computations calculated rather quickly, the complexity with O(n!) for a brute force computation remains high. ## 6.2 Imaginary Embedding Chains With History Curving Finally, we combine the concepts of Imaginary Embedding Chains and Curving by generating for every order [g1, g2, g3] a score (equation 4): $$s^{\prime}(g_{1},g_{2},g_{3})=s(o)+P_{E}(g_{1}|h i s)$$ $$-\frac{1}{2}P_{E}(g_{2}|h i s)-P_{E}(g_{3}|h i s)\tag{4}$$ where s(o) is the chain score of the given order based on equation 3 and PE(gi|his) is the history curving score for the corresponding goal. We motivate the addition of g1 and the subtraction of g3 (as well as g2) based on the presumption that g1 should be closest while g3 should be the furthest away to the history with respect to the curved property. Note that other than the simple Imaginary Embedding Chains (IEC), IEC + curving requires some dialogue context and is therefore not suitable for dialogue planning without context. ## 7 Experiments Our experiments are conducted on two dialogue corpora, DailyDialog (Li et al., 2017) and the Microsoft Dialogue Challenge (MDC) corpus (Li et al., 2018). We experiment with two transformer architectures BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) to generate Imaginary Embeddings. In the short-term planning (transformer guidance) setting, we let our Imaginary Embeddings guide DialoGPT (Zhang et al., 2019) for DailyDialog and GODEL (Peng et al., 2022) for the MDC corpus. 
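For concreteness, the scorers of equations (2) to (4) reduce to a few lines of vector arithmetic once every utterance is encoded with its [B] or [A] prefix. The sketch below assumes a trained CCL bi-encoder with a sentence-transformers-style encode() (such as the one sketched in §3.4); function names are illustrative. Short-term guidance (§4) then amounts to ranking the transformer's candidate utterances by this same similarity to the goal, and greedy curving picks the goal with the highest curving score.

```python
import numpy as np
from itertools import permutations

B, A = "[BEFORE]", "[AFTER]"

def encode(model, texts, token):
    # L2-normalised embeddings, so dot products equal cosine similarities.
    return model.encode([f"{token} {t}" for t in texts], normalize_embeddings=True)

def curving_score(model, history, candidates):
    """Equation (2): P_E(u_f | his) for every candidate future utterance."""
    H = encode(model, history, B)       # (n_history, dim)
    U = encode(model, candidates, A)    # (n_candidates, dim)
    return (U @ H.T).sum(axis=1)        # summed cosine similarity over the history

def chain_score(model, order):
    """Equation (3): score of one goal ordering via consecutive [B]->[A] links."""
    Gb, Ga = encode(model, order, B), encode(model, order, A)
    return float(sum(Gb[i] @ Ga[i + 1] for i in range(len(order) - 1)))

def chain_plus_curving(model, history, order):
    """Equation (4): chain score combined with the curved context (3 goals)."""
    p = curving_score(model, history, list(order))
    return chain_score(model, order) + p[0] - 0.5 * p[1] - p[2]

def best_goal_order(model, history, goals):
    """Pick the ordering with the strongest entailment strength."""
    return max(permutations(goals), key=lambda o: chain_plus_curving(model, history, o))
```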
For the next utterance selection, we use pre-trained checkpoints of DialogRPT (Gao et al., 2020) and ConvRT (Henderson et al., 2020) as baselines. Furthermore, we add BM25 (Robertson and Zaragoza, 2009) as well as an ablation study with the two special tokens (before and after) but without the curved learning objective that we explore in the appendix G. ## 7.1 Experimental Setup While the DailyDialog data set has a test corpus of 1000 dialogues, we first have to generate a test data set for MDC. We do so by extracting the last 333 samples for each of the three task-oriented domains (movie-ticket booking, restaurant reservation, and taxi booking). This leaves us with 11,118 dialogues as training data for DailyDialog and 9088 training samples for MDC. ## 7.2 Self-Supervised Training Apart from combining consecutive utterances of the same speaker and removing dialogues with utterances longer than 200 tokens, we apply no further pre-processing on the training data. As described in §3.4, we pre-train all our architectures in stage (1) with a mixed training objective of NLI and the Curved Contrastive Learning (CCL) on the DailyDialog corpus for 5 epochs. For all MDC models, we follow up with a second stage where we train on the target corpora with the curved property learning objective only for domain adaptation.While Long Term planning performs best after 5 epochs of further fine-tuning, short-term planning requires only between 0.5 to 1 epoch(s). We provide all models including model cards on Huggingface as well as our code as part of a python package pip install imaginaryNLP (open-sourced under Apache-2.0 license) in the following GitHub repository †. ## 7.3 Evaluation Data Sets The evaluation data sets DailyDialog and MDC are constructed analogously. We construct the datasets for STP based on history length and goal in distance & LTP based on history length, goal in distance, goal distances respectively as illustrated in figure 4. Since MDC with an average number of 6.51 turns is even shorter than DialyDialog with 7.84, we are limited in the long-term planning to a shorter context as well as a goal in distance length. ## 7.4 Evaluation & Discussion In the following sections, we investigate how well these embeddings perform on our introduced LSTPE (§3.2) and on the next utterance selection task. In the main paper, we focus on our empirical findings and present the results of the experiments for space reasons in aggregated form. 
We provide †https://github.com/Justus-Jonas/ imaginaryNLP | Human Utterance Ranking vs 100 utterances sampled from DialoGPT Large / GODEL Large (p=0.8, t=0.8) | | | | | | | | | | | |--------------------------------------------------------------------------------------------------------------------|---------------------|---------|---------|---------|---------|--------|---------|---------|---------|---------| | Imaginary Embedding | Imaginary Embedding | | | | | | | | | | | without Speaker Token | with Speaker Token | | | | | | | | | | | Goal in Distance | Hits@5 | Hits@10 | Hits@25 | Hits@50 | Average | Hits@5 | Hits@10 | Hits@25 | Hits@50 | Average | | (in %) | (in %) | (in %) | (in %) | Rank | (in %) | (in %) | (in %) | (in %) | Rank | | | DailyDialog Test Corpus Guidance even g distance | 29.36 | 35.76 | 51.03 | 67.9 | 34.59 | 27.78 | 36.22 | 53.78 | 71.36 | 32.56 | | Guidance odd g distance | 31.31 | 39.21 | 54.09 | 72.78 | 30.61 | 63.49 | 72.18 | 83.21 | 91.06 | 12.9 | | MDC Test Corpus Guidance even g distance | 20.79 | 29.32 | 48.04 | 70.85 | 34.86 | 39.18 | 50.9 | 69.29 | 83.1 | 22.09 | | Guidance odd g distance | 25.41 | 32.17 | 46.8 | 67.31 | 35.88 | 63.06 | 70.65 | 80.94 | 89.16 | 14.01 | | Table 1: Aggregated short-term planning evaluation for odd (unveiling utterances of the dialogue partner) and even | | | | | | | | | | | a detailed analysis in the appendix, where we explore examples as well as demonstrate the curved property of dialogues in these embeddings. This is illustrated as vector chains in figure 7 or the average similarity of different distances and directions within dialogues (appendix B). ## 7.4.1 Short-Term Planning As shown in the short-term planning aggregated results table 1, we split the results based on odd distance length (unveiling utterances of the dialogue partner) and even distance (which would be uttered by the transformer). Both have at least 20% of the true candidate utterances in the top 5 (Hits@5) (of 100) ranks, 50% in the top 25 (Hits@25), and a max average rank of 32.56. We observe that speaker token-based imaginary embeddings on odd distances can even achieve 63% in the top 5 (Hits@5) with the highest average rank of 14.01. This can be expected as odd utterances will be uttered by our dialogue partner which we can greatly influence by our preceding utterance. Interestingly, we find that it is significantly easier to plan 3 turns ahead rather than 2 turns. This is portrayed in the detailed analysis based on the history length, goal distances, and the first goal distance (goal in distance) in table 3 (appendix). Our analysis unveils that the DailyDialog models have an advantage through their more diverse utterance distribution in selecting the true candidate utterance. Furthermore, they perform more consistently across different history lengths and goal distances. MDC, on the other hand, performs overall better but has a higher variance in its performance (with samples of different history lengths and goal distance). Concluding that the score distribution in the ranking process is either more strongly peaked (most in data sets with lots of request intents) or it more is flattened (especially on data with majorly inform intents). We explore this in detail in the appendix E. This flattened score distribution can be expected as in many cases of providing information, the actual information has little impact on future turns considering a structured task-oriented setting (e.g. replying on how many people will attend a reservation). 
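For reference, the aggregated numbers in table 1 follow the usual ranking bookkeeping: the human continuation is scored together with the 100 sampled candidates, and its position is summarized as Hits@K and average rank. A small illustrative helper (not taken from the released code):

```python
import numpy as np

def rank_of_true(true_score: float, candidate_scores: np.ndarray) -> int:
    """1-based rank of the true utterance among the sampled candidates."""
    return int((candidate_scores > true_score).sum()) + 1

def aggregate(ranks, ks=(5, 10, 25, 50)):
    """Hits@K (in %) and average rank over all evaluation samples."""
    ranks = np.asarray(ranks)
    metrics = {f"hits@{k}": 100.0 * float((ranks <= k).mean()) for k in ks}
    metrics["average_rank"] = float(ranks.mean())
    return metrics
```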
## 7.4.2 Next Utterance Selection Based On Curved History The sequence modeling capability is evaluated based on the normalized average rank (of the true following utterance compared to all other utterances at the same position of the corresponding corpus). We find that the DailyDialog corpus clearly outperforms MDC across all variations. As we demonstrate in figure 5, DailyDialog performs best with an average rank in the top 10% over all history lengths (the entire history projected in the curved space with speaker tokens). For sequences longer than 2 turns, it even outperforms all our baselines DialogRPT (human vs. random) by at least 2.8% and ConvRT by 0.5%. Overall, we find that DialogRPT has trouble with increasing sequence lengths as input and find that keeping the last two utterances performs best. Notably, we can reduce the computation costs of the dialogue context compared to DialogRPT and also ConvRT due to our relativistic approach which we explore in more detail in the appendix C.1. While our experiments on MDC for the next utterance selection show weak results, in summary, MDC shows the same fluctuations between primarily inform & requests intents. While the ranking approaches based on only the last utterance are most of the time superior, we observe on odd turns (where we have a lot of request intents) the entire history usually performs better relative to even ![7_image_0.png](7_image_0.png) distances. Conversely, we notice that approaches based on only the last utterance are especially good on turns where we see more informing intents (replying to the request). We further explore this in the appendix C.2. ## 7.4.3 Long Term Planning Evaluation The short turn length of the two corpora becomes especially troublesome in the long-term planning evaluation. Here, we are limited to short context/history lengths as well as short goal distances and (first) goal in distances.Across all models and datasets, we observe a solid average rank of 1.87 (between 1 and 2 for all approaches) on identifying the correct order of 3 goal utterances within their 6 possible orders as table 2 unveils. Note that Greedy Curving has only to predict only the immediate next goal (1/3) while the other LTP models the entire order (1/6). While our MDC embeddings had especially trouble with utterance selection in width (selecting an utterance from the same dialog depth §7.4.2), we find that MDC shows a stronger performance on greedy goal selection (Greedy Curving (GC)) on classic embeddings thanks to the solidified sequential structure of task-oriented dialogues. This advantage lets MDC outperform DailyDialog also on all other approaches. When Speaker tokens come into play, however, MDC drops while DailyDialog improves in performance compared to classic imaginary embeddings. Imaginary Embedding Chains (IEC) and with curved context (IEC & CU) show similar performance in aggregated form. However, when the context is close (i.e. the first goal is not far away) IECs with a curved context prevail. This changes with increasing distance of goals or first goal in distance as highlighted in table 4 of the appendix. Here, IECs with no context keep an advantage. Similarly, we observe a drop in performance over longer distances for Greedy Curving. In terms of the MDC planning capability, the performance drop-off between the two most common intents, request and inform, is similar, although not as severe as in short-term planning or the next utterance selection. 
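As a reading aid for table 2, the long-term protocol can be summarized as follows: the true order of the three goals is compared, via the chain or curving score, against the four partially ordered permutations and against the fully reversed order. A sketch of this bookkeeping under our reading of §3.2.2 (names are illustrative):

```python
from itertools import permutations

def ltp_eval(goals, score):
    """`goals` is the true order; `score(order) -> float` is e.g. the IEC chain
    score of equation (3) or its curved variant of equation (4)."""
    true, reverse = tuple(goals), tuple(reversed(goals))
    partially_ordered = [o for o in permutations(goals) if o not in (true, reverse)]
    s_true = score(true)
    partial_rank = 1 + sum(score(o) > s_true for o in partially_ordered)  # for Hits@1..4
    reverse_hit1 = s_true > score(reverse)                                # Hits@1 vs. reverse order
    total_rank = 1 + sum(score(o) > s_true for o in permutations(goals))  # average total rank
    return partial_rank, reverse_hit1, total_rank
```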
## 8 Conclusion In this paper, we introduced Curved Contrastive Learning, a novel technique for generating forwardentailing language embeddings. We demonstrated that these can be utilized on various sequence modeling tasks by only using the cosine similarity between the separately encoded sequence members in the curved space. In particular, for the next utterance selection by imagining the closeness of every context utterance to candidate utterances in the curved space (where DailyDialog's true utterances are constantly in the top 10%), outperforming our pre-trained baselines DialogRPT and ConvRT on sequences longer than 2 turns while reducing encoding costs. Furthermore, we have shown their pattern recognition ability on the ordering/identification of future representations (with an average rank of 1.87/6) even at longer distances and far apart. We also demonstrated that these embeddings can be applied to guiding dialogue transformers to approach a goal over multiple turns. In particular, by imagining the closeness of candidate utterances towards the goal through the transitive properties of the curved space. Following up on our claim, that even chit-chat can be considered goal-oriented **(RQ1)**, we find strong evidence of planning capability in chit-chat conversations over | Imaginary Embedding w.o. Speaker Token | Imaginary Embedding with Speaker Token | | | | | | | | | | | | |------------------------------------------|------------------------------------------|-------------------|---------------|--------|--------|---------|--------|--------|--------|--------|--------|---------| | partially ordered | Reverse order | partially ordered | Reverse order | | | | | | | | | | | Model | Hits@1 | Hits@2 | Hits@3 | Hits@4 | Hits@1 | Average | Hits@1 | Hits@2 | Hits@3 | Hits@4 | Hits@1 | Average | | (in %) | (in %) | (in %) | (in %) | (in %) | Rank | (in %) | (in %) | (in %) | (in %) | (in %) | Rank | | | DailyDialog Test Corpus IEC 49.99 70.62 | 85.26 | 93.42 | 79.17 | 2.01 | 51.60 | 72.22 | 86.82 | 94.94 | 81.18 | 1.94 | | | | IEC & CU | 50.69 | 71.24 | 85.09 | 93.63 | 78.54 | 1.99 | 51.07 | 72.98 | 86.9 | 94.97 | 79.87 | 1.94 | | GC | 57.87 | 82.47 | - | - | - | 1.6 | 57.32 | 83.89 | - | - | - | 1.59 | | MDC Test Corpus IEC 58.72 | 77.43 | 90.28 | 96.38 | 85.28 | 1.77 | 56.83 | 77.50 | 90.19 | 95.44 | 84.52 | 1.80 | | | IEC & CU | 61.59 | 77.72 | 90.15 | 96.79 | 86.25 | 1.74 | 58.63 | 78.62 | 91.20 | 95.72 | 85.44 | 1.76 | | GC | 66.30 | 89.61 | - | - | - | 1.44 | 56.05 | 80.59 | - | - | - | 1.64 | multiple turns. E.g. 48.83% / 61.56% (within the top 5 / top 10 utterances in the re-ranking) on 3 turns ahead. Our RQ2 can be answered by the fact that we observe significant differences in the plannability of different intents. Our empirical analysis shows that request intents are significantly easier to plan than informing intents. While our focus in this paper was mainly on the introduction of Imaginary Embeddings and their utilization to dialogue planning, we leave much more space for further evaluation, analysis, and applications on the curved properties of our✘✘✘✘ universe ‡embeddings in future works. ## 9 Limitations One of our limitations is that the data is split for short-term planning and long-term planning at fixed positions which on one side shows the overall planning capability on different datasets unbiasedly but on the other hand mixes the planning ability of the datasets with the overall performance of the embeddings. 
We have demonstrated in section E.2 that this can lead in many cases to unplannable examples. While this means that our embeddings should overall perform better than our results suggest, in the future, we should create either a human-filtered dataset where planning is always possible or either create a human benchmark as a further baseline. Furthermore, we rely in short-term planning (transformer guidance) on the generated utterance distributions by transformers where we have to balance between semantic diversity and the likelihood of utterances. We control these with temperature and nucleus sampling (top p) and found the best trade-off with a temperature of 0.8 and a top p of 0.8. Nonetheless, this can still lead to utterances that might lead to the goal but that would be not considered by humans as very likely based on the given context as we explore in E.2. Furthermore, in the next utterance selection, we utilize the publicly available checkpoints which have been evaluated in the paper (Gao et al., 2020) on DailyDialog but both were seemingly not trained on an MDC-like task-oriented corpus. Since we find that the next utterance selection based on the curved property of the context in a task-oriented setting like MDC is almost always worse than just taking the last utterance, we have not expanded experiments in this domain. ## 10 Ethics Like other language models, our model is prone to bias from training data sets (Schramowski et al., 2022)(Mehrabi et al., 2019). This is something to keep in mind when fine-tuning the model for domain adaptation. Since the models are for guidance only, we do not see any direct threats related to language generation. Still, if an individual intentionally wants to harm others and trains a language model to generate harmful utterances, our model could be employed to support this process. In contrast, however, we argue that these embeddings have great potential through their transitive properties to foresee and deflect harmful utterances from afar. Considering the risk that language models pose to humans (Weidinger et al., 2021), these embeddings could be utilized as a filter on top of generative language models, e.g. removing utterances that would increase the probability of leading to an utterance of a large set of harmful utterances. Our proposed model has a relatively small model size and shows higher efficiency during training & inference compared to DialogRPT and ConvRT, therefore we see great potential for reducing the carbon footprint in utterance retrieval tasks, in accordance with recent efforts in NLP (Strubell et al., 2019) (Patterson et al., 2021). ## References Ben Alderson-Day and Charles Fernyhough. 2015. Inner speech: Development, cognitive functions, phenomenology, and neurobiology. *Psychological Bulletin*, 141:931 - 965. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. Zhibin Chen, Yansong Feng, and Dongyan Zhao. 2022. Entailment graph learning with textual entailment and soft transitivity. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. Robin Dunbar, Anna Marriott, and Neill Duncan. 1997. Human conversational behavior. Human nature (Hawthorne, N.Y.), 8:231–246. Nouha Dziri, Ehsan Kamalloo, Kory W. Mathewson, and Osmar R. Zaïane. 2019. Evaluating coherence in dialogue systems using entailment. *CoRR*, abs/1904.03371. Albert Einstein. 
1921. *Relativity: The Special and General Theory*. Routledge. Xiang Gao, Yizhe Zhang, Michel Galley, Chris Brockett, and Bill Dolan. 2020. Dialogue response ranking training with large-scale human feedback data. CoRR, abs/2009.06978. Romain Grandchamp, Lucile Rapin, Marcela PerroneBertolotti, Cédric Pichat, Célise Haldin, Emilie Cousin, Jean-Philippe Lachaux, Marion Dohen, Pascal Perrier, Maëva Garnier, Monica Baciu, and Hélène Loevenbruck. 2019. The ConDialInt Model: Condensation, Dialogality, and Intentionality Dimensions of Inner Speech Within a Hierarchical Predictive Control Framework. *Frontiers in Psychology*, 10:2019. Matthew Henderson, Iñigo Casanueva, Nikola Mrkšic,´ Pei-Hao Su, Tsung-Hsien Wen, and Ivan Vulic. 2020. ´ ConveRT: Efficient and accurate conversational representations from transformers. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 2161–2174, Online. Association for Computational Linguistics. Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2017. Billion-scale similarity search with gpus. *CoRR*, abs/1702.08734. Lili Kotlerman, Ido Dagan, Bernardo Magnini, and Luisa Bentivogli. 2015. Textual entailment graphs. Natural Language Engineering, 21:699 - 724. Xiujun Li, Yu Wang, Siqi Sun, Sarah Panda, Jingjing Liu, and Jianfeng Gao. 2018. Microsoft dialogue challenge: Building end-to-end task-completion dialogue systems. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 986–995, Taipei, Taiwan. Asian Federation of Natural Language Processing. Ziming Li, Julia Kiseleva, and Maarten de Rijke. 2021. Improving response quality with backward reasoning in open-domain dialogue systems. *CoRR*, abs/2105.00079. Che Liu, Rui Wang, Jinghua Liu, Jian Sun, Fei Huang, and Luo Si. 2021. Dialoguecse: Dialogue-based contrastive learning of sentence embeddings. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2019. A survey on bias and fairness in machine learning. *CoRR*, abs/1908.09635. Clara Meister, Tiago Pimentel, Gian Wiher, and Ryan Cotterell. 2022. Typical decoding for natural language generation. *CoRR*, abs/2202.00666. Rauni Myllyniemi. 1986. Conversation as a system of social interaction. *Language & Communication*, 6(3):147–169. Abiola Obamuyide and Andreas Vlachos. 2018. Zeroshot relation classification as textual entailment. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 72–78, Brussels, Belgium. Association for Computational Linguistics. David A. Patterson, Joseph Gonzalez, Quoc V. Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David R. So, Maud Texier, and Jeff Dean. 2021. Carbon emissions and large neural network training. CoRR, abs/2104.10350. Baolin Peng, Michel Galley, Pengcheng He, Chris Brockett, Lars Liden, Elnaz Nouri, Zhou Yu, Bill Dolan, and Jianfeng Gao. 2022. Godel: Large-scale pre-training for goal-directed dialog. arXiv. Ramya Ramakrishnan, Hashan Buddhika Narangodage, Mauro Schilman, Kilian Q. Weinberger, and Ryan McDonald. 2022. Long-term control for dialogue generation: Methods and evaluation. Nils Reimers and Iryna Gurevych. 2019. 
Sentence-bert: Sentence embeddings using siamese bert-networks. CoRR, abs/1908.10084. Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: Bm25 and beyond. *Found. Trends Inf. Retr.*, 3(4):333–389. Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf, and Kristian Kersting. 2022. Large pre-trained language models contain humanlike biases of what is right and wrong to do. Nature Machine Intelligence, 4(3):258–268. Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. *CoRR*, abs/1906.02243. Milene Teixeira and Mauro Dragoni. 2022. A review of plan-based approaches for dialogue management. Cognitive Computation, 14. Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, Zac Kenton, Sasha Brown, Will Hawkins, Tom Stepleton, Courtney Biles, Abeba Birhane, Julia Haas, Laura Rimell, Lisa Anne Hendricks, William S. Isaac, Sean Legassick, Geoffrey Irving, and Iason Gabriel. 2021. Ethical and social risks of harm from language models. *CoRR*, abs/2112.04359. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. *CoRR*, abs/1909.00161. Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. 2020. Graph contrastive learning with augmentations. In *Advances in* Neural Information Processing Systems, volume 33, pages 5812–5823. Curran Associates, Inc. Dewen Zeng, Yawen Wu, Xinrong Hu, Xiaowei Xu, Haiyun Yuan, Meiping Huang, Jian Zhuang, Jingtong Hu, and Yiyu Shi. 2021. Positional contrastive learning for volumetric medical image segmentation. CoRR, abs/2106.09157. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2019. Dialogpt: Large-scale generative pre-training for conversational response generation. ## A Attribution This work stems from the mandatory master's internship of Justus-Jonas Erker at the German Research Center for Artificial Intelligence supervised by Stefan Schaffer and Gerasimos Spanakis. ## B Imaginary Embedding Extended Analysis We analyze the Imaginary Embeddings based on ![10_image_0.png](10_image_0.png) their average similarity to different distances of utterances pairs within dialogues as well as their direction as shown in figure 6. While the model's average similarity is far from the training objective, the scores show a favorable decay considering the distance for positive examples as well as a relatively low similarity for false direction utterance pairs. Furthermore, we have illustrated the curved Figure 6: Average Imaginary Embedding Similarity to correct and false direction utterances based on turn distance on DailyDialog Test Corpus property of these embeddings as directed graphs of dialogues in figure 7 where we notice a tendency of utterances at the beginning of the dialogue in the close right and the last utterance (encoded with the after token) deeper on the left. ## C Next Utterance Selection Extended Analysis For the next utterance selection we provide an extended description for our speed comparison as well as the MDC results. 
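The efficiency comparison in C.1 below rests on the relativistic encoding: the history-side [BEFORE] scores can be accumulated while the dialogue partner is still speaking, so only the newest utterance has to be encoded and matched at inference time. A sketch of that caching, assuming a sentence-transformers-style bi-encoder (class and attribute names are illustrative):

```python
import numpy as np

class IncrementalCurving:
    """Cache the history-side part of equation (2) for a fixed candidate pool."""

    def __init__(self, model, candidates):
        self.model = model
        self.U = model.encode([f"[AFTER] {c}" for c in candidates],
                              normalize_embeddings=True)       # (n_candidates, dim)
        self.cached = np.zeros(len(candidates))                 # running history scores

    def add_history_utterance(self, utterance):
        """O(1) new encodings per turn; returns P_E(u_f | his) for all candidates."""
        h = self.model.encode([f"[BEFORE] {utterance}"], normalize_embeddings=True)[0]
        self.cached = self.cached + self.U @ h                  # add the newest utterance's scores
        return self.cached
```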
## C.1 Computation Comparison

Since the bi-encoder architectures are significantly more efficient than DialogRPT, we compare ConvRT and Imaginary Embeddings in more detail. Considering the encoding of utterances for some sequence of length n, ConvRT requires each context representation to encode every previous utterance again, i.e. O(n), while Imaginary Embeddings only encode the last utterance, i.e. O(1). Therefore, the entire next utterance selection task for the DailyDialog corpus (up to a context length of 10 utterances) requires ConvRT to generate 6219 context representations, for which in total 26733 utterances are encoded. Imaginary Embeddings reduce the computation of encoding utterances through their relativistic approach by a factor of 26733/6219 ≈ 4.3.

![11_image_0.png](11_image_0.png)

In terms of context to candidate utterance matching, Imaginary Embeddings can pre-compute the entire context up to utterance n−1 with shape (batch, h_len − 1, emb) while the dialogue partner is speaking. Since we obtain normalized embeddings from the sentence transformer, we can compute the cosine similarity-based score for context and candidate pairs in one simple batch matrix multiplication U ⊙ H.T by transposing the history with dimensions (1, 2). We then sum across the second dimension (history dimension), as equation 4 illustrates, and store the score matrix s_{1,...,n−1} in memory. At inference, we have to compute scores only between the last utterance and the candidate utterances, matching the number of dot products of ConvRT. Once the new score matrix s_n for every pair is generated, we simply sum the two score matrices s = s_{1,...,n−1} + s_n.

## C.2 MDC Results

We demonstrate the results of the MDC next utterance selection in figure 8, where we observe, as described in the main paper, the symmetry between inform and request intents, either profiting from only the last utterance or from the entire history.

![11_image_1.png](11_image_1.png)

## D Speaker Token Learning Objective

$$\forall i\in\{1,\dots,l\}:\begin{cases}([E][B]\;u[0],\;[A]\;u[i],\;s=\frac{l-i}{l})&\text{if }i\bmod 2=0\\([O][B]\;u[0],\;[A]\;u[i],\;s=\frac{l-i}{l})&\text{if }i\bmod 2\neq 0\\([E][B]\;u[i],\;[A]\;u[0],\;s=0)&\text{if }i\bmod 2=0\\([O][B]\;u[i],\;[A]\;u[0],\;s=0)&\text{if }i\bmod 2\neq 0\\([O][B]\;u[i],\;[A]\;u'[r],\;s=0)&(p=\tfrac{1}{4})\\([E][B]\;u[i],\;[A]\;u'[r],\;s=0)&(p=\tfrac{1}{4})\\([O][B]\;u'[r],\;[A]\;u[i],\;s=0)&(p=\tfrac{1}{4})\\([E][B]\;u'[r],\;[A]\;u[i],\;s=0)&(p=\tfrac{1}{4})\end{cases}\tag{5}$$

where [A] = [AFTER], [B] = [BEFORE], [E] and [O] are the even- and odd-distance speaker tokens, u the utterances in the observed window, u′ a set of random utterances, and s the cosine similarity score. For the random utterance matching we assign an equal probability p to every possible combination.

## E Extended Short-Term Planning Evaluation

As part of the extended short-term planning evaluation, we investigate the extended results based on the history length, goal distances, and the first goal distance (goal in distance) in table 3 and demonstrate examples.

## E.1 Detailed Short-Term Planning Evaluation

Table 3 unveils that additional speaker tokens show improvement on the MDC test corpus across all tested categories. While classic embeddings show a similar performance on MDC across all even distances, we can observe two spikes at positions (3, 1) and (5, 1) with (hl, gd) on odd distances, with 51.17% / 45.80% in the top 5 respectively. At these positions, we monitor a 33% increase in the standard deviation of the distribution of guidance scores on average, i.e. the model is much more decisive in its ranking.
We analyzed the intent at these positions and find a two times increase in requests and a 38% decrease in inform intents to the data set's average. While the speaker token-based embeddings show that we can overcome this gap for odd distances, we still find that the two lowest performers on (4, 1) & (4, 3) with "only" 53.03% & 51.45% in the top 5 have all a minimum of 80% of informing intents. Since the two corpora use separate latent spaces, we do not compare them on a simple standard deviation. Instead, we take the sum of average standard deviations as a baseline and divide it by the sum of the standard deviations (for each data set) of the standard deviations (for each transformer utterance distribution) to measure the variation in performance over different testing parameters history length, goal distances, (first) goal in distance. With a 35% higher score, DailyDialog shows less variance through different test parameters. Nonetheless, we find that DailyDialog has a 12% higher semantic variance across all utterances in the transformer-generated distributions than MDC by measuring their average semantic similarity with a simple semantic sentence transformer. ## E.2 Examples Of Short-Term Planning While we provide construction of our evaluation datasets, we still want to highlight some of the strengths and weaknesses of our introduced embeddings. In the example on the left of figure 9, we can see that without knowing what the person is going to say, the model can sometimes move toward the goal too greedily. In the example on the right, we see that the model can also understand more complex relations, where the only way to get to a conversation state where someone would utter "look behind you. They are coming this way" would be in a manner of playing catch me as the model ranks it on the first position. A lot of the weaker ranking results are due to the fixed split of data as demonstrated in figure 10. We observe in the first example (left) that the model tries to unveil the utterance "You're right" by trying to get the other person into an argument (rank 1) where it hopes the person would then agree to their own opinion 3 turns later or by trying to unveil the utterance right away (rank 2). In the example in the middle, we see the drawback of purely relying on the transformer's context-aware utterance generation as the selected utterance of "pint of wine" might be closer to fruits than beer but at the same time is not a valid answer. This can be also observed in the last example (right). 
| Human Utterance Ranking vs 100 utterances sampled from DialoGPT Large / GODEL Large (p=0.8, t=0.8) | | | | | | | | | | | | | | |-------------------------------------------------------------------------------------------------------------|---------------------|---------------|-------|-------------------------------------------------------------------------------------------------------------------------------------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | without Speaker Token | Imaginary Embedding | | | | | | | | | | | | | | Imaginary Embedding | with Speaker Token | | | | | | | | | | | | | | Embedding Type | History Length | Goal Distance | n | Hits@5 (in %) Hits@10 (in %) Hits@25 (in %) Hits@50 (in %) Average Rank Hits@5 (in %) Hits@10 (in %) Hits@25 (in %) Hits@50 (in %) Average Rank | | | | | | | | | | | DailyDialog Test Corpus | 2 | 2 | 741 | 23.08 | 31.44 | 50.74 | 70.31 | 33.65 | 24.70 | 33.87 | 55.06 | 75.57 | 30.28 | | 2 | 4 | 534 | 23.03 | 31.65 | 48.13 | 66.85 | 35.61 | 22.10 | 32.02 | 51.31 | 71.72 | 32.57 | | | 5 | 2 | 479 | 25.05 | 31.52 | 44.47 | 63.47 | 38.03 | 20.88 | 29.23 | 49.69 | 69.52 | 34.87 | | | 5 | 4 | 323 | 15.79 | 22.60 | 39.01 | 56.66 | 43.02 | 17.65 | 24.15 | 42.11 | 66.25 | 38.27 | | | 10 | 2 | 102 | 48.04 | 51.96 | 60.78 | 77.45 | 27.18 | 36.27 | 45.10 | 61.76 | 70.59 | 30.37 | | | Guidance with even goal distance gd | | | | | | | | | | | | | | | (saying goal by yourself) | 2 | 1 | 918 | 42.37 | 50.54 | 66.88 | 84.64 | 21.74 | 70.59 | 78.54 | 87.15 | 94.55 | 9.15 | | 2 | 3 | 651 | 23.66 | 33.33 | 51.00 | 71.89 | 32.05 | 52.53 | 60.52 | 74.04 | 84.79 | 19.19 | | | 5 | 1 | 534 | 35.02 | 43.26 | 58.61 | 76.40 | 27.90 | 67.79 | 77.53 | 86.70 | 93.26 | 10.46 | | | 5 | 3 | 385 | 18.44 | 23.64 | 40.00 | 61.04 | 40.92 | 48.83 | 61.56 | 76.62 | 85.97 | 18.83 | | | 10 | 1 | 183 | 36.61 | 44.81 | 54.10 | 69.95 | 30.49 | 77.60 | 82.51 | 91.26 | 96.72 | 6.86 | | | Guidance with odd goal distance gd | | | | | | | | | | | | | | | (unveiling goal utterance in dialogue partner) MDC Test Corpus | 2 | 2 | 600 | 20.67 | 28.83 | 43.00 | 64.33 | 37.68 | 45.83 | 55.00 | 69.33 | 84.33 | 20.41 | | 2 | 4 | 417 | 21.58 | 31.18 | 47.00 | 67.63 | 36.02 | 47.48 | 55.16 | 70.26 | 83.45 | 20.85 | | | 3 | 2 | 545 | 22.02 | 32.66 | 50.64 | 69.72 | 33.33 | 34.68 | 44.40 | 66.24 | 78.35 | 25.08 | | | 3 | 4 | 344 | 26.16 | 38.08 | 53.49 | 77.62 | 28.97 | 41.28 | 53.20 | 67.44 | 85.76 | 20.93 | | | 4 | 2 | 417 | 20.62 | 29.50 | 46.28 | 64.99 | 36.58 | 37.89 | 47.96 | 67.63 | 85.13 | 21.06 | | | 4 | 4 | 234 | 16.67 | 23.08 | 47.01 | 70.51 | 37.24 | 40.60 | 53.42 | 73.93 | 89.74 | 18.04 | | | 5 | 2 | 344 | 18.02 | 24.42 | 40.70 | 60.47 | 40.94 | 29.36 | 41.86 | 61.05 | 77.03 | 26.79 | | | 5 | 4 | 161 | 20.50 | 34.78 | 56.52 | 78.26 | 28.09 | 44.72 | 58.39 | 75.78 | 88.82 | 17.32 | | | Guidance with even goal distance gd | | | | | | | | | | | | | | | (saying goal by yourself) Guidance with odd goal distance gd (unveiling goal utterance in dialogue partner) | 2 | 1 | 893 | 20.83 | 27.32 | 40.54 | 61.59 | 38.89 | 63.83 | 69.99 | 81.41 | 90.26 | 13.46 | | 2 | 3 | 545 | 31.19 | 38.53 | 55.41 | 73.76 | 29.92 | 69.91 | 77.06 | 83.30 | 90.28 | 11.78 | | | 3 | 1 | 600 | 51.17 | 58.00 | 70.33 | 82.00 | 20.75 | 69.17 | 74.17 | 83.33 | 91.50 | 12.03 | | | 3 | 3 | 417 | 15.83 | 25.18 | 43.88 | 68.35 | 37.87 | 67.39 | 73.62 | 83.93 | 93.29 | 11.25 | | | 4 | 1 | 545 | 18.17 | 26.06 | 43.30 | 67.16 | 36.16 | 53.03 | 63.49 | 76.70 | 84.04 | 18.23 | | | 4 | 3 | 344 | 
17.44 | 25.58 | 42.44 | 61.34 | 39.51 | 51.45 | 62.50 | 76.16 | 83.14 | 18.42 | | | 5 | 1 | 417 | 45.80 | 52.28 | 63.07 | 74.34 | 26.56 | 73.38 | 77.22 | 85.85 | 91.85 | 10.85 | | | 5 | 3 | 234 | 16.24 | 19.23 | 32.91 | 58.55 | 46.47 | 71.37 | 77.78 | 88.46 | 92.74 | 9.92 | | | Table 3: Detailed Short-Term Planning Evaluation with n (number of evaluation samples) | | | | | | | | | | | | | | ![14_image_2.png](14_image_2.png) ![14_image_1.png](14_image_1.png) ![14_image_0.png](14_image_0.png) ![14_image_3.png](14_image_3.png) ## F Long-Term Planning Results We present our detailed Long Term planning results in table 4 as well as examples in the following subsection. ## Long-Term Planning Examples F.1 Alike for short-term planning, we demonstrate examples to present the weaknesses and as well as strengths of the embeddings. In figure 11 we show two very easy examples, where we can follow the conversation well without knowing the replies of the other dialogue partner. This changes especially in figure 12 where in the left example it is also for us very difficult to order the corresponding utterances. While one could argue that emergency calls tend to start with the location of the incident, the utterance "I haven't checked yet" makes the ordering of the utterances without any further context very difficult. This can also be observed in the right example of figure 12 , however, one could argue that based on the context to which both IEC+CU and GC have access, the predicted order (of these two) makes more sense than the original reply order. Nonetheless, both examples show that some of these orders are debatable. ## G Ablation Study As an ablation study, we compare two variations of a simple contrastive to our introduced curved contrastive objective. The first variation has the exact same setup as our approach with the same mixed learning objective of NLI, a dialogue window of l = 5, the same hard negatives (including ones for the directional property) but without the "curved" similarity scores between [BEFORE] and [AFTER] tokens. 
In other words with simple labels of 0 (not before and after each other within 5 turns) or 1 (before and after the utterance with | LTP Planning Evaluation for 3 Goals | | | | | | | | | | | | | | | | | |---------------------------------------------------------------------------------------|---------------------|-------------------|---------------|-------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|------| | without Speaker Token | Imaginary Embedding | | | | | | | | | | | | | | | | | Imaginary Embedding | with Speaker Token | | | | | | | | | | | | | | | | | partially ordered | Reverse order | partially ordered | Reverse order | | | | | | | | | | | | | | | Distances | First Goal | | | | | | | | | | | | | | | | | Model | History Length | Goal | In Distance | n | Hits@1 Hits@2 Hits@3 Hits@4 (in %) Hits@1 (in %) Average Rank Hits@1 Hits@2 Hits@3 Hits@4 (in %) Hits@1 (in %) Average (in %) (in %) (in %) (in %) (in %) (in %) Rank | | | | | | | | | | | | | DailyDialog Test Corpus 2 | 2 | 0 | 385 | 57.66 | 72.47 | 87.79 | 93.25 | 81.56 | 1.88 | 58.70 | 76.10 | 91.43 | 97.14 | 84.16 | 1.76 | | | 2 | 2 | 1 | 323 | 46.13 | 68.73 | 84.83 | 94.74 | 79.26 | 2.06 | 51.39 | 70.90 | 86.07 | 94.43 | 81.42 | 1.97 | | | 2 | 2 | 2 | 230 | 46.52 | 67.83 | 83.04 | 90.87 | 74.78 | 2.11 | 46.96 | 67.39 | 83.91 | 92.17 | 78.70 | 2.09 | | | 2 | 2 | 3 | 183 | 44.26 | 66.12 | 77.60 | 91.26 | 73.22 | 2.20 | 50.82 | 68.85 | 83.61 | 93.99 | 77.60 | 2.03 | | | 4 | 2 | 0 | 230 | 46.52 | 67.83 | 83.04 | 90.87 | 74.78 | 2.11 | 46.96 | 67.39 | 83.91 | 92.17 | 78.70 | 2.09 | | | 4 | 2 | 1 | 183 | 44.26 | 66.12 | 77.60 | 91.26 | 73.22 | 2.20 | 50.82 | 68.85 | 83.61 | 93.99 | 77.60 | 2.02 | | | 4 | 2 | 2 | 102 | 43.14 | 68.63 | 82.35 | 92.16 | 73.53 | 2.13 | 39.22 | 60.78 | 79.41 | 93.14 | 64.71 | 2.27 | | | 2 | 4 | 0 | 51 | 37.25 | 56.86 | 78.43 | 86.27 | 76.47 | 2.41 | 47.06 | 66.67 | 96.08 | 98.04 | 84.31 | 1.92 | | | IEC | 2 | 2 | 0 | 385 | 57.92 | 76.36 | 90.13 | 95.84 | 84.42 | 1.78 | 59.74 | 77.92 | 92.21 | 98.18 | 85.45 | 1.72 | | 2 | 2 | 1 | 323 | 46.75 | 67.80 | 85.14 | 95.05 | 77.71 | 2.05 | 47.37 | 70.59 | 85.45 | 94.12 | 78.95 | 2.02 | | | 2 | 2 | 2 | 230 | 47.39 | 69.57 | 80.00 | 90.00 | 73.48 | 2.14 | 46.09 | 70.43 | 83.04 | 92.61 | 75.22 | 2.08 | | | 2 | 2 | 3 | 183 | 44.26 | 63.39 | 77.60 | 92.35 | 66.12 | 2.26 | 45.90 | 62.84 | 79.78 | 93.44 | 71.04 | 2.18 | | | 4 | 2 | 0 | 230 | 50.87 | 69.57 | 82.61 | 94.35 | 73.48 | 2.05 | 50.87 | 75.65 | 86.52 | 95.22 | 74.78 | 1.94 | | | 4 | 2 | 1 | 183 | 47.54 | 68.31 | 83.06 | 94.54 | 71.04 | 2.11 | 53.55 | 72.68 | 81.42 | 92.90 | 69.95 | 2.06 | | | 4 | 2 | 2 | 102 | 42.16 | 66.67 | 83.33 | 94.12 | 71.57 | 2.19 | 40.20 | 59.80 | 78.43 | 93.14 | 66.67 | 2.35 | | | 2 | 4 | 0 | 51 | 41.18 | 74.51 | 84.31 | 92.16 | 78.43 | 2.11 | 56.86 | 86.27 | 90.20 | 96.08 | 84.31 | 1.70 | | | IEC & CU | 2 | 2 | 0 | 385 | 70.13 | 88.05 | - | - | - | 1.42 | 70.13 | 88.83 | - | - | - | 1.41 | | 2 | 2 | 1 | 323 | 56.97 | 83.28 | - | - | - | 1.60 | 51.39 | 81.11 | - | - | - | 1.67 | | | 2 | 2 | 2 | 230 | 46.52 | 76.09 | - | - | - | 1.77 | 50.43 | 81.74 | - | - | - | 1.68 | | | 2 | 2 | 3 | 183 | 48.09 | 74.32 | - | - | - | 1.78 | 44.81 | 73.77 | - | - | - | 1.81 | | | 4 | 2 | 0 | 230 | 62.61 | 86.52 | - | - | - | 1.51 | 63.48 | 85.65 | - | - | - | 1.51 | | | 4 | 2 | 1 | 183 | 50.82 | 82.51 | - 
| - | - | 1.67 | 56.28 | 84.15 | - | - | - | 1.60 | | | 4 | 2 | 2 | 102 | 45.10 | 74.51 | - | - | - | 1.80 | 39.22 | 75.49 | - | - | - | 1.85 | | | 2 | 4 | 1 | 51 | 78.43 | 90.20 | - | - | - | 1.31 | 82.35 | 90.20 | - | - | - | 1.27 | | | MDC Test Corpus 2 | 2 | 0 | 234 | 52.99 | 79.91 | 90.60 | 97.01 | 85.47 | 1.79 | 50.85 | 74.79 | 89.74 | 96.15 | 83.33 | 1.88 | | | 2 | 2 | 1 | 161 | 66.46 | 78.88 | 91.93 | 95.65 | 86.34 | 1.67 | 67.08 | 82.61 | 91.93 | 95.03 | 88.20 | 1.63 | | | IEC | 2 | 2 | 2 | 106 | 48.11 | 72.64 | 88.68 | 95.28 | 81.13 | 1.95 | 47.17 | 71.70 | 85.85 | 94.34 | 79.25 | 2.01 | | 3 | 2 | 0 | 161 | 66.46 | 78.88 | 91.93 | 95.65 | 86.34 | 1.52 | 67.08 | 82.61 | 91.93 | 95.03 | 88.20 | 1.46 | | | 3 | 2 | 1 | 106 | 48.11 | 72.64 | 88.68 | 95.28 | 81.13 | 1.80 | 47.17 | 71.70 | 85.85 | 94.34 | 79.25 | 1.83 | | | 3 | 2 | 2 | 75 | 56.00 | 81.33 | 92.00 | 96.00 | 82.67 | 1.61 | 56.00 | 81.33 | 92.00 | 94.67 | 88.00 | 1.58 | | | GC IEC & CU | 2 | 2 | 0 | 234 | 65.81 | 86.32 | 93.59 | 97.86 | 93.16 | 1.56 | 60.68 | 82.48 | 94.87 | 98.72 | 92.31 | 1.63 | | 2 | 2 | 1 | 161 | 67.08 | 77.02 | 90.06 | 96.27 | 84.47 | 1.70 | 65.22 | 80.75 | 90.06 | 95.03 | 85.71 | 1.69 | | | 2 | 2 | 2 | 106 | 51.89 | 69.81 | 86.79 | 96.23 | 81.13 | 1.95 | 50.00 | 72.64 | 88.68 | 93.40 | 78.30 | 1.95 | | | 3 | 2 | 0 | 161 | 68.32 | 80.12 | 93.17 | 96.27 | 85.71 | 1.49 | 52.80 | 75.78 | 82.61 | 95.65 | 80.75 | 1.79 | | | 3 | 2 | 1 | 106 | 50.94 | 68.87 | 85.85 | 95.28 | 80.19 | 1.84 | 42.45 | 61.32 | 77.36 | 91.51 | 81.13 | 2.02 | | | 3 | 2 | 2 | 75 | 46.67 | 66.67 | 81.33 | 94.67 | 78.67 | 1.94 | 28.00 | 50.67 | 73.33 | 85.33 | 58.67 | 2.22 | | | 2 | 2 | 0 | 234 | 81.20 | 95.73 | - | - | - | 1.23 | 76.92 | 95.30 | - | - | - | 1.28 | | | 2 | 2 | 1 | 161 | 67.70 | 88.20 | - | - | - | 1.44 | 45.96 | 79.50 | - | - | - | 1.75 | | | GC | 2 | 2 | 2 | 106 | 50.00 | 84.91 | - | - | - | 1.65 | 45.28 | 66.98 | - | - | - | 1.88 | | 3 | 2 | 0 | 161 | 72.67 | 90.06 | - | - | - | 1.37 | 39.13 | 69.57 | - | - | - | 1.91 | | | 3 | 2 | 1 | 106 | 46.23 | 83.96 | - | - | - | 1.70 | 48.11 | 67.92 | - | - | - | 1.84 | | | 3 | 2 | 2 | 75 | 45.33 | 72.00 | - | - | - | 1.83 | 24.00 | 41.33 | - | - | - | 2.35 | | | Table 4: Detailed Long-Term Planning Evaluation with n = number of evaluation samples | | | | | | | | | | | | | | | | | ![16_image_0.png](16_image_0.png) a distance between 1-5 turns). Since this does not take any distance into account we have a second ablation variant that takes only direct utterance pairs (so a window size of 2) with the corresponding two labels and otherwise the same setup. Like our embeddings, we train the two variations on BERT and RoBERTa architectures respectively. In contrast to our embeddings, we find that both ablation studies find their optimum for our three takes after already 1-2 epochs. In the following sections, we present the performance of the ablation studies to our approach, note that we refer to the ablation with a window size of l = 5 as ab5 and the one with l = 2 as ab2. ## G.1 Ablation Study Ltp As shown in table 5 the ablation study with a dialogue window of l = 5 shows stronger performance in ordering utterances than its counterpart of l = 2. Thanks to the solidified structure of the task-oriented corpus the ablation comes relatively close to the performance of our imaginary embeddings. For Greedy Curving (GC) in particular, it can detect the next goal out of 3 even slightly better than our embeddings without speaker tokens. 
However, when the solidified structure of dialogue disappears (on the chit-chat dataset DailyDialog) our models show much stronger performance than their ablation study. ## G.2 Ablation Study Stp ![16_Image_1.Png](16_Image_1.Png) While the ablation study with the dialogue window of l = 5 shows solid performance in ordering utterances, it has severe trouble understanding the pathways between utterances as can be seen in 6. Especially, on the MDC dataset for close members in their own group (observation window). Here we observe that the performances increase over longer distances which goes hand in hand with the better greedy curving performance. Overall, the ablation study with a dialogue window of l = 2 shows through its learning objective a better understanding of its close neighbors as l = 5. While once again the ablation studies do not get close to our embeddings on the DailyDialog corpus, on the MDC corpus it can outperform our embeddings on direct neighbors (distance 1) while being significantly worse on longer distances. Since it only learned the properties between two speakers it has notable trouble mapping utterances from the same speaker as can be seen by even distances on the MDC corpus. ## G.3 Ablation Study Next Utterance Selection We compare both ablation studies to our embeddings in figure 13 on DailyDialog on the same variation as Imaginary Embeddings, either the entire context or only the last utterance. Both ablation studies perform best on the variation closest to their training target, in other words, ab5 on the entire context and ab2 only on the last utterance. With the | Ablation Study | Imaginary Embedding with / w.o Speaker Token | | | | | | | | | | | | |------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------|---------------|-------|-------|------|--------|-------|-------|-------|-------|------| | partially ordered | Reverse order | partially ordered | Reverse order | | | | | | | | | | | Model | Hits@1 Hits@2 Hits@3 Hits@4 (in %) Hits@1 (in %) Average Rank Hits@1 Hits@2 Hits@3 Hits@4 (in %) Hits@1 (in %) Average (in %) (in %) (in %) (in %) (in %) (in %) Rank | | | | | | | | | | | | | DailyDialog Test Corpus IEC ab5 / ST | 33.69 | 56.30 | 78.26 | 90.00 | 68.04 | 2.74 | 51.60 | 72.22 | 86.82 | 94.94 | 81.18 | 2.13 | | IEC ab2 / w.o | 25.69 | 48.69 | 67.43 | 84.53 | 58.31 | 3.15 | 49.99 | 70.62 | 85.26 | 93.42 | 79.17 | 2.21 | | applies to Greedy Curving where IEC & CU ab5 / ST | 33.48 | 56.52 | 79.13 | 89.13 | 69.13 | 2.73 | 51.07 | 72.98 | 86.9 | 94.97 | 79.87 | 2.13 | | IEC & CU ab2 / w.o | 26.71 | 49.25 | 69.38 | 85.28 | 60.03 | 3.09 | 50.69 | 71.24 | 85.09 | 93.63 | 78.54 | 2.19 | | GC ab5 / ST | 50.00 | 75.22 | - | - | - | 1.75 | 57.32 | 83.89 | - | - | - | 1.59 | | GC ab2 / w.o | 43.69 | 72.68 | - | - | - | 1.84 | 57.87 | 82.47 | - | - | - | 1.6 | | MDC Test Cor pus IEC ab5 / ST | 54.62 | 74.15 | 89.42 | 96.5 | 84.71 | 2.01 | 56.83 | 77.50 | 90.19 | 95.44 | 84.52 | 1.96 | | IEC ab2 / w.o | 41.52 | 65.19 | 84.50 | 94.17 | 75.61 | 2.39 | 58.72 | 77.50 | 90.19 | 95.44 | 84.52 | 1.92 | | IEC & CU ab5 / ST | 54.77 | 75.02 | 89.91 | 96.97 | 85.45 | 1.98 | 58.63 | 78.62 | 91.20 | 95.72 | 85.44 | 1.90 | | IEC & CU ab2 / w.o | 40.83 | 64.24 | 84.38 | 94.05 | 75.86 | 2.41 | 61.59 | 77.72 | 90.15 | 96.79 | 86.25 | 1.87 | | GC ab5 / 
ST | 66.63 | 89.86 | - | - | - | 1.43 | 56.05 | 80.59 | - | - | - | 1.64 | | GC ab2 / w.o | 48.77 | 72.68 | - | - | - | 1.72 | ´66.30 | 89.61 | - | - | - | 1.44 | | Table 5: Aggregated Long-Term Planning Ablation vs Imaginary Embeddings Study on 3 goals with ((2, 2, 2), (2, 2, | | | | | | | | | | | | | | Human Utterance Ranking vs 100 utterances sampled from DialoGPT Large / GODEL Large (p=0.8, t=0.8) | | | | | | | | | | | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------|---------|---------|---------|---------|--------|---------|---------|---------|---------| | Ablation Study | Imaginary Embedding | | | | | | | | | | | with (ST) / w.o Speaker Token | | | | | | | | | | | | Goal in Distance | Hits@5 | Hits@10 | Hits@25 | Hits@50 | Average | Hits@5 | Hits@10 | Hits@25 | Hits@50 | Average | | (in %) | (in %) | (in %) | (in %) | Rank | (in %) | (in %) | (in %) | (in %) | Rank | | | DailyDialog Test Corpus Guidance distance 1 (ab5 / ST) | 21.87 | 29.96 | 45.27 | 63.63 | 39.34 | 72.03 | 79.53 | 88.37 | 94.84 | 8.82 | | Guidance distance 1 (ab2 / w.o) | 45.57 | 52.83 | 68.03 | 82.27 | 22.27 | 38.06 | 46.26 | 59.86 | 77.00 | 26.70 | | Guidance distance 2 (ab5 / ST) | 19.70 | 26.47 | 42.57 | 61.94 | 40.71 | 27.28 | 36.07 | 55.50 | 71.89 | 31.83 | | Guidance distance 2 (ab2 / w.o) | 20.22 | 25.76 | 41.97 | 59.40 | 43.89 | 32.06 | 38.31 | 51.20 | 70.46 | 32.94 | | Guidance distance 3 (ab5 / ST) | 14.69 | 22.88 | 38.87 | 62.31 | 41.53 | 50.68 | 61.17 | 75.46 | 85.38 | 19.01 | | Guidance distance 3 (ab2 / w.o) | 20.21 | 27.55 | 43.135 | 63.35 | 39.96 | 21.18 | 28.62 | 45.42 | 66.47 | 36.47 | | Guidance distance 4 (ab5 / ST) | 15.25 | 22.54 | 35.94 | 57.16 | 45.60 | 28.28 | 36.37 | 52.05 | 70.83 | 33.28 | | Guidance distance 4 (ab2 / w.o) | 19.67 | 25.15 | 38.64 | 57.18 | 44.40 | 26.67 | 33.22 | 50.06 | 65.35 | 36.23 | | MDC Test Corpus Guidance distance 1 (ab5 / ST) | 6.99 | 11.00 | 23.62 | 43.88 | 54.94 | 61.48 | 68.97 | 79.32 | 88.59 | 14.94 | | Guidance distance 1 (ab2 / w.o) | 66.12 | 74.48 | 88.48 | 95.70 | 9.09 | 29.59 | 36.12 | 49.31 | 68.75 | 33.83 | | Guidance distance 2 (ab5 / ST) | 9.15 | 15.012 | 31.72 | 53.33 | 47.52 | 33.55 | 45.27 | 64.85 | 80.40 | 24.86 | | Guidance distance 2 (ab2 / w.o) | 4.1 | 6.29 | 14.24 | 32.13 | 63.97 | 20.84 | 28.22 | 45.27 | 67.90 | 36.13 | | Guidance distance 3 (ab5 / ST) | 8.40 | 12.92 | 28.735 | 51.41 | 49.50 | 65.03 | 72.74 | 82.96 | 89.96 | 12.84 | | Guidance distance 3 (ab2 / w.o) | 25.01 | 32.58 | 51.56 | 68.88 | 33.97 | 20.18 | 27.13 | 43.66 | 65.50 | 38.44 | | Guidance distance 4 (ab5 / ST) | 13.50 | 19.73 | 28.75 | 56.39 | 43.98 | 44.82 | 56.53 | 73.73 | 85.80 | 19.31 | | Guidance distance 4 (ab2 / w.o) | 2.95 | 3.76 | 9.34 | 21.3 | 73.7 | 20.73 | 30.42 | 50.80 | 73.80 | 33.59 | | Table 6: Aggregated short-term planning evaluation vs ablation study for different distances to goal. (ab2 ablation with l = 2), (ab5 ablation with l = 5), (w.o without Speaker Token), (ST with Speaker Token) | | | | | | | | | | | U1 11. 
[Figures 11 and 12: long-term planning example dialogues, showing the predicted goal orders against the correct order.]

Greedy Curving evaluation (table 5), one could suggest a stronger performance of ab5 rather than ab2. However, we find the exact opposite in the next utterance selection task, as we consider candidate utterances in width rather than in depth. Compared to the other baselines, the strongest ablation study is still 1.5% worse than the pre-trained DialogRPT, 3.69% worse than ConveRT, and 4.3% worse than our best imaginary embeddings. On MDC (figure 14), we observe, as described in §3.3, that considering only the last utterance shows the strongest results. Expectedly, the training objective of ablation l = 2, which only matches direct pairs, comes in handy here, outperforming all other approaches.

[Figures 13 and 14: next utterance selection results of the ablation studies compared to imaginary embeddings and the baselines on DailyDialog and MDC.]
antoun-etal-2023-data
Data-Efficient French Language Modeling with CamemBERTa
https://aclanthology.org/2023.findings-acl.320
Recent advances in NLP have significantly improved the performance of language models on a variety of tasks. While these advances are largely driven by the availability of large amounts of data and computational power, they also benefit from the development of better training methods and architectures. In this paper, we introduce CamemBERTa, a French DeBERTa model that builds upon the DeBERTaV3 architecture and training objective. We evaluate our model's performance on a variety of French downstream tasks and datasets, including question answering, part-of-speech tagging, dependency parsing, named entity recognition, and the FLUE benchmark, and compare against CamemBERT, the state-of-the-art monolingual model for French. Our results show that, given the same amount of training tokens, our model outperforms BERT-based models trained with MLM on most tasks. Furthermore, our new model reaches similar or superior performance on downstream tasks compared to CamemBERT, despite being trained on only 30% of its total number of input tokens. In addition to our experimental results, we also publicly release the weights and code implementation of CamemBERTa, making it the first publicly available DeBERTaV3 model outside of the original paper and the first openly available implementation of a DeBERTaV3 training objective.
# Data-Efficient French Language Modeling With Camem**Bert**A Wissam Antoun Benoît Sagot Djamé Seddah Inria, Paris {firstname,lastname}@inria.fr ## Abstract Recent advances in NLP have significantly improved the performance of language models on a variety of tasks. While these advances are largely driven by the availability of large amounts of data and computational power, they also benefit from the development of better training methods and architectures. In this paper, we introduce CAMEMBERTA, a French DeBERTa model that builds upon the DeBERTaV3 architecture and training objective. We evaluate our model's performance on a variety of French downstream tasks and datasets, including question answering, part-of-speech tagging, dependency parsing, named entity recognition, and the FLUE benchmark, and compare against CamemBERT, the state-of-the-art monolingual model for French. Our results show that, given the same amount of training tokens, our model outperforms BERT-based models trained with MLM on most tasks. Furthermore, our new model reaches similar or superior performance on downstream tasks compared to CamemBERT, despite being trained on only 30% of its total number of input tokens. In addition to our experimental results, we also publicly release the weights and code implementation of CAMEMBERTA, making it the first publicly available DeBERTaV3 model outside of the original paper and the first openly available implementation of a DeBERTaV3 training objective.1 ## 1 Introduction Advances in natural language processing (NLP) have been driven mainly by scaling up the size of pre-trained language models, along with the amount of data and compute required for training (Raffel et al., 2020; Radford et al., 2019; Rae et al., 2021; Fedus et al., 2021; Hoffmann et al., 2022). However, these are not the only factors to determine a model's downstream performance, as the model's architecture and training objective are also 1https://gitlab.inria.fr/almanach/CamemBERTa important. He et al. (2021b) showed that we can improve a model's performance by using disentangled attention, which uses two vectors to represent a token, one for position and one for content. He et al. (2021a) later showed that performance could be further improved by using ELECTRA's (Clark et al., 2020) self-supervised and sample-efficient replaced token detection objective. Another crucial aspect lies in the ability to train models faster, which allows for quick iteration and thus accelerates the research process and allows for more efficient exploration of new ideas (Izsak et al., 2021; Pan et al., 2022; Geiping and Goldstein, 2022). This research aims to develop data-efficient and optimized training techniques that can improve performance in downstream tasks, while reducing the required training corpus size and compute. To achieve this goal, we propose a new data-efficient French language model based on DeBERTaV3 (He et al., 2021a). Our proposed model aims to optimize the training process by using a sampleefficient training objective, a state-of-the-art model architecture, and an efficient implementation. We evaluate downstream performance with a variety of NLP tasks, including dependency parsing, partof-speech tagging, named entity recognition, text classification, and question answering. 
We compare our model to a BERT model trained with the masked language modeling (MLM) objective using the same tokenizer and training corpus, and to the state-of-the-art French language model, CamemBERT (Martin et al., 2020), which required three times as many training iterations. Our results show that our proposed model reaches or establishes a new state-of-the-art using one third of the computational budget of its main predecessors. Our contributions can be summarized as follows: - We propose a new data-efficient French language model, which we train based on our DeBERTaV3 re-implementation with our optimized training recipe. - We empirically show that under the same conditions, our model outperforms Transformer models trained with MLM on most tasks, and that it reaches or establishes a new stateof-the-art even when compared with models trained for three times as long. - Our release is the only publicly available implementation of DeBERTaV3's training objective, and the first for a monolingual DeBERTaV3 model other than the original paper. Our code and models are available under an open-source license2, making it easy for researchers to reproduce our results and build upon our work. ## 2 Related Works Transformers. This architecture has been widely adopted in NLP tasks such as language modeling, mainly due to the use of the self-attention mechanisms (Vaswani et al., 2017), which allow the model to weigh the importance of different parts of the input when making predictions. A downside of the Transformer block is that it is permutationinvariant, which inhibits the model from encoding word order information. Originally, the authors proposed to add either a fixed sinusoidal pattern or a learned positional embedding as positional bias the input token embedding. Later studies have shown that using relative positional embeddings is more effective (Shaw et al., 2018; Dai et al., 2019; Qu et al., 2021). Recently, He et al. (2021b) proposed a new disentangled attention mechanism, which considers both the relative position and the content of the input tokens as separate vectors. Pre-trained French Language Models. Current language models available for French are either trained using Masked Language Modeling (MLM) or Causal Language Modeling (CLM). CamemBERT (Martin et al., 2020) and FlauBERT (Le et al., 2020) are two of the most popular contemporary French models, both trained with masked language modeling. Other models include FrALBERT (Cattan et al., 2021), a French version of ALBERT (Lan et al., 2020), LePetit (Micheli et al., 2020) which is a small version of CamemBERT, and D'AlemBERT (Gabay et al., 2022), a RoBERTa (Liu et al., 2020) based language model targeted towards Early Modern French. BARThez (Kamal Eddine et al., 2021) is a sequence-to-sequence model trained with BART's objective (Lewis et al., 2020), and PAGnol (Launay et al., 2022) and Cedille (Müller and Laurent, 2022) are models trained with the CLM objective. To the best of our knowledge, there is no prior effort in developing language models with this improved disentangled attention mechanism and objectives other than MLM/CLM beyond English. ## 3 Camemberta**: Methodology** The following section details our proposed architecture and pre-training objective, along with descriptions for the downstream tasks. 
Architecture CAMEMBERTA is based on the DeBERTaV3 (He et al., 2021b) architecture which uses two vectors to encode the word and its position, with the premise being that the relative position of a word pair should also directly affect the computed attention weights. The V3 version optimizes the initial DeBERTa architecture by sharing the relative position embedding projection layers across all the encoder layers, and by adding a convolution layer aside the first encoder layer.3 We use a base model configuration with 12 layers and 12 attention heads, 768 hidden dimensions with 32k for vocabulary size. Training Objective We follow the DeBERTaV3 (He et al., 2021a) pretraining strategy by using the replaced token detection (RTD) pre-training loss first introduced in ELECTRA (Clark et al., 2020), with a generator and discriminator based on the DeBERTa architecture. During pre-training we project the generator embeddings to 256 dimensions and keep the generator model at 12 layers. During pre-training the generator model is trained using the MLM objective where we dynamically mask 15% of the input tokens. We then sample from the generator the masked tokens, and feed the output along with the unmasked tokens to the discriminator which is tasked to identify tokens that were replaced by the generator. The RTD objective increases sample efficiency since the model is predicting over all input tokens instead of the 15% masked tokens. In DeBERTaV3, the authors hypothesized and showed that sharing token embeddings between the generator and the discriminator results in a tugof-war situation, where the MLM and RTD tasks 3See Section 5.3 of the DeBERTa paper (He et al., 2021b) 2https://gitlab.inria.fr/almanach/CamemBERTa pull the embedding vectors into opposing directions. To alleviate this problem, the authors implemented Gradient-Disentangled Embedding Sharing (GDES), a method that re-parameterize the discriminator's token embeddings as ED = sg(EG)+E∆, where sg stops the gradient flow from the RTD loss to the generator token embeddings EG, and hence the loss gradient only updates a Difference Embedding matrix E∆ that is added to EG to form the discriminator token embeddings ED. After pretraining, E∆ and EG are summed to get the final ED and E∆ is then discarded. Pre-Training We pre-train on the French subset of CCNet4(Wenzek et al., 2020), the same corpus used to pre-train CamemBERT*CCNet* (Martin et al., 2020).5 Moreover we reuse CamemBERT*CCNet*'s tokenizer (Kudo and Richardson, 2018). By reusing the pre-training corpus and tokenizer, we isolate the performance differences to the model architecture and training objective variables. Optimization To speed up the pre-training experiments, we split the pre-training into two phases; in phase 1, the model is trained with a maximum sequence length of 128 tokens for 10,000 steps with 2,000 warm-up steps and a very large batch size of 67,584. In phase 2, maximum sequence length is increased to the full model capacity of 512 tokens for 3,300 steps with 200 warm-up steps and a batch size of 27,648. Because we use very large batch sizes, we optimize the model using the LAMB optimizer (You et al., 2020) with a learning rate of 6e−3, β1 = 0.878, and β2 = 0.974. ## 4 Experiments And Results Pre-Training Setup We re-implement the DeBERTaV3 RTD pre-training objective with GDES, since no public implementation was available at the time of writing. 
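To make the GDES mechanism described above concrete, the following is a minimal PyTorch-style sketch written by us for illustration; the class and method names are our own, and this is not the authors' TensorFlow implementation.

```python
# Minimal sketch (ours) of Gradient-Disentangled Embedding Sharing (GDES):
# the discriminator's token embeddings are E_D = stop_gradient(E_G) + E_Delta,
# so the RTD loss only updates the difference matrix E_Delta, while the
# generator's embeddings E_G are updated by the MLM loss alone.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GDESEmbedding(nn.Module):
    def __init__(self, generator_embedding: nn.Embedding):
        super().__init__()
        self.generator_embedding = generator_embedding            # E_G (shared)
        self.delta = nn.Parameter(                                 # E_Delta
            torch.zeros_like(generator_embedding.weight))

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # Stop-gradient on E_G: gradients from the RTD loss cannot flow back
        # into the generator's embedding table.
        weight = self.generator_embedding.weight.detach() + self.delta
        return F.embedding(input_ids, weight)

    def merged_weight(self) -> torch.Tensor:
        # After pre-training, E_G and E_Delta are summed once to form the
        # final discriminator embeddings; E_Delta is then discarded.
        return self.generator_embedding.weight + self.delta
```

In this sketch, gradients from the RTD loss update only the difference matrix, while the shared generator embeddings are updated solely through the generator's MLM loss, which is the tug-of-war remedy described above.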
Our training implementation is based on Nvidia's ELECTRA and BERT TensorFlow2 implementations.6 We train our models for 8 days on 6 Nvidia A40 with Horovod (Sergeev and Balso, 2018), and make use of XLA compilation, mixed-precision and gradient accumulation to speed-up training and to fit large batch sizes with our limited compute. During pre-training, our model would have seen 133B tokens compared to 419B tokens for 4See Appendix 4 for more information on dataset choice. 5We go over the pertaining dataset choice in the experiments section. 6https://github.com/NVIDIA/DeepLearningExamples/ CamemBERT*CCNet* which was trained for 100K steps. This represents roughly 30% of CamemBERT's full training. Hence for a fair comparison, we train a RoBERTa model, which we dub CamemBERT30%, using our same exact pretraining setup but with the MLM objective. Downstream Evaluation We compare our models, CamemBERT*CCNet*, and CamemBERT30%, on a diverse set of French downstream tasks and datasets, namely: Question Answering (QA) on FQuAD 1.0 (d'Hoffschmidt et al., 2020), Part-OfSpeech (POS) tagging and Dependency Parsing on GSD (McDonald et al., 2013), Rhapsodie (Lacheret et al., 2014), Sequoia (Candito and Seddah, 2012; Candito et al., 2014) in their UD v2.2 versions and the French Social Media Bank7(Seddah et al., 2012), Named Entity Recognition (NER) on the 2008 version of FTB (Abeillé et al., 2000; Candito and Crabbé, 2009) with NER annotation by Sagot et al. (2012), and the FLUE benchmark (Le et al., 2020). We use the dataset splits as provided by their respective authors, and we finetune using welltested scripts from the Hugging Face *Transformers* library and the HOPS parser (Grobol and Crabbé, 2021). We only perform hyper-parameter tuning for the NER and QA tasks. See Appendix C for task-specific details. **Bold** text shows the best statistically significant score over 5 seeds. Question Answering. We evaluate our model on the FQuAD 1.0 dataset (d'Hoffschmidt et al., 2020), which is a SQuAD (Rajpurkar et al., 2016) style French question-answering dataset with 20731 examples for training, and 3188 for evaluation. The results shown in Table 2 show that our model outperforms CamemBERT30% by 6.01 F1 points, but shows no statistically significant improvement over CamemBERT*CCNet* F1 score, and exact match (EM) score. Part-of-Speech and Dependency Parsing. We report our results on 4 diverse French treebanks. For the parser training, we make use of the HOPS parser (Grobol and Crabbé, 2021) implementation, which is a graph-based dependency parser inspired by Dozat and Manning (2017). Our configuration uses the Transformer model's last layer in addi-7We follow Riabi et al. 
(2021) and use their shuffled version of the treebank, which they split into around 2000 sentences for training, and 1000 for each the dev and test sets | GSD | RHAPSODIE | SEQUOIA | FSMB | NER | | | | | | |----------------|-------------|------------|------------|------------|------------|------------|------------|------------|------------| | MODEL | UPOS | LAS | UPOS | LAS | UPOS | LAS | UPOS | LAS | F1 | | CamemBERT30% | 98.55±0.05 | 94.26±0.03 | 97.61±0.12 | 83.19±0.62 | 99.32±0.08 | 94.09±0.06 | 94.63±0.11 | 80.13±0.41 | 91.04±0.76 | | CamemBERTCCNet | 98.57±0.07 | 94.35±0.15 | 97.62±0.08 | 84.29±0.56 | 99.35±0.09 | 94.78±0.12 | 94.80±0.16 | 81.34±0.63 | 89.97±0.50 | | CAMEMBERTA | 98.55±0.05 | 94.38±0.15 | 97.52±0.14 | 84.23±0.08 | 99.44±0.07 | 94.85±0.14 | 94.80±0.09 | 80.74±0.25 | 90.33±0.54 | | Model | F1 | EM | |----------------|------------|------------| | FrALBERT | 72.6∗XX0 | 55.1∗XXX | | CamemBERT30% | 75.14±0.17 | 56.19±0.27 | | CamemBERTCCNet | 80.98±0.48 | 62.51±0.54 | | CAMEMBERTA | 81.15±0.38 | 62.01±0.45 | Table 1: POS tagging, **dependency parsing** and NER results on the test sets of our French datasets. *UPOS* (Universal Part-of-Speech) refers here to POS tagging accuracy, and LAS measures the overall accuracy of labeled dependencies in a parsed sentence. Table 2: Question Answering results on FQuAD 1.0. tion to FastText embeddings (Bojanowski et al., 2017), character-level bi-directional RNN embeddings, and word embeddings trained during the fine-tuning phase. Table 1 shows that our proposed model consistently outperforms CamemBERT30%, and competes with CamemBERT*CCNet* on all 4 treebanks. Named Entity Recognition is performed on the French Treebank (FTB) which contains 350k tokens in 27k sentences extracted from news articles. Our results in Table 1, surprisingly show that CamemBERT30% outperforms CamemBERT*CCNet*, while not being statistically better than our model. FLUE Benchmark We use datasets from the French Language Understanding Evaluation (FLUE) benchmark (Le et al., 2020), namely the French part of the paraphrase identification dataset PAWS-X (Yang et al., 2019), and of XNLI (Conneau et al., 2018), in addition to CLS, a binary classification dataset with Amazon reviews taken from Amazon. Our results (Table 3) show that our model outperforms all models on the CLS movie classification task, and matches the performance of CamemBERT*CCNet* on the other FLUE tasks. Pre-training Dataset Choice We choose CCNet as our pre-training dataset instead of the more common OSCAR dataset (Ortiz Suárez et al., 2019), as (i) it was shown to produce less offensive output (Launay et al., 2022) and (ii) it allowed us to be fully comparable with many of the Camem- Table 3: Text classification results (Accuracy) on the FLUE benchmark. ∗Results taken from Le et al. *(2020)*. BERT models (Martin et al., 2020), enabling thus meaningful comparisons. Nevertheless, we also ran experiments with CamemBERT*OSCAR*, and found that it performed slightly worse than CamemBERT*CCNet*, as shown in Table 5 Appendix A. Pre-training Compute and CO2 Impact Our model was trained for 8 days on 6 A40 GPUs, compared to CamemBERT which was trained on 256 V100 GPUs for one day, which is roughly equivalent to 28 days of training on 6 A40 GPUs, since an NVIDIA A40 GPU is about 1.5x faster than a V100 GPU on language modeling tasks according to recent benchmarks.8 Following the reports by Luccioni et al. (2022) and Cattan et al. 
(2022) on the environmental impact of language model training, we use Lannelongue et al.'s (2021) online carbon footprint calculator to provide the following estimates: CAMEMBERTA's pre-training used 700kWh and emitted 36kg CO2 compared to 3.32MWh and 170kg for CamemBERT.9 | Model | CLS | PAWS-X | XNLI | |----------------|------------|------------|------------| | FrALBERT | 72.17±3.32 | 76.29±1.28 | 66.87±0.42 | | FlauBERT | 93.22∗ 000 | 89.49∗ 000 | 80.6∗ 0000 | | CamemBERT30% | 93.28±0.19 | 88.94±0.14 | 79.89±0.64 | | CamemBERTCCNet | 94.62±0.04 | 91.36±0.38 | 81.95±0.51 | | CAMEMBERTA | 94.92±0.13 | 91.67±0.17 | 82.00±0.17 | ## 5 Discussion Our experiments clearly show that given the same training corpus, tokenizer, and total number of examples seen during training, CAMEMBERTA outperforms the MLM trained CamemBERT model 8See https://lambdalabs.com/blog/nvidia-rtx-a40benchmarks. 9These estimates are specific to our training infrastructure situated in France. These estimates highlight the remarkable efficiency achieved by CamemBERTa's pretraining process. on all tasks except NER on FTB and POS tagging on Rhapsodie. Moreover, our model implementation is able to match or outperform a fully trained CamemBERT model, trained on around 3 times more samples and more compute. The strong performance of our model on higher level FLUE tasks suggest that lower level tasks such as POS tagging and dependency parsing are less challenging for current generation models, since they mostly require surface level information which the model can capture early in the training process, as suggested by Martin et al. (2020), compared to tasks such as question answering and text classification which require more complex processing. Taking a step back and looking at the only DeBERTa model that includes French, mDeBERTa (He et al., 2021a) we can see (cf. Table 4) that our model only requires 6.6% of its multilingual counterpart training samples to achieve competitive performance while additionally also outperforming the XLM-R model (Conneau et al., 2020) trained on a much larger training sample size. | XNLI | Steps | # tokens† | Size‡ | | |-------------|---------|-------------|---------|--------| | mDeBERTa∗ | 84.4 | 500k | 2T | 0.295T | | CAMEMBERTA | 82.0 | 33k†† | 0.139T | 0.032T | | XLM-R∗∗ | 81.4 | 1.5M | 6T | 0.295T | | C.BERTCCNet | 81.95 | 100k | 0.419T | 0.032T | This confirms the interest in using such training paradigms in compute limited scenarios for semantically demanding tasks such as question-answering or natural-language inference. Last but not least, other competitive language models for French are available and although not the primary focus of this paper, we conducted a comparative analysis involving FlauBERT (Le et al., 2020) and FrALBERT (Cattan et al., 2021). The results, presented in Table 5 in Appendix A, demonstrate the better performance of our model across all evaluated tasks in comparison to these French models. Additionally, it is worth noting that FlauBERT was trained for 17 days with 32 V100 GPUs, which is equivalent to 60 days of training on 6 A40 GPUs. This represents a 7.5-fold increase in computational resources employed compared to CAMEMBERTA. ## 6 Conclusion We presented CAMEMBERTA, a data-efficient French language model trained on a large corpus of French text and the first publicly available DeBERTaV3-style pretrained model and implementation. 
For a fair evaluation we reused the same corpus and tokenizer as CamemBERT*CCNet*, but using only 30% of the total number of input training tokens. We compared the performance of both models in addition to an MLM model trained from scratch under the same setup as CAMEMBERTA, CamemBERT30%, on a variety of downstream tasks. Our experiments showed that our model outperforms CamemBERT30% on all tasks except NER on FTB, and that it is able to match and even surpass CamemBERT*CCNet*. Furthermore, we have also made our optimized code implementation and pretrained model weights publicly available for others to use. ## Limitations Although our model is more efficient than previous models trained using the MLM objective and the standard transformer architecture, we notice that the models runs around 30% slower. This is due to the disentangled attention mechanism, which is more computationally expensive than the standard attention mechanism. We also note that at the time of writing, the DeBERTaV3 TensorFLow 2 implementation available on HuggingFace's Transformers library (Wolf et al., 2020) experiences heavy slowdowns with TPU backends. Our attempts to solve this issue were unsuccessful, and we were unable to train our model on TPUs. ## Ethics Statement We propose a model trained using DeBERTaV3 style pre-training along with an optimized training implementation, which reduces training computation cost when compared to previous models, and hence greatly reduces the energy cost and environmental impact of language model training. We trained our model using the CCNet dataset, for which we direct the reader to for further discussion on bias and ethical considerations. Our experiments do not include any additional data collection or human annotators. Like other language models trained on massive corpora, there may be potential biases present in the training data, which could affect the output of our models. Therefore, we advise against using these models in production without thorough testing. All our experiments were carried out on clusters with energy sources consisting of nuclear (65–75%), 20% renewable, and the remaining from gas. ## Acknowledgements This work was partly funded by Benoît Sagot's chair in the PRAIRIE institute funded by the French national reseach agency (ANR as part of the "Investissements d'avenir" programme under the reference ANR-19-P3IA-0001. This work also received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 101021607. The authors are grateful to the OPAL infrastructure from Université Côte d'Azur for providing resources and support. ## References Anne Abeillé, Lionel Clément, and Alexandra Kinyon. 2000. Building a treebank for French. In *Proceedings of the Second International Conference on Language Resources and Evaluation (LREC'00)*, Athens, Greece. European Language Resources Association (ELRA). Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. *Transactions of the Association for Computational Linguistics*, 5:135–146. Marie Candito and Benoît Crabbé. 2009. Improving generative statistical parsing with semi-supervised word clustering. In *Proceedings of the 11th International Conference on Parsing Technologies* (IWPT'09), pages 138–141, Paris, France. Association for Computational Linguistics. Marie Candito, Guy Perrier, Bruno Guillaume, Corentin Ribeyre, Karën Fort, Djamé Seddah, and Eric De La Clergerie. 2014. 
Deep syntax annotation of the sequoia french treebank. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), Reykjavik, Iceland. European Language Resources Association (ELRA). Marie Candito and Djamé Seddah. 2012. Le corpus sequoia : annotation syntaxique et exploitation pour l'adaptation d'analyseur par pont lexical (the sequoia corpus : Syntactic annotation and use for a parser lexical domain adaptation method) [in French]. In *Proceedings of the Joint Conference JEP-TALNRECITAL 2012, volume 2: TALN*, pages 321–334, Grenoble, France. ATALA/AFCP. Oralie Cattan, Sahar Ghannay, Christophe Servan, and Sophie Rosset. 2022. Benchmarking transformers- based models on french spoken language understanding tasks. *arXiv preprint arXiv:2207.09152*. Oralie Cattan, Christophe Servan, and Sophie Rosset. 2021. On the Usability of Transformers-based models for a French Question-Answering task. In Recent Advances in Natural Language Processing (RANLP), Varna, Bulgaria. Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pretraining text encoders as discriminators rather than generators. In *ICLR*. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440– 8451, Online. Association for Computational Linguistics. Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating crosslingual sentence representations. In *Proceedings of* the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475–2485, Brussels, Belgium. Association for Computational Linguistics. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics, pages 2978–2988, Florence, Italy. Association for Computational Linguistics. Martin d'Hoffschmidt, Maxime Vidal, Wacim Belblidia, and Tom Brendlé. 2020. FQuAD: French Question Answering Dataset. *arXiv e-prints*, page arXiv:2002.06071. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In *International Conference on Learning Representations*. William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. *arXiv* preprint arXiv:2101.03961. Simon Gabay, Pedro Ortiz Suarez, Alexandre Bartz, Alix Chagué, Rachel Bawden, Philippe Gambette, and Benoît Sagot. 2022. From FreEM to d'AlemBERT: a large corpus and a language model for early Modern French. In *Proceedings of the Thirteenth Language Resources and Evaluation Conference*, pages 3367–3374, Marseille, France. European Language Resources Association. Jonas Geiping and Tom Goldstein. 2022. Cramming: Training a language model on a single gpu in one day. Loïc Grobol and Benoît Crabbé. 2021. Analyse en dépendances du français avec des plongements contextualisés. In *Actes de la 28ème Conférence sur le* Traitement Automatique des Langues Naturelles. Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021a. 
Debertav3: Improving deberta using electra-style pretraining with gradient-disentangled embedding sharing. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021b. Deberta: Decoding-enhanced bert with disentangled attention. In International Conference on Learning Representations. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556. Peter Izsak, Moshe Berchansky, and Omer Levy. 2021. How to train BERT with an academic budget. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 10644– 10652, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Moussa Kamal Eddine, Antoine Tixier, and Michalis Vazirgiannis. 2021. BARThez: a skilled pretrained French sequence-to-sequence model. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9369–9390, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018: System Demonstrations, Brussels, Belgium, October 31 - November 4, 2018, pages 66–71. Anne Lacheret, Sylvain Kahane, Julie Beliao, Anne Dister, Kim Gerdes, Jean-Philippe Goldman, Nicolas Obin, Paola Pietrandrea, and Atanas Tchobanov. 2014. Rhapsodie: un Treebank annoté pour l'étude de l'interface syntaxe-prosodie en français parlé. In 4e Congrès Mondial de Linguistique Française, volume 8, pages 2675–2689, Berlin, Germany. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In *International Conference on Learning Representations*. Loïc Lannelongue, Jason Grealey, and Michael Inouye. 2021. Green algorithms: quantifying the carbon footprint of computation. *Advanced science*, 8(12):2100707. Julien Launay, E.l. Tommasone, Baptiste Pannier, François Boniface, Amélie Chatelain, Alessandro Cappelli, Iacopo Poli, and Djamé Seddah. 2022. PAGnol: An extra-large French generative model. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 4275–4284, Marseille, France. European Language Resources Association. Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoit Crabbé, Laurent Besacier, and Didier Schwab. 2020. FlauBERT: Unsupervised language model pre-training for French. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 2479–2490, Marseille, France. European Language Resources Association. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2020. 
Ro{bert}a: A robustly optimized {bert} pretraining approach. Alexandra Sasha Luccioni, Sylvain Viguier, and AnneLaure Ligozat. 2022. Estimating the carbon footprint of bloom, a 176b parameter language model. *arXiv* preprint arXiv:2211.02001. Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric de la Clergerie, Djamé Seddah, and Benoît Sagot. 2020. CamemBERT: a tasty French language model. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7203– 7219, Online. Association for Computational Linguistics. Ryan McDonald, Joakim Nivre, Yvonne QuirmbachBrundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar Täckström, Claudia Bedini, Núria Bertomeu Castelló, and Jungmee Lee. 2013. Universal Dependency annotation for multilingual parsing. In *Proceedings* of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 92–97, Sofia, Bulgaria. Association for Computational Linguistics. Vincent Micheli, Martin d'Hoffschmidt, and François Fleuret. 2020. On the importance of pre-training data volume for compact language models. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 7853–7858, Online. Association for Computational Linguistics. Martin Müller and Florian Laurent. 2022. Cedille: A large autoregressive french language model. Pedro Javier Ortiz Suárez, Benoît Sagot, and Laurent Romary. 2019. Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures. Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC7) 2019. Cardiff, 22nd July 2019, pages 9 - 16, Mannheim. Leibniz-Institut für Deutsche Sprache. Rui Pan, Shizhe Diao, Jianlin Chen, and Tong Zhang. 2022. Extremebert: A toolkit for accelerating pretraining of customized bert. Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. 2020. BPE-dropout: Simple and effective subword regularization. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 1882–1892, Online. Association for Computational Linguistics. Anlin Qu, Jianwei Niu, and Shasha Mo. 2021. Explore better relative position embeddings from encoding perspective for transformer models. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2989–2997, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. 
Arij Riabi, Benoît Sagot, and Djamé Seddah. 2021. Can character-based language models improve downstream task performances in low-resource and noisy language scenarios? In Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), pages 423–436, Online. Association for Computational Linguistics. Benoît Sagot, Marion Richard, and Rosa Stern. 2012. Annotation référentielle du corpus arboré de Paris 7 en entités nommées (referential named entity annotation of the Paris 7 French TreeBank) [in French]. In *Proceedings of the Joint Conference JEP-TALNRECITAL 2012, volume 2: TALN*, pages 535–542, Grenoble, France. ATALA/AFCP. Djamé Seddah, Benoit Sagot, Marie Candito, Virginie Mouilleron, and Vanessa Combet. 2012. The French Social Media Bank: a treebank of noisy user generated content. In *Proceedings of COLING 2012*, pages 2441–2458, Mumbai, India. The COLING 2012 Organizing Committee. Alexander Sergeev and Mike Del Balso. 2018. Horovod: fast and easy distributed deep learning in TensorFlow. arXiv preprint arXiv:1802.05799. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In *Proceedings of the 2018 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464–468, New Orleans, Louisiana. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 4003–4012, Marseille, France. European Language Resources Association. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A cross-lingual adversarial dataset for paraphrase identification. In *Proceedings of the 2019 Conference on Empirical Methods* in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3687–3692, Hong Kong, China. Association for Computational Linguistics. Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, Training bert in 76 minutes. 
In International Confer- ## Appendix A Experiments Results On Oscar And Dropout Model UPOS LAS NER CLS PAWS-X XNLI F1F QuAD EM*F QuAD* FrALBERT 93.53 78.89 69.83 72.17 76.29 66.87 72.6∗55.1∗ FlauBERT 97.51 87.92 - 93.22∗89.49∗80.6∗- - CamemBERT*OSCAR* 97.50 88.24 88.19 94.61 90.87 81.38 79.92 61.15 CamemBERT*CCNet* 97.59 88.69 89.97 94.62 91.36 81.95 80.98 62.51 CAMEMBERTA 97.57 88.55 90.33 94.92 91.67 82.00 81.15 62.01 CAMEMBERTA*dropout* 97.56 88.57 90.03 94.46 91.42 81.91 79.37 60.29 Table 5: Comparison results of CamemBERT*OSCAR* and CamemBERT*CCNet*, and our model CAMEMBERTA, with and without dropout. Due to compatibility issues with FlauBERT's tokenizer, we were unable to conduct FlauBERT testing on FQuAD and NER using standard finetuning scripts. ∗*Results from the models' respective* papers Cattan et al. (2021) and (Le et al., *2020)*. ## B Negative Results In addition to our main results, we attempted to improve the performance of our model by adding BPEDropout (Provilkov et al., 2020) to the tokenization process, as it was shown that this method of subword regularization improves performance on translation tasks. We retrain our model with BPE-Dropout, dubbed CamemBERTa*dropout*, and compare the results to our original model in Table 5. We observe that by adding BPE-Dropout, we obtain a decrease in performance on most tasks, except for POS tagging and dependency parsing, where the performance does not change. ## C Hyper-Parameters Table 6: Hyper-parameters used for the Question Answering and Named Entity Recognition experiments. For experiments on the FLUE benchmark we use the same hyper-parameters as the authors of CamemBERT on the NLI task. As for POS tagging and dependency parsing, we use the same configurations as the one used in Riabi et al. (2021). | Hyper-parameter | Value | |---------------------|--------------------| | Max sequence length | 512 | | Batch size | 16 | | FP16 | Enabled | | Learning rate | {1.5e-5,2e-5,3e-5} | | Epochs | 8 | | Scheduler | linear | | Warmup steps | {0,0.1%} | | Seed | {1,25,42,666,1337} | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitiations section ✓ A2. Did you discuss any potential risks of your work? ethics section ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Left Blank. ✓ B1. Did you cite the creators of artifacts you used? Left blank. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. 
Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C ✓ **Did You Run Computational Experiments?** 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 3, 4, and Appendix C The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4 and Appendix D ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
yang-etal-2023-coupling
Coupling Large Language Models with Logic Programming for Robust and General Reasoning from Text
https://aclanthology.org/2023.findings-acl.321
While large language models (LLMs), such as GPT-3, appear to be robust and general, their reasoning ability is not at a level to compete with the best models trained for specific natural language reasoning problems. In this study, we observe that a large language model can serve as a highly effective few-shot semantic parser. It can convert natural language sentences into a logical form that serves as input for answer set programs, a logic-based declarative knowledge representation formalism. The combination results in a robust and general system that can handle multiple question-answering tasks without requiring retraining for each new task. It only needs a few examples to guide the LLM's adaptation to a specific task, along with reusable ASP knowledge modules that can be applied to multiple tasks. We demonstrate that this method achieves state-of-the-art performance on several NLP benchmarks, including bAbI, StepGame, CLUTRR, and gSCAN. Additionally, it successfully tackles robot planning tasks that an LLM alone fails to solve.
# Coupling Large Language Models With Logic Programming For Robust And General Reasoning From Text

Zhun Yang1, Adam Ishay1, Joohyung Lee1,2
1 Arizona State University {zyang90,aishay}@asu.edu
2 Samsung Research [email protected]

## Abstract

While large language models (LLMs), such as GPT-3, appear to be robust and general, their reasoning ability is not at a level to compete with the best models trained for specific natural language reasoning problems. In this study, we observe that a large language model can serve as a highly effective few-shot semantic parser. It can convert natural language sentences into a logical form that serves as input for answer set programs, a logic-based declarative knowledge representation formalism. The combination results in a robust and general system that can handle multiple question-answering tasks without requiring retraining for each new task. It only needs a few examples to guide the LLM's adaptation to a specific task, along with reusable ASP knowledge modules that can be applied to multiple tasks. We demonstrate that this method achieves state-of-the-art performance on several NLP benchmarks, including bAbI, StepGame, CLUTRR, and gSCAN. Additionally, it successfully tackles robot planning tasks that an LLM alone fails to solve.

## 1 Introduction

A typical way to handle a question-answering task is to train a neural network model on large training data and test it on similar data. Such models work well with linguistic variability and ambiguity but often learn statistical features and correlations rather than true reasoning (Ruder, 2021), which makes them not robust, poor at generalization, and difficult to interpret. Alternatively, transformer-based large language models (LLMs) have recently shown wide success on many downstream tasks, demonstrating general reasoning capability on diverse tasks without being retrained. However, when we restrict our attention to individual NLP reasoning benchmarks, they usually do not perform as well as state-of-the-art models despite various efforts to improve accuracy through prompt engineering (Wei et al., 2022; Zhou et al., 2022). Similarly, LLMs gained attention for plan generation for robots due to the rich semantic knowledge they acquired about the world (Ahn et al., 2022; Huang et al., 2022; Zeng et al., 2022). However, LLMs are known to perform shallow reasoning and cannot find complex plans (Valmeekam et al., 2022). In another context, Nye et al. (2021) note that while LLMs are good at System-1 thinking, their outputs are often inconsistent and incoherent. This is because LLMs are trained to predict subsequent words in a sequence and do not appear to have a deep understanding of concepts such as cause and effect, logic, and probability, which are important for reasoning. Nevertheless, we note that the rich semantic knowledge that LLMs possess makes them effective general-purpose few-shot semantic parsers that can convert linguistically variable natural language sentences into atomic facts that serve as input to logic programs. We also note that the fully declarative nature of answer set programs (Lifschitz, 2008; Brewka et al., 2011) makes them a good pair with the LLM semantic parsers, providing interpretable and explainable reasoning on the parsed result of the LLMs using background knowledge. Combining large language models and answer set programs leads to an attractive dual-process, neuro-symbolic reasoning that works across multiple QA tasks without retraining for individual tasks.
We tested this idea with several NLP benchmarks, bAbI (Weston et al., 2016), StepGame (Shi et al., 2022), CLUTRR (Sinha et al., 2019), and gSCAN (Ruis et al., 2020), by applying the same dual-system model and achieved state-of-the-art performance in all of them. Furthermore, the high accuracy and transparency allow us to easily identify the source of errors, making our system a useful dataset validation tool as well. In particular, we found a significant amount of errors in the original CLUTRR dataset that are hard to detect manually. While the new version of GPT-3 (Brown et al., 2020) (text-davinci-003) shows improvement over its predecessors, we observe that it also retains critical limitations. In the process, we develop prompt methods for semantic parsing to overcome some of them. The implementation of our method is publicly available online at https://github.com/azreasoners/LLM-ASP.

## 2 Preliminaries

## 2.1 Semantic Parsing And LLMs

Semantic parsing involves converting a natural language query or statement into a structured representation that a computer can understand and manipulate. Statistical methods have increased in popularity (Zelle and Mooney, 1996; Miller et al., 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007), and encoder-decoder models in particular have been widely used (Dong and Lapata, 2016; Jia and Liang, 2016; Kočiský et al., 2016). However, these statistical methods require annotated input and output pairs. Furthermore, machine learning models often fail to compositionally generalize to unseen data (Lake and Baroni, 2018). More recently, pre-trained language models have been applied to semantic parsing tasks (Liu et al., 2021), such as generating SQL queries, SPARQL queries, logical forms, or programs, from natural language, together with fine-tuning or prompt-tuning on pre-trained models, such as BART, RoBERTa and GPT-2 (Chen et al., 2020a; Shin et al., 2021; Schucher et al., 2022). With larger pre-trained networks, such as GPT-3, prompting appears to yield a reasonable semantic parser without the need for fine-tuning (Shin et al., 2021; Drozdov et al., 2022). Another line of related work is to apply pre-trained language models to relation extraction, the task of extracting semantic relationships from a text given two or more entities (Liu et al., 2021). Wang et al. (2022) do zero-shot relation extraction with pre-trained language models from the BERT family and GPT-2 variants. Zhou and Chen (2022) finetune BERT and RoBERTa models for the extraction of sentence-level relations. Chen et al. (2022) apply prompt-tuning to RoBERTa_LARGE for relation extraction. Similar to ours, Agrawal et al. (2022) use a few-shot prompt with GPT-3 for the extraction of clinical relations.

## 2.2 Dual-System Model

There is increasing interest in combining neural and symbolic systems (Marcus, 2018; Lamb et al., 2020; Sarker et al., 2021). Such dual-system models achieved new state-of-the-art results in visual question answering (Goldman et al., 2018; Sampat and Lee, 2018; Yi et al., 2019; Chen et al., 2020b; Ding et al., 2021). In the case of textual problems, to improve LLMs to generate more consistent and coherent sentences, Nye et al. (2021) suggest that generation be decomposed into two parts: candidate sentence generation by an LLM (system 1 thinking) and a logical pruning process (system 2 thinking) implemented via a separate symbolic module.
They demonstrate that this neurosymbolic, dual-process model requires less training data, achieves higher accuracy, and exhibits better generalization. However, the main limitation of their work is that the symbolic module is manually constructed in Python code for the specific task at hand, requiring substantial effort. Additionally, their Python symbolic module is not readily reusable or composable. Furthermore, their main results primarily focus on the problem of consistent text generation, rather than evaluating the method on the datasets and comparing it with existing models. This is because writing the world models in Python is not a scalable approach. In our work, we follow the idea presented in (Nye et al., 2021) but adopt logic programming in place of the System 2 process. We argue that this combination is much more appealing than the approach in (Nye et al., 2021), as it can achieve the promised results without the limitations mentioned above.

## 2.3 Answer Set Programming

Answer Set Programming (ASP) (Lifschitz, 2008; Brewka et al., 2011) is a declarative logic programming paradigm that has been shown to be effective in knowledge-intensive applications. It is based on the stable model (a.k.a. answer set) semantics of logic programs (Gelfond and Lifschitz, 1988), which can express causal reasoning, default reasoning, aggregates, and various other constraints. There are several efficient solvers, such as CLINGO, DLV, and WASP. We use CLINGO v5.6.0 as the answer set solver. For the language of CLINGO, we refer the reader to the textbook (Lifschitz, 2019) or the CLINGO manual.1 It is also known that classical logic-based action formalisms, such as the situation calculus (McCarthy and Hayes, 1969; Reiter, 2001) and the event calculus (Shanahan, 1995), can be formulated as answer set programs. For example, the following is one of the axioms of the Discrete Event Calculus stating the commonsense law of inertia, saying that fluent F holds at the next time if there is no action affecting it.

% (DEC5)
holds_at(F,T+1) :- timepoint(T), fluent(F), holds_at(F,T),
    -released_at(F,T+1), not terminated(F,T).

Such a rule is universal and applies to almost all objects. Answer set programs are also known to be *elaboration tolerant* (McCarthy, 1998). There has been work on modularizing knowledge bases in ASP, such as the module theorem (Oikarinen and Janhunen, 2006; Babb and Lee, 2012) and knowledge modules (Baral et al., 2006). While ASP has been widely applied to many reasoning problems, it has not been considered as much in reasoning with natural language text because its input is expected to be strictly in a logical form, giving little flexibility in accepting diverse forms of natural language input.

## 3 Our Method

We refer to our framework as [LLM]+ASP, where [LLM] denotes a large pre-trained network such as GPT-3, which we use as a semantic parser to generate input to the ASP reasoner. Specifically, we assume data instances of the form ⟨S, q, a⟩, where S is a context story in natural language, q is a natural language query associated with S, and a is the answer. We use an LLM to convert a problem description (that is, context S and query q) into atomic facts, which are then fed into the ASP solver along with background knowledge encoded as ASP rules. The output of the ASP solver is interpreted as the prediction for the given data instance. Figure 1 illustrates the inference flow in the context of StepGame.
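To make this inference flow concrete, the following is a minimal sketch of the pipeline written by us for illustration; `call_llm` is a hypothetical stand-in for a GPT-3-style completion call (not part of the authors' released code), while the symbolic step uses the clingo Python API.

```python
# Minimal sketch (ours) of the [LLM]+ASP pipeline: an LLM parses sentences
# into atomic facts, and clingo reasons over the facts plus a knowledge module.
import clingo

FEW_SHOT_PROMPT = """Please parse the following statement into facts.
Sentence: C is to the top right of D.
Semantic parse: top_right("C","D").
Sentence: {sentence}
Semantic parse:"""

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a few-shot GPT-3 completion call."""
    raise NotImplementedError

def parse_story(sentences):
    # System 1: few-shot semantic parsing, one sentence at a time.
    return [call_llm(FEW_SHOT_PROMPT.format(sentence=s)).strip()
            for s in sentences]

def predict(facts, knowledge_module):
    # System 2: symbolic reasoning with an ASP knowledge module.
    ctl = clingo.Control(["0"])                      # enumerate all answer sets
    ctl.add("base", [], "\n".join(facts) + "\n" + knowledge_module)
    ctl.ground([("base", [])])
    predictions = []
    ctl.solve(on_model=lambda m: predictions.append(
        [str(atom) for atom in m.symbols(shown=True)]))
    return predictions                               # e.g., [['answer(top_right)']]
```

In this sketch, `knowledge_module` would be an ASP program such as the location module of Section 3.2, together with a `#show answer/1.` directive so that only the predicted relation is returned.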
The pipeline is simple but general enough to be applied to various tasks without the need for retraining. It only requires replacing the few-shot prompts to the LLM and the ASP background knowledge with those suitable for the new tasks. By combining LLMs and ASP in this manner, we enable robust symbolic reasoning that can handle diverse and unprocessed textual input. The ASP knowledge modules remain unaffected by the diverse forms of input text that express the same facts. Our method does not rely on training datasets. Instead, a few examples that turn natural language sentences into atomic facts are sufficient to build a semantic parser due to the learned representations in LLMs. Furthermore, ASP knowledge modules can be reused for different tasks.

## 3.1 Prompts For Fact Extraction

We use GPT-3 to extract atomic facts from the story and query. Most of the time, giving several examples yields accurate semantic parsing. The following is an example prompt for bAbI.

Please parse the following statements into facts. The available keywords are: pickup, drop, and go.
Sentence: Max journeyed to the bathroom.
Semantic parse: go(Max, bathroom).
Sentence: Mary grabbed the football there.
Semantic parse: pickup(Mary, football).
...

We find that GPT-3 is highly tolerant of linguistic variability. For example, in StepGame, GPT-3 can turn all of the sentences below into the same atomic fact top_right("C","D").

C is to the top right of D.
C is to the right and above D at an angle of about 45 degrees.
C is at a 45 degree angle to D, in the upper righthand corner.
C is directly north east of D.
C is above D at 2 o'clock.

In the experiments to follow, we find that the following strategy works well for fact extraction.

1. In general, we find that if the information in a story (or query) can be extracted independently, parsing each sentence separately (using the same prompt multiple times) typically works better than parsing the whole story.

2. There is certain commonsense knowledge that GPT-3 is not able to leverage from the examples in the prompt. In this case, detailing the missing knowledge in the prompt could work. For example, in StepGame, clock numbers are used to denote cardinal directions, but GPT-3 couldn't translate them correctly even with a few examples in the prompt. It works after enumerating all cases ("12 denotes top, 1 and 2 denote top_right, 3 denotes right, . . .") in the prompt.

3. Semantic parsing tends to work better if we instruct GPT-3 to use a predicate name that better reflects the intended meaning of the sentence. For example, "A is there and B is at the 5 position of a clock face" is better turned into down_right(B,A) than top_left(A,B), although, logically speaking, the relations are symmetric.

The complete set of prompts for semantic parsing is given in Appendix C.

## 3.2 Knowledge Modules

Instead of constructing a minimal world model for each task in Python code (Nye et al., 2021), we use ASP knowledge modules. While some knowledge can be lengthy to describe in English, it can often be concisely expressed in ASP. For example, the **location** module contains rules for spatial reasoning in a 2D grid space and is used for bAbI, StepGame, and gSCAN. Below is the main rule in the **location** module, which computes the location (Xa,Ya) of object A from the location (Xb,Yb) of object B by adding the offsets (Dx,Dy) defined by the spatial relation R between A and B.
location(A, Xa, Ya) :- location(B, Xb, Yb), is(A, R, B),
                       offset(R, Dx, Dy), Xa=Xb+Dx, Ya=Yb+Dy.

The **location** module also includes 9 predefined offsets, e.g., offset(left,-1,0), that can be used to model multi-hop spatial relations between objects or the effects of a robot's moves in a 2D space. For example, queries in StepGame ask about the spatial relation R of object A to B. Using the **location** module, one can fix B's location to be (0,0) and compute the spatial relation R based on the location of A as follows.

location(B, 0, 0) :- query(A, B).
answer(R) :- query(A, B), location(A, X, Y), offset(R, Dx, Dy),
             Dx=-1: X<0; Dx=0: X=0; Dx=1: X>0;
             Dy=-1: Y<0; Dy=0: Y=0; Dy=1: Y>0.

The second rule above contains six conditional literals, among which Dx=-1: X<0 says that "Dx must be -1 if X<0." For example, if A's location (X,Y) is (-3,0), then (Dx,Dy) is (-1,0) and the answer R is left. Similar rules can also be applied to bAbI task 17, which asks if A is R of B. In the above rules, the relation R in, e.g., is(A,R,B), is a variable and can be substituted by any binary relation. Such a high-order representation turns out to be quite general and applicable to many tasks that query a relation or its arguments.

Figure 2 shows the knowledge modules used for each task in this paper, where DEC denotes the Discrete Event Calculus axioms from (Mueller, 2006; Lee and Palla, 2012). In this section, we explained the main rules in the **location** module. The complete ASP knowledge modules are given in Appendix E.
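To illustrate how these rules are used end to end, the following self-contained Python sketch feeds a few hypothetical extracted facts, together with a simplified version of the location module, to CLINGO through its Python API. The facts, the list of offsets, and the sign rules (written here as plain rules rather than conditional literals, for brevity) are our own simplification for illustration, not the module shipped with the paper (the full modules are in Appendix E).

```python
import clingo  # assumes the clingo Python package is installed

# Hypothetical facts an LLM might extract from a two-hop StepGame-style story:
# "C is to the top right of D. D is to the left of E. What is the relation of C to E?"
FACTS = """
is(c, top_right, d).
is(d, left, e).
query(c, e).
"""

# Simplified location knowledge module (a sketch, not the paper's released module).
LOCATION_MODULE = """
offset(top,0,1).         offset(down,0,-1).
offset(left,-1,0).       offset(right,1,0).
offset(top_left,-1,1).   offset(top_right,1,1).
offset(down_left,-1,-1). offset(down_right,1,-1).
offset(overlap,0,0).

location(B, 0, 0) :- query(_, B).
location(A, Xa, Ya) :- location(B, Xb, Yb), is(A, R, B),
                       offset(R, Dx, Dy), Xa = Xb + Dx, Ya = Yb + Dy.

% sign of the queried object's coordinates, written as separate rules for brevity
dx(-1) :- query(A,_), location(A,X,_), X < 0.
dx(0)  :- query(A,_), location(A,0,_).
dx(1)  :- query(A,_), location(A,X,_), X > 0.
dy(-1) :- query(A,_), location(A,_,Y), Y < 0.
dy(0)  :- query(A,_), location(A,_,0).
dy(1)  :- query(A,_), location(A,_,Y), Y > 0.
answer(R) :- dx(Dx), dy(Dy), offset(R, Dx, Dy).
#show answer/1.
"""

def solve(program: str) -> list:
    """Ground and solve an ASP program, returning the shown atoms of its answer set."""
    ctl = clingo.Control()
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    atoms = []
    ctl.solve(on_model=lambda m: atoms.extend(str(a) for a in m.symbols(shown=True)))
    return atoms

print(solve(FACTS + LOCATION_MODULE))  # prints ['answer(top)']: C is directly above E
```

Chaining the two offsets places C at (0,1) relative to E, so the derived answer is top; with different extracted facts, the same rules compute answers for arbitrarily long chains.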
## 4 Experiments

We apply the method in the previous section to four datasets.2 As a reminder, our approach involves few-shot in-context learning and does not require training. We use the same pipeline as shown in Figure 1, but with different prompts and knowledge modules for each dataset. For more detailed information about the experimental settings, please refer to the appendix.

2 Due to space restriction, we put the experiments about Pick&Place in Appendix A.

## 4.1 bAbI

The bAbI dataset (Weston et al., 2016) is a collection of 20 QA tasks that have been widely applied to test various natural language reasoning problems, such as deduction, path-finding, spatial reasoning, and counting. State-of-the-art models, such as the self-attentive associative-based two-memory model (STM) (Le et al., 2020) and Query-Reduction Networks (QRN) (Seo et al., 2017), achieve close to 100% accuracy after training with 10k instances, while QRN's accuracy drops to 90% with 1k training instances.

We first designed two GPT-3 baselines, one with few-shot prompts (containing a few example questions and answers) and the other with Chain-of-Thought (CoT) prompts (Wei et al., 2022), which state the relevant information needed to derive the answer. We also apply GPT-3+ASP. For example, we use GPT-3 to turn "the kitchen is south of the bathroom" into an atomic fact is(kitchen, southOf, bathroom) by giving a few examples of the same kind. Regarding knowledge modules, Tasks 1–3, 6–9, 10–14, and 19 are about events over time and use the DEC knowledge module. Tasks 4, 17, and 19 require various domain knowledge modules, such as the **location** and **action** knowledge modules. The remaining tasks do not require domain knowledge and rely only on simple rules to extract answers from parsed facts.

Table 1 compares our method with the two GPT-3 baselines, as well as two state-of-the-art methods on the bAbI datasets, STM and QRN. Interestingly, the new GPT-3 model, text-davinci-003 (denoted GPT-3(d3)), with basic few-shot prompting achieves 80.34% accuracy, while CoT improves it to 86.18%. GPT-3(d3)+ASP achieves state-of-the-art performance on bAbI with 99.99% average accuracy across all tasks, producing only two answers that disagree with the labels in the dataset. It turns out that the two questions are malformed since the answers are ambiguous, and our model's answers can be considered correct.3

3 See Appendix F.1 for the examples.

## 4.2 StepGame

Although bAbI has been extensively tested, it has several problems. Shi et al. (2022) note data leakage between the train and test sets, where named entities are fixed and only a small number of relations are used. Palm et al. (2018) point out that models do not need multi-hop reasoning to solve the bAbI dataset. To address these issues, Shi et al. (2022) propose the StepGame dataset. It is a contextual QA dataset in which the system is required to interpret a story S about spatial relationships among several entities and answer a query q about the relative position of two of those entities, as illustrated in Figure 1. Unlike the bAbI dataset, StepGame uses a large number of named entities and requires multi-hop reasoning with as many as 10 reasoning steps.

In the basic form of the StepGame dataset, each story consists of k sentences that describe k spatial relationships between k + 1 entities in a chain-like shape. In this paper, we evaluate the StepGame dataset with noise, where the original chain is extended with noise statements by branching out with new entities and relations. Similarly to bAbI, we designed two GPT-3 baselines and applied our method to the StepGame dataset. More details on the prompts are available in Appendix C.2.

For each k ∈ {1, . . . , 10}, the StepGame dataset with noise consists of 30,000 training samples, 1,000 validation samples, and 10,000 test samples. To save the API cost for GPT-3, we only evaluated the two GPT-3 baselines on the first 100 test samples and evaluated our method on the first 1,000 test samples for each k ∈ {1, . . . , 10}. Table 2 compares the accuracy of our method with the two GPT-3 baselines and the current methods, i.e., RN (Santoro et al., 2017), RRN (Palm et al., 2018), UT (Dehghani et al., 2018), STM (Le et al., 2020),
TPR-RNN (Schlag and Schmidhuber, 2018), TP-MANN (Shi et al., 2022), and SynSup (with pretraining on the SPARTUN dataset) (Mirzaee and Kordjamshidi, 2022).

| Task | GPT-3(d3) Few-Shot | GPT-3(d3) CoT | GPT-3(d3)+ASP | STM (Le et al., 2020) (10k train) | QRN (Seo et al., 2017) (10k train) | QRN (1k train) |
|---|---|---|---|---|---|---|
| 1: Single supporting fact | 98.4 | 97.3 | 100.0 | 100.0 ± 0.0 | 100.0 | 100.0 |
| 2: Two supporting facts | 60.8 | 72.2 | 100.0 | 99.79 ± 0.23 | 100.0 | 99.3 |
| 3: Three supporting facts | 39.6 | 54.1 | 100.0 | 97.87 ± 1.14 | 100.0 | 94.3 |
| 4: Two arg relations | 60.4 | 72.7 | 100.0 | 100.0 ± 0.0 | 100.0 | 100.0 |
| 5: Three arg relations | 88.2 | 89.1 | 99.8 | 99.43 ± 0.18 | 100.0 | 98.9 |
| 6: Yes/no questions | 97.4 | 97.3 | 100.0 | 100.0 ± 0.0 | 100.0 | 99.1 |
| 7: Counting | 90.6 | 88.6 | 100.0 | 99.19 ± 0.27 | 100.0 | 90.4 |
| 8: Lists/sets | 96.2 | 97.1 | 100.0 | 99.88 ± 0.07 | 99.6 | 94.4 |
| 9: Simple negation | 98.4 | 98.2 | 100.0 | 100.0 ± 0.0 | 100.0 | 100.0 |
| 10: Indefinite knowledge | 93.6 | 92.4 | 100.0 | 99.97 ± 0.06 | 100.0 | 100.0 |
| 11: Basic coreference | 93.6 | 99.2 | 100.0 | 99.99 ± 0.03 | 100.0 | 100.0 |
| 12: Conjunction | 88.6 | 88.8 | 100.0 | 99.96 ± 0.05 | 100.0 | 100.0 |
| 13: Compound coreference | 98.4 | 97.3 | 100.0 | 99.99 ± 0.03 | 100.0 | 100.0 |
| 14: Time reasoning | 78.0 | 91.5 | 100.0 | 99.84 ± 0.17 | 99.9 | 99.2 |
| 15: Basic deduction | 57.0 | 95.0 | 100.0 | 100.0 ± 0.0 | 100.0 | 100.0 |
| 16: Basic induction | 90.8 | 97.5 | 100.0 | 99.71 ± 0.15 | 100.0 | 47.0 |
| 17: Positional reasoning | 66.0 | 70.8 | 100.0 | 98.82 ± 1.07 | 95.9 | 65.6 |
| 18: Size reasoning | 89.8 | 97.1 | 100.0 | 99.73 ± 0.28 | 99.3 | 92.1 |
| 19: Path finding | 21.0 | 28.7 | 100.0 | 97.94 ± 2.79 | 99.9 | 21.3 |
| 20: Agents motivations | 100.0 | 100.0 | 100.0 | 100.0 ± 0.0 | 100.0 | 99.8 |
| Average | 80.34 | 86.18 | 99.99 | 99.85 | 99.70 | 90.1 |

Table 1: Test accuracy on 20 tasks in bAbI data

| Method | k=1 | k=2 | k=3 | k=4 | k=5 |
|---|---|---|---|---|---|
| RN | 22.6 | 17.1 | 15.1 | 12.8 | 11.5 |
| RRN | 24.1 | 20.0 | 16.0 | 13.2 | 12.3 |
| UT | 45.1 | 28.4 | 17.4 | 14.1 | 13.5 |
| STM | 53.4 | 36.0 | 23.0 | 18.5 | 15.1 |
| TPR-RNN | 70.3 | 46.0 | 36.1 | 26.8 | 24.8 |
| TP-MANN | 85.8 | 60.3 | 50.2 | 37.5 | 31.3 |
| SynSup | **98.6** | **95.0** | **92.0** | 79.1 | 70.3 |
| Few-Shot (d3) | 55.0 | 37.0 | 25.0 | 30.0 | 32.0 |
| CoT (d3) | 61.0 | 45.0 | 30.0 | 35.0 | 35.0 |
| GPT-3(c1)+ASP | 44.7 | 38.8 | 40.5 | 58.8 | 62.4 |
| GPT-3(d2)+ASP | 92.6 | 89.9 | 89.1 | **93.8** | **92.9** |

| Method | k=6 | k=7 | k=8 | k=9 | k=10 |
|---|---|---|---|---|---|
| RN | 11.1 | 11.5 | 11.2 | 11.1 | 11.3 |
| RRN | 11.6 | 11.4 | 11.8 | 11.2 | 11.7 |
| UT | 12.7 | 12.1 | 11.4 | 11.4 | 11.7 |
| STM | 13.8 | 12.6 | 11.5 | 11.3 | 11.8 |
| TPR-RNN | 22.3 | 19.9 | 15.5 | 13.0 | 12.7 |
| TP-MANN | 28.5 | 26.5 | 23.7 | 22.5 | 21.5 |
| SynSup | 63.4 | 58.7 | 52.1 | 48.4 | 45.7 |
| Few-Shot (d3) | 29.0 | 21.0 | 22.0 | 34.0 | 31.0 |
| CoT (d3) | 27.0 | 22.0 | 24.0 | 23.0 | 25.0 |
| GPT-3(c1)+ASP | 57.4 | 56.2 | 58.0 | 56.5 | 54.1 |
| GPT-3(d2)+ASP | **91.6** | **91.2** | **90.4** | **89.0** | **88.3** |

Table 2: Test accuracy on the StepGame test dataset, where (c1), (d2), and (d3) denote the text-curie-001, text-davinci-002, and text-davinci-003 models, respectively

Surprisingly, the GPT-3 baselines could achieve accuracy comparable to other models (except for SynSup) for large k values. CoT does not always help and decreases the accuracy for large values of k. This may be because there is a higher chance of making a mistake in a long chain of thought. GPT-3(d2)+ASP outperforms all state-of-the-art methods and the GPT-3 baselines by a large margin for k = 4, . . . , 10.
Although SynSup achieves a higher accuracy for k = 1, 2, 3, this is misleading due to errors in the dataset. As we analyze below, about 10.7% of the labels in the data are wrong. The SynSup training makes the model learn to make the same mistakes on the test dataset, which is why its performance looks better than ours.

The modular design of GPT-3+ASP enables us to analyze the reasons behind its wrong predictions. We collected the first 100 data instances for each k ∈ {1, . . . , 10} and manually analyzed the predictions on them. Among the 1,000 predictions of GPT-3(d2)+ASP, 108 disagree with the dataset labels, and we found that 107 of those have errors in the labels. For example, given the story and question "J and Y are horizontal and J is to the right of Y. What is the relation of the agent Y with the agent J?", the label in the dataset is "right" while the correct relation should be "left".4 Recall that our method is interpretable, so we could easily identify the source of errors.

4 The remaining disagreeing case is due to text-davinci-002's mistake. For the sentence "if E is the center of a clock face, H is located between 2 and 3," text-davinci-002 turns it into "right(H, E)" whereas text-davinci-003 turns it into "top_right(H, E)" correctly. To save API cost for GPT-3, we did not re-run the whole experiments with text-davinci-003.

## 4.3 CLUTRR

CLUTRR (Sinha et al., 2019) is a contextual QA dataset that requires inferring family relationships from a story. Sentences in CLUTRR are generated using 6k template narratives written by Amazon Mechanical Turk crowd-workers, and thus are more realistic and complex compared to those in bAbI and StepGame. CLUTRR consists of two subtasks: *systematic generalization*, which evaluates stories containing unseen combinations of logical rules (Minervini et al., 2020; Bergen et al., 2021), and *robust reasoning*, which evaluates stories with noisy descriptions (Tian et al., 2021). Since we use ASP for logical reasoning, which easily works for any combination of logical rules, we focus on the robust reasoning task.

| Method | CLUTRR ver. | clean | supp. | irre. | disc. |
|---------------|--------|---------|---------|---------|---------|
| RN | 1.0 | 49 | 68 | 50 | 45 |
| MAC | 1.0 | 63 | 65 | 56 | 40 |
| Bi-att | 1.0 | 58 | 67 | 51 | 57 |
| GSM | 1.0 | 68.5 | 48.6 | 62.9 | 52.8 |
| GPT-3(d3)+ASP | 1.0 | 68.5 | 82.8 | 74.8 | 67.4 |
| GPT-3(d3)+ASP | 1.3 | 97.0 | 84.0 | 92.0 | 90.0 |

Table 3: Test accuracy on 4 categories in the CLUTRR 1.0 and CLUTRR 1.3 datasets

Table 3 compares our method with RN (Santoro et al., 2017), MAC (Hudson and Manning, 2018), BiLSTM-attention (Sinha et al., 2019), and GSM (Tian et al., 2021) on the original CLUTRR dataset, namely CLUTRR 1.0, in four categories of data instances: clean, supporting, irrelevant, and disconnected (Sinha et al., 2019). Except for our method, all other models are trained on the corresponding category of CLUTRR training data. Although our method achieves similar or higher accuracies in all categories, they are still much lower than we expected. We found that such low accuracy is due to clear errors in CLUTRR, originating mostly from errors in the template narratives or from generated family graphs that violate common sense. The authors of CLUTRR recently published the CLUTRR 1.3 code to partially resolve this issue.5 With the new code, we created a new dataset, namely CLUTRR 1.3, consisting of 400 data instances with 100 for each of the four categories.

5 https://github.com/facebookresearch/clutrr/tree/develop
The last row in Table 3 shows that our method actually performs well on realistic sentences in CLUTRR. Indeed, with our method (using text-davinci-003) on the CLUTRR 1.3 dataset, 363 out of 400 predictions are correct; 16 are still wrong due to data mistakes (e.g., the label says "Maryann has an uncle Bruno" while the noise sentence added to the story is "Maryann told her son Bruno to give the dog a bath"), and 21 are wrong due to GPT-3's parsing mistakes (e.g., GPT-3 turned the sentence "Watt and Celestine asked their mother, if they could go play in the pool" into mother("Watt", "Celestine")). Since the sentences in CLUTRR 1.3 are more realistic than those in bAbI and StepGame, GPT-3 makes more mistakes even after reasonable prompt-engineering efforts. More details on data errors and GPT-3 errors are available in Appendix F.2 and Appendix D.

| Method | clean | supp. | irre. | disc. |
|---------------|---------|---------|---------|---------|
| DeepProbLog | 100 | 100 | 100 | 94 |
| GPT-3(d2)+ASP | 100 | 100 | 97 | 97 |
| GPT-3(d3)+ASP | 100 | 100 | 100 | 100 |

Table 4: Test accuracy on the CLUTRR-S dataset

We also evaluated our method on a simpler and cleaner variant of the CLUTRR dataset, namely CLUTRR-S, which was used as a benchmark problem for the state-of-the-art neuro-symbolic approach DeepProbLog (Manhaeve et al., 2021). Table 4 compares the accuracy of our method and DeepProbLog in all 4 categories of test data. GPT-3(d3)+ASP achieves 100% accuracy, outperforming DeepProbLog without the need for training.

Remark: Due to the modular structure, our method could serve as a dataset validation tool to detect errors in a dataset. We detected 107 wrong data instances in the first 1,000 data instances of StepGame and 16 wrong data instances in the 400 data instances of CLUTRR 1.3.

## 4.4 gSCAN

The gSCAN dataset (Ruis et al., 2020) poses a task in which an agent must execute action sequences to achieve a goal (specified by a command in a natural language sentence) in a grid-based visual navigation environment. The dataset consists of two tasks, and we evaluate our method on the data splits from the compositional generalization task. There is one shared training set, one test set (split A) randomly sampled from the same distribution as the training set, and seven test sets (splits B to H) containing only data instances held out from the training set in different ways.

In the gSCAN dataset, each data instance is a tuple ⟨G, q, a⟩, where G is the grid configuration (in JSON format) describing the size of the grid, the location and direction of the agent, and the location and features of each object in the grid; q is a query (e.g., "pull a yellow small cylinder hesitantly"); and a is the answer in the form of a sequence of actions (e.g., "turn right, walk, stay, pull, stay, pull, stay"). For each data instance, we (i) use a Python script to extract atomic facts (e.g., pos(agent,(2,3))) from the grid configuration G; (ii) parse the query q into atomic facts (e.g., query(pull), queryDesc(yellow), while(hesitantly)) using GPT-3; and (iii) predict the sequence of actions for this query using ASP. The details of the prompts are given in Appendix C.4.
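Step (i) above is a purely mechanical conversion, and the following is a small sketch of what such an extraction script could look like. The JSON field names (grid_size, agent, objects, position, etc.) and all fact names other than pos(agent,(2,3)) are assumptions made for illustration, not the actual gSCAN file schema or the authors' script.

```python
import json

def grid_to_facts(grid_json: str) -> str:
    """Convert a gSCAN-style grid configuration into atomic ASP facts (sketch only)."""
    g = json.loads(grid_json)
    facts = [f"grid_size({g['grid_size']})."]
    ax, ay = g["agent"]["position"]
    facts.append(f"pos(agent,({ax},{ay})).")
    facts.append(f"dir(agent,{g['agent']['direction']}).")
    for i, obj in enumerate(g["objects"]):
        ox, oy = obj["position"]
        facts.append(f"pos(object({i}),({ox},{oy})).")
        # one fact per attribute so the ASP rules can match the query descriptors
        for attr in ("shape", "color", "size"):
            facts.append(f"{attr}(object({i}),{obj[attr]}).")
    return "\n".join(facts)

# Hypothetical grid configuration, for illustration only.
example = json.dumps({
    "grid_size": 6,
    "agent": {"position": [2, 3], "direction": "east"},
    "objects": [
        {"position": [4, 1], "shape": "cylinder", "color": "yellow", "size": 2},
    ],
})
print(grid_to_facts(example))
# e.g., grid_size(6). pos(agent,(2,3)). dir(agent,east). pos(object(0),(4,1)). ...
```

These facts, together with the GPT-3-parsed query facts from step (ii), are what the ASP rules in step (iii) operate on to produce the action sequence.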
| Method | A | B | C | D |
|---------------|-------|-------|-------|-------|
| GECA | 87.60 | 34.92 | 78.77 | 0.00 |
| DualSys | 74.7 | 81.3 | 78.1 | 0.01 |
| Vilbert+CMA | 99.95 | 99.90 | 99.25 | 0.00 |
| GPT-3(c1)+ASP | 98.30 | 100 | 100 | 100 |
| GPT-3(d2)+ASP | 100 | 100 | 100 | 100 |

| Method | E | F | G | H |
|---------------|-------|-------|-------|-------|
| GECA | 33.19 | 85.99 | 0.00 | 11.83 |
| DualSys | 53.6 | 76.2 | 0.0 | 21.8 |
| Vilbert+CMA | 99.02 | 99.98 | 0.00 | 22.16 |
| GPT-3(c1)+ASP | 100 | 100 | 100 | 100 |
| GPT-3(d2)+ASP | 100 | 100 | 100 | 100 |

Table 5: Test accuracy on the gSCAN dataset

Table 5 compares the accuracy of our method and the state-of-the-art methods, i.e., GECA (Ruis et al., 2020), DualSys (Nye et al., 2021), and Vilbert+CMA (Qiu et al., 2021), on the gSCAN test dataset in eight splits. To save API cost for GPT-3, we only evaluated the first 1,000 data instances of each split. With text-davinci-002, our method GPT-3+ASP achieves 100% accuracy. With text-curie-001, the accuracy is slightly lower, making 17 errors in split A. The errors are of two kinds: the language model fails to extract adverbs in the correct format for 11 data instances (e.g., GPT-3 responded queryDesc(while spinning) instead of while(spinning)) and did not ground the last word in a query for 6 data instances (e.g., for the query walk to a small square, GPT-3 missed an atomic fact queryDesc(square)). Once the parsed results are correct, ASP does not make a mistake in producing plans.

## 4.5 Findings

The following summarizes the findings of the experimental evaluation.

- Our experiments confirm that LLMs like GPT-3 are still not good at multi-step reasoning despite the various prompts we tried. Chain-of-Thought is less likely to improve accuracy when a long chain of thought is required.
- On the other hand, LLMs are surprisingly good at turning a variety of expressions into a "canonical form" of information extraction. This in turn allows ASP knowledge modules to be isolated from linguistic variability in the input.
- Even for generating simple atomic facts, larger models tend to perform better. For example, in StepGame and gSCAN, text-curie-001 performs significantly worse compared to text-davinci-002 (Tables 2 and 5).
- The total amount of knowledge that needs to be encoded for all of the above datasets is not too large. This is in part due to the fact that GPT-3 "normalized" various forms of input sentences for ASP to process and that knowledge modules could be reused across different datasets.
- The modular design of our approach makes it possible to locate the root cause of each failed prediction in the training data and improve upon it. There are three sources of errors: semantic parsing in LLMs, symbolic constraints, and the dataset itself, and we can resolve the first two issues by improving the prompts and updating the constraints, respectively.
- Our framework could serve as a few-shot dataset justifier and corrector. Among all predictions by our method that do not align with the labels, almost all of them (with only a few exceptions discussed in the paper) are due to errors in the dataset.

## 5 Conclusion

Symbolic logic programming was previously considered limited in its ability to reason from text due to its inability to handle varied and ambiguous linguistic expressions. However, combining it with a large language model that has learned distributed representations helps alleviate this problem. The method not only achieves high accuracy but also produces interpretable results, as the source of the errors can be identified.
It is also general; by using pre-trained networks with few-shot prompts and reusable knowledge modules, adapting to a new domain does not require extensive training. The knowledge modules used in our experiments are reusable. For the above experiments, the modules are relatively simple to write, as are the prompts for parsing natural language for LLMs. However, acquiring this kind of knowledge on a massive scale is also an important line of research (Liu and Singh, 2004; Bosselut et al., 2019; Hwang et al., 2021) that needs to be combined. In addition, it is possible to use LLM's code generation capability (Chen et al., 2021) to generate logic program rules, which we leave for future work. One may think that the logic rules are too rigid. However, there are many weighted or probabilistic rules that can be defeated (Richardson and Domingos, 2006; Fierens et al., 2013; Lee and Wang, 2018). They could be used for more realistic settings, but for the benchmark problems above, they were not needed. ## Ethical Considerations All datasets used in this paper are publicly available. For CLUTRR dataset, the gender information is essential to tell if, e.g., A is B's uncle or niece. We used GPT-3 to predict the genders of persons in each story. Since each story is systematically generated using sampled common first names and sampled sentence templates, it does not reveal any identity. As mentioned, the original CLUTRR dataset had some errors, and we describe carefully the codes and settings of the generated CLUTRR 1.3 dataset in Appendix B.1. ## Limitations The current work requires that knowledge modules be written by hand. Commonly used axioms, such as general knowledge like the commonsense law of inertia expressed by event calculus, can be reused easily, but there are vast amounts of other commonsense knowledge that are not easy to obtain. LLMs could be used to supply this information, but we have not tried. Knowledge graphs, such as ConceptNet (Liu and Singh, 2004), COMET (Bosselut et al., 2019) and ATOMIC (Hwang et al., 2021), can be utilized to populate ASP rules. Like code models, we expect that LLMs could generate ASP code, which we leave for future work. Also, when using large language models, despite various efforts, sometimes it is not understandable why they do not behave as expected. ## Acknowledgements This work was partially supported by the National Science Foundation under Grant IIS-2006747. ## References Monica Agrawal, Stefan Hegselmann, Hunter Lang, Yoon Kim, and David Sontag. 2022. Large language models are few-shot clinical information extractors. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, page 1998–2022. Association for Computational Linguistics. Michael Ahn, Anthony Brohan, Yevgen Chebotar, Chelsea Finn, Karol Hausman, Alexander Herzog, Daniel Ho, Julian Ibarz, Alex Irpan, Eric Jang, Ryan Julian, et al. 2022. Do as I can, not as I say: Grounding language in robotic affordances. In 6th Annual Conference on Robot Learning. Joseph Babb and Joohyung Lee. 2012. Module theorem for the general theory of stable models. Theory and Practice of Logic Programming, 12(4-5):719–735. Chitta Baral, Juraj Dzifcak, and Hiro Takahashi. 2006. Macros, macro calls and use of ensembles in modular answer set programming. In International Conference on Logic Programming, pages 376–390. Springer. Leon Bergen, Timothy O'Donnell, and Dzmitry Bahdanau. 2021. Systematic generalization with edge transformers. 
*Advances in Neural Information Processing Systems*, 34:1390–1402. Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. Comet: Commonsense transformers for knowledge graph construction. In *Association for Computational Linguistics (ACL)*. Gerhard Brewka, Ilkka Niemelä, and Miroslaw Truszczynski. 2011. Answer set programming at a glance. *Communications of the ACM*, 54(12):92– 103. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. Xiang Chen, Ningyu Zhang, Xin Xie, Shumin Deng, Yunzhi Yao, Chuanqi Tan, Fei Huang, Luo Si, and Huajun Chen. 2022. Knowprompt: Knowledgeaware prompt-tuning with synergistic optimization for relation extraction. In *Proceedings of the ACM* Web Conference 2022, pages 2778–2788. Xilun Chen, Asish Ghoshal, Yashar Mehdad, Luke Zettlemoyer, and Sonal Gupta. 2020a. Low-resource domain adaptation for compositional task-oriented semantic parsing. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 5090–5100. Zhenfang Chen, Jiayuan Mao, Jiajun Wu, KwanYee Kenneth Wong, Joshua B Tenenbaum, and Chuang Gan. 2020b. Grounding physical concepts of objects and events through dynamic visual reasoning. In International Conference on Learning Representations. Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. 2018. Universal transformers. In *International Conference on Learning Representations*. Mingyu Ding, Zhenfang Chen, Tao Du, Ping Luo, Josh Tenenbaum, and Chuang Gan. 2021. Dynamic visual reasoning by learning differentiable physics models from video and language. *Advances in Neural Information Processing Systems*, 34. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In *54th Annual Meeting of the Association for Computational Linguistics*, pages 33–43. Association for Computational Linguistics (ACL). Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, and Denny Zhou. 2022. Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003. Daan Fierens, Guy Van den Broeck, Joris Renkens, Dimitar Shterionov, Bernd Gutmann, Ingo Thon, Gerda Janssens, and Luc De Raedt. 2013. Inference and learning in probabilistic logic programs using weighted boolean formulas. *Theory and Practice of* Logic Programming, pages 1–44. Michael Gelfond and Vladimir Lifschitz. 1988. The stable model semantics for logic programming. In Proceedings of International Logic Programming Conference and Symposium, pages 1070–1080. MIT Press. Omer Goldman, Veronica Latcinnik, Ehud Nave, Amir Globerson, and Jonathan Berant. 2018. Weakly supervised semantic parsing with abstract examples. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1809–1819. 
Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, Pierre Sermanet, Tomas Jackson, Noah Brown, Linda Luu, Sergey Levine, Karol Hausman, and brian ichter. 2022. Inner monologue: Embodied reasoning through planning with language models. In 6th Annual Conference on Robot Learning. Drew A Hudson and Christopher D Manning. 2018. Compositional attention networks for machine reasoning. In International Conference on Learning Representations. Jena D Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, and Yejin Choi. 2021. (comet-) atomic 2020: On symbolic and neural commonsense knowledge graphs. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 6384–6392. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12–22. Tomáš Kocisk ˇ y, Gábor Melis, Edward Grefenstette, ` Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. 2016. Semantic parsing with semi-supervised sequential autoencoders. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pages 1078– 1087. Brenden Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In *International conference on machine learning*, pages 2873–2882. PMLR. Luis C Lamb, Artur Garcez, Marco Gori, Marcelo Prates, Pedro Avelar, and Moshe Vardi. 2020. Graph neural networks meet neural-symbolic computing: A survey and perspective. In Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), pages 4877–4884. Hung Le, Truyen Tran, and Svetha Venkatesh. 2020. Self-attentive associative memory. In *International* Conference on Machine Learning, pages 5682–5691. PMLR. Joohyung Lee and Ravi Palla. 2012. Reformulating the situation calculus and the event calculus in the general theory of stable models and in answer set programming. Journal of Artificial Inteligence Research (JAIR), 43:571–620. Joohyung Lee and Yi Wang. 2018. Weight learning in a probabilistic extension of answer set programs. In Proceedings of International Conference on Principles of Knowledge Representation and Reasoning (KR), pages 22–31. Vladimir Lifschitz. 2008. What is answer set programming? In *Proceedings of the AAAI Conference on* Artificial Intelligence, pages 1594–1597. MIT Press. Vladimir Lifschitz. 2019. *Answer set programming*. Springer Heidelberg. Hugo Liu and Push Singh. 2004. Conceptnet—a practical commonsense reasoning tool-kit. *BT technology* journal, 22(4):211–226. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys (CSUR). Robin Manhaeve, Sebastijan Dumanciˇ c, Angelika Kim- ´ mig, Thomas Demeester, and Luc De Raedt. 2021. Neural probabilistic logic programming in deepproblog. *Artificial Intelligence*, 298:103504. Gary Marcus. 2018. Deep learning: A critical appraisal. arXiv preprint arXiv:1801.00631. John McCarthy. 1998. Elaboration tolerance. In *Working Papers of the Fourth Symposium on Logical Formalizations of Commonsense Reasoning*. John McCarthy and Patrick Hayes. 1969. Some philosophical problems from the standpoint of artificial intelligence. In B. 
Meltzer and D. Michie, editors, Machine Intelligence, volume 4, pages 463–502. Edinburgh University Press, Edinburgh. Scott Miller, David Stallard, Robert Bobrow, and Richard Schwartz. 1996. A fully statistical approach to natural language interfaces. In *34th Annual Meeting of the Association for Computational Linguistics*, pages 55–61. Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp, Edward Grefenstette, and Tim Rocktäschel. 2020. Learning reasoning strategies in end-to-end differentiable proving. In International Conference on Machine Learning, pages 6938–6949. PMLR. Roshanak Mirzaee and Parisa Kordjamshidi. 2022. Transfer learning with synthetic corpora for spatial role labeling and reasoning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, page 6148–6165. Association for Computational Linguistics. Erik Mueller. 2006. *Commonsense reasoning*. Elsevier. Maxwell Nye, Michael Tessler, Josh Tenenbaum, and Brenden M Lake. 2021. Improving coherence and consistency in neural sequence models with dualsystem, neuro-symbolic reasoning. *Advances in* Neural Information Processing Systems, 34:25192– 25204. Emilia Oikarinen and Tomi Janhunen. 2006. Modular equivalence for normal logic programs. In *17th European Conference on Artificial Intelligence(ECAI)*, pages 412–416. Rasmus Palm, Ulrich Paquet, and Ole Winther. 2018. Recurrent relational networks. In Proceedings of Advances in Neural Information Processing Systems, pages 3368–3378. Linlu Qiu, Hexiang Hu, Bowen Zhang, Peter Shaw, and Fei Sha. 2021. Systematic generalization on gscan: What is nearly solved and what is next? In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2180–2188. Raymond Reiter. 2001. Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems. MIT Press. Matthew Richardson and Pedro Domingos. 2006. Markov logic networks. *Machine Learning*, 62(12):107–136. Sebastian Ruder. 2021. Challenges and Opportunities in NLP Benchmarking. http://ruder.io/ nlp-benchmarking. Laura Ruis, Jacob Andreas, Marco Baroni, Diane Bouchacourt, and Brenden M Lake. 2020. A benchmark for systematic generalization in grounded language understanding. *Advances in neural information processing systems*, 33:19861–19872. Shailaja Sampat and Joohyung Lee. 2018. A modelbased approach to visual reasoning on cnlvr dataset. In Sixteenth International Conference on Principles of Knowledge Representation and Reasoning. Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. 2017. A simple neural network module for relational reasoning. In *Advances in* neural information processing systems, pages 4967– 4976. Md Kamruzzaman Sarker, Lu Zhou, Aaron Eberhart, and Pascal Hitzler. 2021. Neuro-symbolic artificial intelligence. *AI Communications*, pages 1–13. Imanol Schlag and Jürgen Schmidhuber. 2018. Learning to reason with third order tensor products. *Advances in neural information processing systems*, 31. Nathan Schucher, Siva Reddy, and Harm de Vries. 2022. The power of prompt tuning for low-resource semantic parsing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 148–156. Min Joon Seo, Sewon Min, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Query-reduction networks for question answering. 
In *5th International Conference* on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Murray Shanahan. 1995. A circumscriptive calculus of events. *Artif. Intell.*, 77(2):249–284. Zhengxiang Shi, Qiang Zhang, and Aldo Lipani. 2022. Stepgame: A new benchmark for robust multi-hop spatial reasoning in texts. *Association for the Advancement of Artificial Intelligence*. Richard Shin, Christopher Lin, Sam Thomson, Charles Chen Jr, Subhro Roy, Emmanouil Antonios Platanios, Adam Pauls, Dan Klein, Jason Eisner, and Benjamin Van Durme. 2021. Constrained language models yield few-shot semantic parsers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7699–7715. Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, and William L Hamilton. 2019. Clutrr: A diagnostic benchmark for inductive reasoning from text. In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4506–4515. Jidong Tian, Yitian Li, Wenqing Chen, HE Hao, and Yaohui Jin. 2021. A generative-symbolic model for logical reasoning in nlu. In *Is Neuro-Symbolic SOTA* still a myth for Natural Language Inference? The first workshop. Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. 2022. Large language models still can't plan (a benchmark for LLMs on planning and reasoning about change). In *NeurIPS* 2022 Foundation Models for Decision Making Workshop. Chenguang Wang, Xiao Liu, and Dawn Song. 2022. Ielm: An open information extraction benchmark for pre-trained language models. In *Proceedings of the* 2022 Conference on Empirical Methods in Natural Language Processing, page 8417–8437. Association for Computational Linguistics. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems. Jason Weston, Antoine Bordes, Sumit Chopra, and Tomás Mikolov. 2016. Towards ai-complete question answering: A set of prerequisite toy tasks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. Yuk Wah Wong and Raymond Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In *Proceedings of the 45th Annual* Meeting of the Association of Computational Linguistics, pages 960–967. Kexin Yi, Chuang Gan, Yunzhu Li, Pushmeet Kohli, Jiajun Wu, Antonio Torralba, and Joshua B Tenenbaum. 2019. CLEVRER: Collision events for video representation and reasoning. In *ICLR*. John M Zelle and Raymond J Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the national conference on artificial intelligence, pages 1050–1055. Andy Zeng, Adrian Wong, Stefan Welker, Krzysztof Choromanski, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, et al. 2022. Socratic models: Composing zero-shot multimodal reasoning with language. *arXiv preprint arXiv:2204.00598*. Luke S Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: structured classification with probabilistic categorial grammars. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, pages 658–666. 
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. 2022. Least-to-most prompting enables complex reasoning in large language models. arXiv preprint arXiv:2205.10625.

Wenxuan Zhou and Muhao Chen. 2022. An improved baseline for sentence-level relation extraction. In *Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)*, pages 161–168, Online only. Association for Computational Linguistics.

## Appendix

Section A presents another experiment with robot planning. Section B discusses more details about how we generated the CLUTRR dataset and the experimental results on CLUTRR 1.0. Section C presents the GPT-3 prompts for semantic parsing. Section D enumerates the errors made by GPT-3 in semantic parsing. Section E presents the ASP knowledge modules we used for the experiments. Section F enumerates the errors in the datasets.

For bAbI, the prompts for the baseline few-shot prompting can be found in the directory bAbI_baseline/example_prompts, while the prompts for chain-of-thought can be found in bAbI_baseline/COT_prompts_v3. For StepGame, the prompts for the baseline few-shot prompting and chain-of-thought can be found in the directory stepGame/prompts.

The following table records the cost of the GPT-3 queries used in the GPT-3 baselines and our method, where Eng. denotes the engine of GPT-3, and c1, d2, and d3 denote text-curie-001, text-davinci-002, and text-davinci-003.

| Dataset | Method | Eng. | #Data | Cost |
|------------|-----------|--------|---------|--------|
| bAbI | Few-Shot | d3 | 20k | $190 |
| bAbI | CoT | d3 | 20k | $280 |
| bAbI | GPT-3+ASP | d3 | 20k | $41 |
| StepGame | Few-Shot | d3 | 1k | $21 |
| StepGame | CoT | d3 | 1k | $26 |
| StepGame | GPT-3+ASP | c1 | 10k | $89 |
| StepGame | GPT-3+ASP | d2 | 10k | $886 |
| CLUTRR 1.0 | GPT-3+ASP | d3 | 879 | $37 |
| CLUTRR 1.3 | GPT-3+ASP | d3 | 400 | $17 |
| CLUTRR-S | GPT-3+ASP | d3 | 563 | $19 |
| gSCAN | GPT-3+ASP | c1 | 8k | $0.2 |
| Pick&Place | Few-Shot | d3 | 40 | $0.5 |
| Pick&Place | GPT-3+ASP | d3 | 40 | $0.4 |

All experiments were conducted on Ubuntu 18.04.2 LTS with two 10-core Intel(R) Xeon(R) E5-2640 v4 CPUs @ 2.40GHz and four GP104 [GeForce GTX 1080] graphics cards.

All datasets used in this paper are publicly available. The bAbI dataset is under the BSD license. The CLUTRR dataset is released under the "Attribution-NonCommercial 4.0 International" license. The StepGame dataset does not have a specified license. The gSCAN dataset is released under the MIT license.

## A Robot Planning

Recently, there has been increasing interest in using LLMs to find a sequence of executable actions for robots, aiming to achieve high-level goals expressed in natural language, such as SayCan (Ahn et al., 2022) and Inner Monologue (Huang et al., 2022). However, it is worth noting that the actions generated by LLMs tend to be loosely connected and do not take into account the intermediate state changes that occur during the execution of these actions.
We based our work on SayCan's open-source virtual tabletop environment,6 where a robot is tasked with achieving a goal, such as "stack the blocks," on a table with colored blocks and bowls. We noticed that the successful plans demonstrated by SayCan are restricted to simple one-step look-ahead plans that do not take into account intermediate state changes. We randomly sampled 40 data instances of the form ⟨Si, Sg, L⟩ in the Pick&Place domain with 4 to 7 blocks and 3 to 7 bowls, possibly stacked together, and with 3 to 10 steps of pick_and_place actions required by the robot to change the initial state Si to the goal state Sg. Here, the label L is the set of instructions to achieve the goals (e.g., "1. Move the violet block onto the blue block. 2..."). Among the 40 data instances, 20 contain only blocks that can be placed on the table, while 20 contain both blocks and bowls and assume all blocks must be on the bowls.

6 https://github.com/google-research/google-research/tree/master/saycan

The baseline for this dataset follows the method in SayCan's open-source virtual tabletop environment, where GPT-3 is used as the large language model to directly find the sequence of actions from Si to Sg. However, the baseline fails to find successful plans for all 40 randomly sampled data instances. This result confirms the claim by (Valmeekam et al., 2022) that large language models are not suitable as planners.

We also applied our method to this task. We let GPT-3 turn the states Si and Sg into atomic facts of the form on(A, B, 0) and on(A, B), respectively. Then, an ASP program for the Pick&Place domain is used to find an optimal plan. We found that while GPT-3 has 0% accuracy in predicting the whole plan, it has 100% accuracy in fact extraction under the provided format. When we apply symbolic reasoning to these extracted atomic facts with an ASP program, we could achieve 100% accuracy on the predicted plans. Details of the prompts are available in Appendix C.5.

| Method | Blocks | Blocks+Bowls |
|---------------|----------|----------------|
| GPT-3(d3) | 0 | 0 |
| GPT-3(d3)+ASP | 100 | 100 |

Table 6: Test accuracy on the Pick&Place dataset. (d3) denotes the text-davinci-003 model.

## B More About CLUTRR

## B.1 CLUTRR 1.3 Data Generation

We used the CLUTRR 1.3 code to generate 400 test data instances.7

7 We used the development branch of the CLUTRR repository https://github.com/facebookresearch/clutrr/tree/develop.

Our generated CLUTRR 1.3 dataset consists of 100 data instances for each of the four categories (assuming that the query asks about the relation between persons A and D):

- clean: each story describes 3 relations in a chain of four persons A − B − C − D;
B.2 Evaluation on CLUTRR 1.0 | Training | Testing | BA | GSM | d2 | d3 | |--------------|--------------|------|-------|------|------| | Clean | 58 | 69 | 63 | 68 | | | Supporting | 76 | 66 | 62 | 62 | | | Clean | Irrelevant | 70 | 77 | 66 | 71 | | Disconnected | 49 | 36 | 59 | 59 | | | Supporting | Supporting | 67 | 49 | 83 | 83 | | Irrelevant | Irrelevant | 51 | 63 | 72 | 75 | | Disconnected | Disconnected | 57 | 53 | 63 | 67 | Table 7: Test accuracy on the CLUTRR dataset. BA denotes BiLSTM-Attention. d2 and d3 denote GPT3+ASP with text-davinci-002 and text-davinci-003 model. Table 7 compares the accuracy of our method and the state-of-the-art methods, i.e., BiLSTMAttention (Sinha et al., 2019) and GSM (with a BiLSTM encoder) (Tian et al., 2021), on the (original) CLUTRR test dataset. Except for our method, all other models are trained on a specific split of the CLUTRR training dataset. Training Testing DP d2 d3 Clean Clean **100 100 100** Supporting 99 96 99 Irrelevant 98 99 100 Disconnected 99 98 100 Supporting Supporting **100 100 100** Irrelevant Irrelevant 100 97 100 Disconnected Disconnected 94 97 100 Table 8: Test accuracy on the CLUTRR-S dataset. DP denotes DeepProbLog, d2 and d3 denote GPT-3+ASP with the text-davinci-002 and text-davinci-003 model. Table 8 compares the accuracy of our method and the state-of-the-art method, DeepProbLog (Manhaeve et al., 2021) on the CLUTRR-S test dataset. With GPT-3(d2)+ASP on the CLUTRRS dataset, 550 out of 563 predictions are correct, and 13 are wrong. All errors occur due to the entities in a relation being swapped. For example, we use "son(A,B)" to represent "A has a son B" while GPT-3 text-davinci-002 responded with "son(Robert,Ryan)" for the sentence "Robert is Ryan's son." On the other hand, text-davinci-003 performed better, with only a single error and 562 out of 563 predictions being correct. ## C Prompts For Semantic Parsing Below, we present the details of the general knowledge of the prompts that we summarized and applied in this paper, followed by some examples. 1. If the information in a story (or query) can be extracted independently, parsing each sentence separately (using the same prompt multiple times) typically works better than parsing the whole story. Since people usually cache all GPT-3 responses to save cost by avoiding duplicated GPT-3 requests for the same prompt, parsing each sentence separately also yields better usage of cached responses. Below are some examples. - In most bAbI tasks (except for tasks 11 and 13), the sentences in a story (including the query sentence) are independent of each other. We parse each sentence separately using GPT-3 as in the Appendix C.1. - In the stepGame dataset, each sentence in a story describes the spatial relation between 2 objects. There are 4 sentences in a story when k = 1 and about 20 sentences when k = 10. If we ask GPT-3 to extract all the atomic facts from the whole story, it always misses some atoms or predicts wrong atoms. Since every sentence is independent of each other as shown in Figure 1, we use the following (truncated) prompt multiple times for each data instance where each time [INPUT] is replaced with one sentence in the story or the query. This yields a much higher accuracy as in Section 4.3. The complete prompt is available in Appendix C.2. ![14_image_0.png](14_image_0.png) However, if some sentences in a story are dependent, splitting them may lead to unexpected results in the GPT-3 response. Below are some examples. 
- In bAbI task \#11 and \#13, a story may contain the two consecutive sentences "Mary went back to the bathroom. After that she went to the bedroom." There is a dependency on the sentences to understand that "she" in the second sentence refers to "Mary" in the first. For this reason, task \#11 stories are parsed as a whole. This is similar for task \#13. - In the CLUTRR dataset, a story may contain sentences with coreferences like "Shirley enjoys playing cards with her brother. His name is Henry." where the latter sentence depends on the former one, and a family relation can be correctly extracted only with both sentences. Thus for CLUTRR datasets (i.e., CLUTRR 1.0, CLUTRR 1.3, and CLUTRR-S), we extract the family relations and gender relations from the whole story. 2. There is certain commonsense knowledge that GPT-3 is not aware of, and describing the missing knowledge in the prompt works better than adding examples only. This happens when GPT-3 cannot generalize such knowledge well with a few examples. - For example, in StepGame dataset, clock numbers are used to denote cardinal directions, e.g., "H is below J at 4 o'clock" means "H is on the bottom-right of J". Such knowledge in the dataset is not well captured by GPT-3 and enumerating examples in the prompt doesn't work well. On the other hand, describing such knowledge at the beginning of the prompt as shown in Appendix C.2 increases the accuracy by a large margin. ## C.1 Babi For bAbI dataset, there are two prompts for each task, corresponding to the context and query. Each prompt has a consistent set of basic instructions followed by example pairs of text and parsed text. Below are the prompts used to parse the context and query facts from a story and query, where [Input] at the end is replaced with the story in each test data instance. We only present the prompts for Tasks 1,2, and 3. The rest of the prompts can be found in the repository in https: //github.com/azreasoners/LLM-ASP/ blob/main/bAbI/GPT_prompts.py. ## Tasks 1/2/3 (Context) Please parse the following statements into facts . The available keywords are: pickup, drop, and go. Sentence: Max journeyed to the bathroom. Semantic parse: go(Max, bathroom). Sentence: Mary grabbed the football there. Semantic parse: pickup(Mary, football). Sentence: Bob picked up the apple. Semantic parse: pickup(Bob, apple). Sentence: Susan dropped the milk. Semantic parse: drop(Susan, milk). Sentence: Bob got the football there. Semantic parse: pickup(Bob, football). Sentence: Max left the cup. Semantic parse: drop(Max, cup). Sentence: Kevin put down the pie there. Semantic parse: drop(Kevin, pie). Sentence: John took the football there. Semantic parse: pickup(John, football). Sentence: [INPUT] Semantic parse: ## Task 1 (Query) Please parse the following questions into query facts. The available keywords are: whereAgent. Sentence: Where is Mary? Semantic parse: whereAgent(Mary). Sentence: Where is Daniel? Semantic parse: whereAgent(Daniel). Sentence: Where is Sandra? Semantic parse: whereAgent(Sandra). Sentence: Where is John? Semantic parse: whereAgent(John). Sentence: [INPUT] Semantic parse: ## Task 2 (Query) Please parse the following questions into query facts. The available keywords are: loc. Sentence: Where is the toothbrush? Semantic parse: loc(toothbrush). Sentence: Where is the milk? Semantic parse: loc(milk). Sentence: Where is the apple? Semantic parse: loc(apple). Sentence: Where is the football? Semantic parse: loc(football). 
Sentence: [INPUT] Semantic parse: ## Task 3 (Query) Please parse the following questions into query facts. The available keywords are: loc. Sentence: Where was the football before the bathroom? Semantic parse: before(football,bathroom). Sentence: Where was the apple before the garden? Semantic parse: before(apple,garden). Sentence: Where was the milk before the kitchen? Semantic parse: before(milk,kitchen). Sentence: Where was the apple before the bedroom ? Semantic parse: before(apple,bedroom). Sentence: Where was the football before the hallway? Semantic parse: before(football,hallway). Sentence: [INPUT] Semantic parse: ## C.2 Stepgame For the StepGame dataset, there is only one prompt below to extract the location relations among objects. All example sentences are from the training data in (the noise split of) the original StepGame dataset.8 The [Input] at the end of the prompt is replaced with each sentence in a test data instance. Please parse each sentence into a fact. If the sentence is describing clock-wise information, then 12 denotes top, 1 and 2 denote top_right, 3 denotes right, 4 and 5 8https://github.com/ZhengxiangShi/ StepGame/tree/main/Code/babi_format/ noise denote down_right, 6 denotes down, 7 and 8 denote down_left, 9 denote left, 10 and 11 denote top_left. If the sentence is describing cardinal directions, then north denotes top, east denotes right, south denotes down, and west denotes left. If the sentence is a question, the fact starts with query. Otherwise, the fact starts with one of top, down, left, right, top_left, top_right, down_left, and down_right. Sentence: What is the relation of the agent X to the agent K? Semantic Parse: query("X", "K"). Sentence: H is positioned in the front right corner of M. Semantic Parse: top_right("H", "M"). Sentence: F is on the left side of and below Q. Semantic Parse: down_left("F", "Q"). Sentence: Y and I are parallel, and Y is on top of I. Semantic Parse: top("Y", "I"). Sentence: V is over there with T above. Semantic Parse: top("T", "V"). Sentence: V is slightly off center to the top left and G is slightly off center to the bottom right. Semantic Parse: top_left("V", "G"). Sentence: The objects S and A are over there. The object S is lower and slightly to the left of the object A. Semantic Parse: down_left("S", "A"). Sentence: D is diagonally below Z to the right at a 45 degree angle. Semantic Parse: down_right("D", "Z"). Sentence: V is at A's 9 o'clock. Semantic Parse: left("V", "A"). Sentence: J is at O's 6 o'clock. Semantic Parse: down("J", "O"). Sentence: H is below J at 4 o'clock. Semantic Parse: down_right("H", "J"). Sentence: O is there and C is at the 5 position of a clock face. Semantic Parse: down_right("C", "O"). Sentence: If H is the center of a clock face, B is located between 10 and 11. Semantic Parse: top_left("B", "H"). Sentence: [Input] Semantic Parse: ## C.3 Clutrr For CLUTRR dataset, there are two prompts to extract the family relations and genders from a story respectively. All example stories in both prompts are from the training data "data_06b8f2a1/2.2,2.3_train.csv" in the original CLUTRR dataset.9 Below is the prompt to extract family relations from a story where [Input] at the end is replaced with the story in each test data instance. Given a story, extract atomic facts of the form relation("Person", "Person"). 
Example relations are: father, mother, parent, son, daughter, child, grandfather, grandmother, grandson, granddaughter, wife, husband, spouse, sibling, nephew, niece, uncle, aunt, child_in_law, and parent_in_law. Story: [Verdie] waved good bye to her dad [Henry ] for the day and went next door with her sister [Amanda]. [Henry]'s daughter, [Amanda ], went to the city this weekend. She spent her time there visiting her grandfather, [ Kyle], and had a wonderful time with him. Semantic Parse: father("Verdie", "Henry"). sister("Verdie", "Amanda"). daughter("Henry ", "Amanda"). grandfather("Amanda", "Kyle"). Story: [Michelle] was excited for today, its her daughter's, [Theresa], spring break. She will finally get to see her. [Michael] was busy and sent his wife, [Marlene], instead. [Kristen] loved to care for her newborn child [Ronald]. [Eric]'s son is [Arthur]. Semantic Parse: daughter("Michelle", "Theresa"). wife("Michael", "Marlene"). child("Kristen ", "Ronald"). son("Eric", "Arthur"). Story: [Vernon] was present in the delivery room when his daughter [Raquel] was born, but when his daughter [Constance] was born he was too sick. [Vernon] and his daughter [ Margaret] went to the movies. [Constance], [ Margaret]'s sister, had to stay home as she was sick. Semantic Parse: daughter("Vernon", "Raquel"). daughter("Vernon", "Constance"). daughter(" Vernon", "Margaret"). sister("Margaret", " Constance"). Story: [Eric] who is [Carl]'s father grounded [ Carl] after finding out what [Carl] had done at school. [Ronald] was busy planning a 90 th birthday party for his aunt, [Theresa]. [ Eric] and his son [Carl] went to the park and saw [Eric]'s father [Kyle] there with his dog. Semantic Parse: father("Carl", "Eric"). aunt(" Ronald", "Theresa"). son("Eric", "Carl"). father("Eric", "Kyle"). Story: [Shirley] and [Edward] are siblings and best friends. They do everything together. [ Henry] walked his daughters [Amanda] and [ Michelle] to school. [Kyle] enjoys watching movies with his son's daughter. Her name is [Amanda]. Semantic Parse: sibling("Shirley", "Edward"). daughter("Henry", "Amanda"). daughter("Henry 9The original CLUTRR data is available in https:// github.com/facebookresearch/clutrr. ", "Michelle"). granddaughter("Kyle", " Amanda"). Story: [Raquel] and her brother [Casey] took her grandmother [Karen] to the store to buy a new dress. [Karen] and her husband [Kyle] just celebrated 10 years of marriage. [Karen ] loves her grandson, [Casey], and he loves her too. Semantic Parse: brother("Raquel", "Casey"). grandmother("Raquel", "Karen"). husband(" Karen", "Kyle"). grandson("Karen", "Casey"). Story: [Allen]'s father, [Eric], bought him some ice cream. [Karen] was baking cookies for her grandson, [Allen]. [Allen]'s brother [ Arthur] came home from school, so she baked some extra for him, too. [Eric]'s son, [ Arthur], was ill and needed to be picked up at school. [Eric] hurried to his side. Semantic Parse: father("Allen", "Eric"). grandson("Karen", "Allen"). brother("Allen", "Arthur"). son("Eric", "Arthur"). Story: [Karen] was spending the weekend with her grandson, [Eddie]. [Eddie]'s sister [ Michelle] was supposed to come too, but she was busy and could n't make it. [Theresa] took her daughter, [Michelle], out to High Tea yesterday afternoon. [Eddie]'s mother [ Theresa] baked brownies for dessert after they had dinner. Semantic Parse: grandson("Karen", "Eddie"). sister("Eddie", "Michelle"). daughter(" Theresa", "Michelle"). mother("Eddie", " Theresa"). 
Story: [Input] Semantic Parse: We also use a variant of the above prompt to extract the gender of each person in a story. The prompt context is a bit simpler as there are only two genders. The examples are the same while the Semantic Parse result is simply replaced with the atomic facts about gender information. Below is the prompt to extract the gender of each person in a story where [Input] is replaced with the story in each test data instance. Given a story, extract atomic facts of the form male("Person") or female("Person") for every person that appears in the sentences. Story: [Verdie] waved good bye to her dad [Henry ] for the day and went next door with her sister [Amanda]. [Henry]'s daughter, [Amanda ], went to the city this weekend. She spent her time there visiting her grandfather, [ Kyle], and had a wonderful time with him. Semantic Parse: female("Verdie"). male("Henry"). female("Amanda"). male("Kyle"). Story: [Michelle] was excited for today, its her daughter's, [Theresa], spring break. She will finally get to see her. [Michael] was busy and sent his wife, [Marlene], instead. [Kristen] loved to care for her newborn child [Ronald]. [Eric]'s son is [Arthur]. Semantic Parse: female("Michelle"). female(" Theresa"). male("Michael"). female("Marlene "). female("Kristen"). male("Ronald"). male ("Eric"). male("Arthur"). Story: [Vernon] was present in the delivery room when his daughter [Raquel] was born, but when his daughter [Constance] was born he was too sick. [Vernon] and his daughter [ Margaret] went to the movies. [Constance], [ Margaret]'s sister, had to stay home as she was sick. Semantic Parse: male("Vernon"). female("Raquel") . female("Constance"). female("Margaret"). Story: [Eric] who is [Carl]'s father grounded [ Carl] after finding out what [Carl] had done at school. [Ronald] was busy planning a 90 th birthday party for his aunt, [Theresa]. [ Eric] and his son [Carl] went to the park and saw [Eric]'s father [Kyle] there with his dog. Semantic Parse: male("Eric"). male("Carl"). male ("Ronald"). female("Theresa"). male("Kyle"). Story: [Shirley] and [Edward] are siblings and best friends. They do everything together. [ Henry] walked his daughters [Amanda] and [ Michelle] to school. [Kyle] enjoys watching movies with his son's daughter. Her name is [Amanda]. Semantic Parse: female("Shirley"). male("Edward "). male("Henry"). female("Amanda"). female ("Michelle"). male("Kyle"). Story: [Raquel] and her brother [Casey] took her grandmother [Karen] to the store to buy a new dress. [Karen] and her husband [Kyle] just celebrated 10 years of marriage. [Karen ] loves her grandson, [Casey], and he loves her too. Semantic Parse: female("Raquel"). male("Casey"). female("Karen"). male("Kyle"). Story: [Allen]'s father, [Eric], bought him some ice cream. [Karen] was baking cookies for her grandson, [Allen]. [Allen]'s brother [ Arthur] came home from school, so she baked some extra for him, too. [Eric]'s son, [ Arthur], was ill and needed to be picked up at school. [Eric] hurried to his side. Semantic Parse: male("Allen"). male("Eric"). female("Karen"). male("Arthur"). Story: [Karen] was spending the weekend with her grandson, [Eddie]. [Eddie]'s sister [ Michelle] was supposed to come too, but she was busy and could n't make it. [Theresa] took her daughter, [Michelle], out to High Tea yesterday afternoon. [Eddie]'s mother [ Theresa] baked brownies for dessert after they had dinner. Semantic Parse: female("Karen"). male("Eddie"). female("Michelle"). female("Theresa"). 
Story: [Input] Semantic Parse: For CLUTRR-S dataset, i.e., the simpler version of the CLUTRR dataset from DeepProbLog (Manhaeve et al., 2021) repository, there are also two prompts below to extract the family relations and genders from a story respectively.10 All example stories in both prompts are from the training data "data_a7d9402e/2.2,2.3_train.csv". Given a story, extract atomic facts of the form relation("Person", "Person") about family relationships that appear in the sentences. Story: [Mervin] is [Robert]'s father. [Robert] is the father of [Jim]. [Jon] is [Robert]'s brother. [Mervin] is the father of [Jon]. Semantic Parse: father("Robert", "Mervin"). father("Jim", "Robert"). brother("Robert", " Jon"). father("Jon", "Mervin"). Story: [Brooke] is [Cheryl]'s sister. [Jon] is the father of [Brooke]. [Melissa] is [Jon]' s mother. [Jon] is [Cheryl]'s father. Semantic Parse: sister("Cheryl", "Brooke"). father("Brooke", "Jon"). mother("Jon", " Melissa"). father("Cheryl", "Jon"). Story: [Jon] is [Carol]'s brother. [Carol] is [ Joyce]'s mother. [Helen] is [Carol]'s sister. [Helen] is a sister of [Jon]. Semantic Parse: brother("Carol", "Jon"). mother ("Joyce", "Carol"). sister("Carol", "Helen") . sister("Jon", "Helen"). Story: [Melissa] is [Glenn]'s grandmother. [ Melissa] is the mother of [Calvin]. [Glenn] is a son of [Lila]. [Calvin] is [Glenn]'s father. Semantic Parse: grandmother("Glenn", "Melissa"). mother("Calvin", "Melissa"). son("Lila", " Glenn"). father("Glenn", "Calvin"). Story: [Margaret] has a brother named [William]. [William] is [Carol]'s son. [Margaret] is [Carol]'s daughter. [Lila] is the aunt of [William]. Semantic Parse: brother("Margaret", "William"). son("Carol", "William"). daughter("Carol", " Margaret"). aunt("William", "Lila"). Story: [Stephanie] is a sister of [Lois]. [Lois ] is [Theresa]'s sister. [Helen] is [Lois]' s mother. [Helen] is [Stephanie]'s mother. Semantic Parse: sister("Lois", "Stephanie"). sister("Theresa", "Lois"). mother("Lois", " Helen"). mother("Stephanie", "Helen"). Story: [Jon] is [Elias]'s brother. [Michael] is a son of [Helen]. [Jon] is the uncle of [ Michael]. [Elias] is the father of [Michael ]. Semantic Parse: brother("Elias", "Jon"). son(" Helen", "Michael"). uncle("Michael", "Jon"). father("Michael", "Elias"). 10The CLUTRR-S dataset is from https://github. com/ML-KULeuven/deepproblog/tree/master/ src/deepproblog/examples/CLUTRR/data. Story: [Carol] has a son called [William]. [ Melissa] is the mother of [Jon]. [Jon] is the uncle of [William]. [Carol] has a brother named [Jon]. Semantic Parse: son("Carol", "William"). mother ("Jon", "Melissa"). uncle("William", "Jon"). brother("Carol", "Jon"). Story: [Robert] is the father of [Jim]. [Robert ] has a daughter called [Ashley]. [Elias] is [Robert]'s brother. [Elias] is the uncle of [Ashley]. Semantic Parse: father("Jim", "Robert"). daughter("Robert", "Ashley"). brother(" Robert", "Elias"). uncle("Ashley", "Elias"). Story: [Elias] is the father of [Carlos]. [ Elias] is the father of [Andrew]. [Andrew] is [Carlos]'s brother. [Jon] is a brother of [Elias]. Semantic Parse: father("Carlos", "Elias"). father("Andrew", "Elias"). brother("Carlos", "Andrew"). brother("Elias", "Jon"). Story: [Jon] is the father of [Ben]. [James] is [Kevin]'s brother. [Ben] is a brother of [ James]. [Jon] is [James]'s father. Semantic Parse: father("Ben", "Jon"). brother(" Kevin", "James"). brother("James", "Ben"). father("James", "Jon"). Story: [Carol] has a sister named [Lila]. [ William] is [Carol]'s son. 
[Helen] is [Lila ]'s sister. [Lila] is [William]'s aunt. Semantic Parse: sister("Carol", "Lila"). son(" Carol", "William"). sister("Lila", "Helen"). aunt("William", "Lila"). Story: [Calvin] is [Bruce]'s father. [Elias] is [Calvin]'s brother. [Calvin] is [Kira]'s father. [Kira] is [Bruce]'s sister. Semantic Parse: father("Bruce", "Calvin"). brother("Calvin", "Elias"). father("Kira", " Calvin"). sister("Bruce", "Kira"). Story: [Carol] is a sister of [Helen]. [Carol] is [Carlos]'s aunt. [Lila] is [Carol]'s sister. [Carlos] is [Helen]'s son. Semantic Parse: sister("Helen", "Carol"). aunt(" Carlos", "Carol"). sister("Carol", "Lila"). son("Helen", "Carlos"). Story: [Input] Semantic Parse: Note that, although the sentences in the CLUTRRS dataset is much simpler than those in CLUTRR dataset, we don't achieve 100% accuracy in GPT3 responses with the above long prompt. This is partially because the above prompt violates prompting strategy 3 in Section 3 as the order of names in a binary relation in sentences is mostly following "relationOf(A,B)" instead of "relation(B,A)". Given a story, extract atomic facts of the form male("Person") or female("Person") for every person that appears in the sentences. Story: [Jon] is [Carol]'s brother. [Mervin] has a daughter called [Carol]. [Chantell] is a daughter of [Jon]. [Mervin] has a son called [Jon]. Semantic Parse: male("Jon"). female("Carol"). male("Mervin"). female("Chantell"). Story: [Melissa] is [Glenn]'s grandmother. [ Melissa] is the mother of [Calvin]. [Glenn] is a son of [Lila]. [Calvin] is [Glenn]'s father. Semantic Parse: female("Melissa"). male("Glenn") . male("Calvin"). female("Lila"). Story: [Input] Semantic Parse: ## C.4 Gscan For gSCAN dataset, there is only one prompt below to extract the command in each data instance. All example sequences are from the training data.11 The [Input] at the end of the prompt is replaced with the command in each test data instance. Please parse each sequence of words into facts. Sequence: pull a yellow small circle Semantic Parse: query(pull). queryDesc(yellow). queryDesc(small). queryDesc(circle). Sequence: push a big square Semantic Parse: query(push). queryDesc(big). queryDesc(square). Sequence: push a green small square cautiously Semantic Parse: query(push). queryDesc(green). queryDesc(small). queryDesc(square). while( cautiously). Sequence: pull a circle hesitantly Semantic Parse: query(pull). queryDesc(circle). while(hesitantly). Sequence: walk to a yellow big cylinder while spinning Semantic Parse: query(walk). queryDesc(yellow). queryDesc(big). queryDesc(cylinder). while( spinning). Sequence: push a big square while zigzagging Semantic Parse: query(push). queryDesc(big). queryDesc(square). while(zigzagging). Sequence: push a cylinder hesitantly Semantic Parse: query(push). queryDesc(cylinder) . while(hesitantly). Sequence: [Input] Semantic Parse: 11https://github.com/LauraRuis/ groundedSCAN/tree/master/data/ compositional_splits.zip ## C.5 Pick&Place For the Pick&Place dataset, there are two prompts below to extract the atomic facts from the initial state and the goal state, respectively. Turn each sentence into an atomic fact of the form on(A, B, 0). Sentence: The red block is on the yellow bowl. Semantic Parse: on("red block", "yellow bowl", 0). Sentence: The violet block is on the blue block. Semantic Parse: on("violet block", "blue block", 0). Sentence: [INPUT] Semantic Parse: Turn each sentence into an atomic fact of the form on(A, B). Sentence: The red block is on the yellow bowl. 
Semantic Parse: on("red block", "yellow bowl").
Sentence: The violet block is on the blue block.
Semantic Parse: on("violet block", "blue block").
Sentence: [INPUT]
Semantic Parse:

For each sentence in the initial or goal state, we replace [INPUT] in the corresponding prompt above with this sentence and request GPT-3 to extract a single atomic fact. The union of the atomic facts extracted from all sentences is then used in the symbolic reasoner module to find an optimal plan.

For the GPT-3 baseline, we use the following prompt to let GPT-3 directly find a plan, where [INPUT] at the end of the prompt is replaced with the initial and goal state of the queried data instance.

Find a shortest plan to move blocks from an initial state to a goal state. Note that you cannot move a block if anything is on it. You cannot move a block onto a target block or bowl if there is anything is on the target block or bowl. At most two blocks can be placed in the same bowl with one on top of the other.

\# Initial State:
Nothing is on the green bowl.
The violet block is on the blue bowl.
The blue block is on the violet bowl.
The green block is on the blue block.
\# Goal State:
The violet block is on the green bowl.
The green block is on the violet block.
The blue block is on the blue bowl.
Nothing is on the violet bowl.
Plan:
1. Move the violet block onto the green bowl.
2. Move the green block onto the violet block.
3. Move the blue block onto the blue bowl.

\# Initial State:
Nothing is on the blue bowl.
The yellow block is on the green bowl.
The green block is on the violet bowl.
The violet block is on the green block.
The blue block is on the yellow bowl.
The red block is on the blue block.
\# Goal State:
The yellow block is on the blue bowl.
The green block is on the yellow block.
The red block is on the green bowl.
Nothing is on the violet bowl.
The blue block is on the yellow bowl.
The violet block is on the blue block.
Plan:
1. Move the yellow block onto the blue bowl.
2. Move the red block onto the green bowl.
3. Move the violet block onto the blue block.
4. Move the green block onto the yellow block.

[INPUT]
Plan:

## D GPT-3 Errors in Semantic Parsing

In this section, we group and record the errors in the GPT-3 responses in tables where each row records a 3-tuple ⟨dataset, sentence(s), GPT-3 response⟩. Specifically, we list the following:

- all 21 errors for the CLUTRR 1.3 dataset with text-davinci-003;
- the single mistake in the first 100 data instances for every k ∈ {1, . . . , 10} in the StepGame dataset with text-davinci-002.

## D.1 Argument Misorder

A common mistake in the GPT-3 response is that the relation and arguments for an atom are correctly extracted, but the order of the arguments is incorrect. Such mistakes can be greatly alleviated by proper few-shot prompting where the orders of arguments in the example target atoms follow their orders in the stories. There are only 3 errors in CLUTRR 1.3 due to argument misorder. The first 2 mistakes are in fact caused by the missing periods at the end of the sentences; if we simply add the periods back, their GPT-3 responses become correct.

| Dataset | Sentence(s) | GPT-3 response |
|---|---|---|
| CLUTRR 1.3 | [Erasmus] bought to dress for his father [Joeseph] | father("Joeseph", "Erasmus") |
| CLUTRR 1.3 | [Baldwin] bought to dress for his father [Sidney] | father("Sidney", "Baldwin") |
| CLUTRR 1.3 | [Elizabeth] is the uncle of [Leopold]. | uncle("Elizabeth", "Leopold") |

## D.2 Wrong Relation

Sometimes the arguments are correct, but the relations extracted by GPT-3 are incorrect or cannot be recognized by the ASP program.

| Dataset | Sentence(s) | GPT-3 response |
|---|---|---|
| CLUTRR 1.3 | [Leila] and [Enoch] have been married for 35 years. | married("Leila", "Enoch") |
| CLUTRR 1.3 | [Felix] loved going to the store with his mom [Christena]. | mom("Felix", "Christena") |
| CLUTRR 1.3 | [Lottie] and her grandsons [Clinton] and [Enoch] picked berries from the garden. | grandsons("Lottie", "Clinton") grandsons("Lottie", "Enoch") |
| CLUTRR 1.3 | [Archibald] asked his father, [Abraham], to watch over his two sons, [Justus] and [Merle], for the weekend. | sons("Archibald", "Justus", "Merle") |
| StepGame | If E is the center of a clock face, H is located between 2 and 3. | right("H", "E") |

These kinds of mistakes may be resolved by restricting the space of possible relations. For example, the mistakes in the first four rows can be resolved by simply adding the sentence "Use spouse("Person", "Person") if two persons are couples." to the prompt.
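Another way to restrict the relation space is to validate GPT-3's output after parsing rather than through the prompt alone. The sketch below is illustrative only; the predicate whitelist, the normalization map, and the helper function are not part of the released pipeline.

```python
import re

# Predicates the CLUTRR family module (Appendix E.4) understands.
KNOWN = {
    "father", "mother", "parent", "son", "daughter", "child",
    "grandfather", "grandmother", "grandson", "granddaughter",
    "wife", "husband", "spouse", "sibling", "brother", "sister",
    "nephew", "niece", "uncle", "aunt", "child_in_law", "parent_in_law",
}
# A few frequent paraphrases mapped back to known predicates (illustrative, not exhaustive).
NORMALIZE = {"married": "spouse", "mom": "mother", "dad": "father", "grandsons": "grandson"}

# Matches binary atoms with quoted arguments, e.g., married("Leila", "Enoch")
ATOM = re.compile(r'(\w+)\(\s*"([^"]+)"\s*,\s*"([^"]+)"\s*\)')

def clean_atoms(gpt3_response):
    """Keep only binary atoms whose predicate the ASP module can use."""
    atoms = []
    for pred, a, b in ATOM.findall(gpt3_response):
        pred = NORMALIZE.get(pred, pred)
        if pred in KNOWN:
            atoms.append(f'{pred}("{a}", "{b}").')
    return atoms

print(clean_atoms('married("Leila", "Enoch"). mom("Felix", "Christena").'))
# ['spouse("Leila", "Enoch").', 'mother("Felix", "Christena").']
```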
## D.3 Ambiguous or Incorrect Co-reference

| Dataset | Sentence(s) | GPT-3 response |
|---|---|---|
| CLUTRR 1.3 | [Katherine] was eating lunch with his son [Daniel] and his wife [Adelle]. | son("Katherine", "Daniel") wife("Daniel", "Adelle") |
| CLUTRR 1.3 | [Sylvester] and his son [Rollie] went to his mother [Ottilia]'s home | son("Sylvester", "Rollie") mother("Rollie", "Ottilia") |
| CLUTRR 1.3 | [Elsie] and [Lewis] did the Father daughter dance at the recital and his wife [Dollie] was so happy she cried | father("Lewis", "Dollie") wife("Lewis", "Dollie") |
| CLUTRR 1.3 | [Abe] is proud of his daughter's son. His name is [Johnie]. | son("Johnie", "Abe") |

## D.4 Anonymous Argument

| Dataset | Sentence(s) | GPT-3 response |
|---|---|---|
| CLUTRR 1.3 | [Watt] and [Celestine] asked their mother, if they could go play in the pool. | mother("Watt", "Celestine") |
| CLUTRR 1.3 | [Davie] told his wife [Kitty] to pick up their daughter. | wife("Davie", "Kitty") daughter("Davie", "Kitty") |
| CLUTRR 1.3 | [Ethyl] just got married to her now husband, and in the process she got herself a daughter-in-law named [Henry]. | husband("Ethyl", "Michael"). daughter_in_law("Ethyl", "Henry") |
| CLUTRR 1.3 | [Barnett] and [Manie] asked their mother, if they could go play in the pool. | mother("Barnett", "Manie") |
| CLUTRR 1.3 | [Santiago] told his wife [Octavia] to pick up their daughter. | wife("Santiago", "Octavia") daughter("Santiago", "Octavia") |

## D.5 Missed to Generate Some Atoms

| Dataset | Sentence(s) | GPT-3 response |
|---|---|---|
| CLUTRR 1.3 | [Elizabeth] bought to dress for his father [Leopold] [Orson] took his wife [Abbie] out for dinner. | father("Leopold", "Orson") |
| CLUTRR 1.3 | [Asa] felt lonely when his wife [Magdalena] was gone to see her mother [Josiephine]. | wife("Asa", "Magdalena") |
| CLUTRR 1.3 | [Warner]'s father, [Johnny], and grandfather, [Bryant], went hiking during the first weekend of spring. | male("Johnny") male("Bryant") |
| CLUTRR 1.3 | [Hollie] and [Rosanna], the happy couple, just got married last week. | - |
| CLUTRR 1.3 | [Violet] took her brother [Travis] to the park, but left her sister [Serena] at home. | brother("Violet", "Travis") |

## E ASP Knowledge Modules

## E.1 Discrete Event Calculus (DEC) Axioms Module

% (DEC1)
stopped_in(T1,F,T2) :- timepoint(T), timepoint(T1), timepoint(T2), fluent(F), event(E),
    happens(E,T), T1<T, T<T2, terminates(E,F,T).

% (DEC2)
started_in(T1,F,T2) :- timepoint(T), timepoint(T1), timepoint(T2), fluent(F), event(E),
    happens(E,T), T1<T, T<T2, initiates(E,F,T).

% (DEC3)
holds_at(F2,T1+T2) :- timepoint(T1), timepoint(T2), fluent(F1), fluent(F2), event(E),
    happens(E,T1), initiates(E,F1,T1), 0<T2, trajectory(F1,T1,F2,T2),
    not stopped_in(T1,F1,T1+T2).

% (DEC4)
holds_at(F2,T1+T2) :- timepoint(T1), timepoint(T2), fluent(F1), fluent(F2), event(E),
    happens(E,T1), terminates(E,F1,T1), 0<T2, anti_trajectory(F1,T1,F2,T2),
    not started_in(T1,F1,T1+T2).

initiated(F,T) :- timepoint(T), fluent(F), event(E), happens(E,T), initiates(E,F,T).

terminated(F,T) :- timepoint(T), fluent(F), event(E), happens(E,T), terminates(E,F,T).

released(F,T) :- timepoint(T), fluent(F), event(E), happens(E,T), releases(E,F,T).

% (DEC5)
holds_at(F,T+1) :- timepoint(T), fluent(F), holds_at(F,T), -released_at(F,T+1),
    not terminated(F,T).

% (DEC6)
-holds_at(F,T+1) :- timepoint(T), fluent(F), -holds_at(F,T), -released_at(F,T+1),
    not initiated(F,T).

% (DEC7)
released_at(F,T+1) :- timepoint(T), fluent(F), released_at(F,T), not initiated(F,T),
    not terminated(F,T).
| | | | | 5207 | | | | | | Task | DEC Axioms | Action | Location | Family Relation | |------------------------------------------------------------------|--------------|----------|------------|-------------------| | 1: Single supporting fact | ✓ | ✓ | | | | 2: Two supporting facts | ✓ | ✓ | | | | 3: Three supporting facts | ✓ | ✓ | | | | 4: Two arg relations | ✓ | | | | | 5: Three arg relations | ✓ | | | | | 6: Yes/no questions | ✓ | ✓ | | | | 7: Counting | ✓ | ✓ | | | | 8: Lists/sets | ✓ | ✓ | | | | 9 : Simple negation | ✓ | ✓ | | | | 10: Indefinite knowledge | ✓ | ✓ | | | | 11: Basic coreference | ✓ | ✓ | | | | 12: Conjunction | ✓ | ✓ | | | | 13: Compound coreference | ✓ | ✓ | | | | 14: Time reasoning | ✓ | ✓ | | | | 15: Basic deduction 16: Basic induction 17: Positional reasoning | ✓ | | | | | 18: Size reasoning 19: Path finding | ✓ | ✓ | ✓ | | | 20: Agents motivations StepGame | ✓ | | | | | gSCAN | ✓ | ✓ | | | | CLUTRR | ✓ | | | | | Pick&Place | ✓ | ✓ | | | Table 9: Knowledge modules used for each of the tasks. Note that DEC Axioms, **action**, and **location** modules are used in at least two datasets. Some domains aren't listed as they are small and domain specific. % (DEC8) -released_at(F,T+1) :- timepoint(T), fluent(F), -released_at(F,T), not released(F,T). % (DEC9) holds_at(F,T+1) :- timepoint(T), fluent(F), event(E), happens(E,T), initiates(E,F,T). % (DEC10) -holds_at(F,T+1) :- timepoint(T), fluent(F), event(E), happens(E,T), terminates(E,F,T). % (DEC11) released_at(F,T+1) :- timepoint(T), fluent(F), event(E), happens(E,T), releases(E,F,T). % (DEC12) -released_at(F,T+1) :- timepoint(T), fluent(F), event(E), happens(E,T), initiates(E,F,T). -released_at(F,T+1) :- timepoint(T), fluent(F), event(E), happens(E,T), terminates(E,F,T). ## E.2 Action Module %******************** * common interface * check: if location(unknown) is needed *********************% % what happened in the given story happens(action(A, pickup, I), T) :- pickup(A, I, T). happens(action(A, drop, I), T) :- drop(A, I, T). happens(action(A1, give, A2, I), T) :- give(A1, I, A2, T). happens(action(A, goto, L), T) :- go(A, L, T). happens(action(A, goto, L), T) :- isIn(A, L, T). %******************** * basic atoms *********************% direction(east; west; north; south). agent(A) :- happens(action(A, _, _), _). agent(A) :- happens(action(A, _, _, _), _). agent(A) :- happens(action(_, give, A, _), _). item(I) :- happens(action(_, pickup, I), _). item(I) :- happens(action(_, drop, I), _). item(I) :- happens(action(_, give, _, I), _). location(L) :- happens(action(_, goto, L), _), not direction(L). %******************** * atoms in DEC_AXIOMS *********************% % event/1 event(action(A, pickup, I)) :- agent(A), item(I) . event(action(A, drop, I)) :- agent(A), item(I). event(action(A1, give, A2, I)) :- agent(A1), agent(A2), item(I), A1 != A2. event(action(A, goto, L)) :- agent(A), location( L). event(action(A, goto, D)) :- agent(A), direction (D). event(action(robot, pick_and_place, Src, Dst)) :- feature(Src, block), location(Dst), Src != Dst. % timepoint/1 timepoint(T) :- happens(_, T). % the timepoint in story timepoint(T) :- T=0..N, maxtime(N). % the timepoint for planning without story % fluent/1 fluent(at(A, L)) :- agent(A), location(L). fluent(at(I, L)) :- item(I), location(L). fluent(carry(A, I)) :- agent(A), item(I). fluent(on(B, L)) :- feature(B, block), location( L), B!=L. % -released_at/2 % 1. -released_at(F, T) means commonsense law of inertia (CLI) can be applied to fluent F at T % 2. 
CLI is also applied to this literal itself -released_at(F, 0) :- fluent(F). % holds_at/2 % initial states of fluents -- only location of items needs to be guessed {holds_at(at(I, L), 0): location(L)} = 1 :- item (I). holds_at(on(B, L), 0) :- on(B, L, 0). % happens/2 % for each timepoint, at most 1 event happens; and it happens as fewer as possible % {happens(E, T): event(E)}1 :- timepoint(T). % this rule would slow down many tasks :∼ happens(E, T). [1@0, E, T] % every action should have some effect %:- happens(E,T), not initiates(E,_,T). % precondition on actions -- pickup :- happens(action(A, pickup, I), T), holds_at(at (A, L1), T), holds_at(at(I, L2), T), L1 != L2. % initiates/3 and terminates/3 % effect of actions -- pickup initiates(action(A, pickup, I), carry(A, I), T) :- agent(A), item(I), timepoint(T). timepoint(T), A1 != A2. terminates(action(A1, give, A2, I), carry(A1, I) , T) :- agent(A1), agent(A2), item(I), timepoint(T), A1 != A2. % effect of actions -- goto initiates(action(A, goto, L), at(A, L), T) :- agent(A), location(L), timepoint(T). initiates(action(A, goto, L), at(I, L), T) :- holds_at(carry(A, I), T), location(L). initiates(action(A, goto, D), at(A, L2), T) :- agent(A), location(L1), location(L2), timepoint(T), holds_at(at(A, L1), T), is(L2, D, L1). terminates(action(A, goto, L1), at(A, L2), T) :- agent(A), location(L1), location(L2), timepoint(T), L1 != L2. terminates(action(A, goto, L1), at(I, L2), T) :- holds_at(carry(A, I), T), location(L1), location(L2), L1 != L2. terminates(action(A, goto, Direction), at(A, L), T) :- happens(action(A, goto, Direction), T), holds_at(at(A, L), T), Direction != L. % effect of actions -- pick_and_place initiates(action(robot, pick_and_place, Src, Dst ), on(Src, Dst), T) :- feature(Src, block), location(Dst), Src != Dst, timepoint(T), not holds_at(on(_, Src), T), not holds_at(on(_, Dst), T): Dst!="table". terminates(action(robot, pick_and_place, Src, Dst), on(Src, L), T) :- holds_at(on(Src, L), T), location(Dst), Dst != L. ## E.3 Location Module % general format translation, which can also be easily done in python script % (this is not needed if we directly extract the general form in the beginning as in bAbI task4) is(A, top, B) :- top(A, B). is(A, top, B) :- up(A, B). is(A, down, B) :- down(A, B). is(A, left, B) :- left(A, B). is(A, right, B) :- right(A, B). is(A, top_left, B) :- top_left(A, B). is(A, top_right, B) :- top_right(A, B). is(A, down_left, B) :- down_left(A, B). is(A, down_right, B) :- down_right(A, B). is(A, east, B) :- east(A, B). is(A, west, B) :- west(A, B). is(A, south, B) :- south(A, B). is(A, north, B) :- north(A, B). | ; east, eastOf; | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | top, northOf; down, southOf; left, westOf; right, eastOf | | ). synonyms(A, B) :- synonyms(B, A). synonyms(A, C) :- synonyms(A, B), synonyms(B, C) , A!=C. | | % effect of actions -- drop terminates(action(A, drop, I), carry(A, I), T) :- agent(A), item(I), timepoint(T). % effect of actions -- give initiates(action(A1, give, A2, I), carry(A2, I), T) :- agent(A1), agent(A2), item(I), 5209 | % define the offsets of 8 spacial relations offset( overlap,0,0; top,0,1; down,0,-1; left,-1,0; right,1,0; top_left,-1,1; top_right,1,1; down_left ,-1,-1; down_right,1,-1 ). 
% derive the kind of spacial relation from synonyms and offset is(A, R1, B) :- is(A, R2, B), synonyms(R1, R2). is(A, R1, B) :- is(B, R2, A), offset(R2,X,Y), offset(R1,-X,-Y). % derive the location of every object % the search space of X or Y coordinate is within -100 and 100 % (to avoid infinite loop in clingo when data has error) nums(-100..100). location(A, Xa, Ya) :- location(B, Xb, Yb), nums(Xa), nums(Ya), is(A, Kind, B), offset(Kind, Dx, Dy), Xa-Xb=Dx, Ya-Yb=Dy. location(B, Xb, Yb) :- location(A, Xa, Ya), nums(Xb), nums(Yb), is_on(A, Kind, B), offset(Kind, Dx, Dy), Xa-Xb=Dx, Ya-Yb=Dy. ## E.4 Family Module female(B) :- granddaughter(A, B). female(B) :- daughter(A, B). female(B) :- niece(A, B). female(B) :- sister(A, B). female(B) :- mother(A, B). female(B) :- aunt(A, B). female(B) :- grandmother(A, B). % gender-irrelevant relationships | % gender | |------------| | male(B) :- grandson(A, B). male(B) :- son(A, B). male(B) :- nephew(A, B). male(B) :- brother(A, B). male(B) :- father(A, B). male(B) :- uncle(A, B). male(B) :- grandfather(A, B). | |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| sibling(A, B) :- siblings(A, B). sibling(A, B) :- brother(A, B). sibling(A, B) :- sister(A, B). sibling(A, B) :- parent(A, C), parent(B, C), A != B. sibling(A, B) :- sibling(B, A). sibling(A, B) :- sibling(A, C), sibling(C, B), A != B. sibling(A, B); sibling_in_law(A, B) :- child(A, C), uncle(C, B). sibling(A, B); sibling_in_law(A, B) :- child(A, C), aunt(C, B). sibling_in_law(A, B) :- sibling_in_law(B, A). :- spouse(A, B), sibling(A, B). :- spouse(A, B), sibling_in_law(A, B). :- sibling(A, B), sibling_in_law(A, B). spouse(A, B) :- wife(A, B). spouse(A, B) :- husband(A, B). spouse(A, B) :- spouse(B, A). parent(A, B) :- father(A, B). parent(A, B) :- mother(A, B). parent(A, B) :- parent(A, C), spouse(C, B). parent(A, B) :- sibling(A, C), parent(C, B). parent(A, B) :- child(B, A). child(A, B) :- children(A, B). child(A, B) :- son(A, B). child(A, B) :- daughter(A, B). child(A, B) :- spouse(A, C), child(C, B). child(A, B) :- child(A, C), sibling(C, B). child(A, B) :- parent(B, A). grandparent(A, B) :- grandfather(A, B). grandparent(A, B) :- grandmother(A, B). grandparent(A, B) :- parent(A, C), parent(C, B). grandparent(A, B) :- grandchild(B, A). grandparent(A, B) :- sibling(A, C), grandparent( C, B). grandparent(A, B) :- grandparent(A, C), spouse(C , B). grandchild(A, B) :- grandson(A, B). grandchild(A, B) :- granddaughter(A, B). grandchild(A, B) :- grandparent(B, A). greatgrandparent(A, B) :- grandparent(A, C), parent(C, B). greatgrandchild(A, B) :- greatgrandparent(B, A). parent_in_law(A, B) :- spouse(A, C), parent(C, B ). parent(A, B) :- spouse(A, C), parent_in_law(C, B ). parent(A, B); parent_in_law(A, B) :- parent(C, A ), grandparent(C, B). :- parent(A, B), parent(B, A). :- parent(A, B), parent_in_law(A, B). child_in_law(A, B) :- parent_in_law(B, A). % gender-relevant relationships greatgrandson(A, B) :- greatgrandchild(A, B), male(B). greatgranddaughter(A, B) :- greatgrandchild(A, B ), female(B). grandson(A, B) :- grandchild(A, B), male(B). granddaughter(A, B) :- grandchild(A, B), female( B). son(A, B) :- child(A, B), male(B). daughter(A, B) :- child(A, B), female(B). nephew(A, B) :- sibling(A, C), son(C, B). niece(A, B) :- sibling(A, C), daughter(C, B). husband(A, B) :- spouse(A, B), male(B). wife(A, B) :- spouse(A, B), female(B). 
brother(A, B) :- sibling(A, B), male(B). sister(A, B) :- sibling(A, B), female(B). father(A, B) :- parent(A, B), male(B). mother(A, B) :- parent(A, B), female(B). uncle(A, B) :- parent(A, C), brother(C, B). 5210 aunt(A, B) :- parent(A, C), sister(C, B). grandfather(A, B) :- grandparent(A, B), male(B). grandmother(A, B) :- grandparent(A, B), female(B ). greatgrandfather(A, B) :- greatgrandparent(A, B) , male(B). greatgrandmother(A, B) :- greatgrandparent(A, B) , female(B). son_in_law(A, B) :- child_in_law(A, B), male(B). daughter_in_law(A, B) :- child_in_law(A, B), female(B). father_in_law(A, B) :- parent_in_law(A, B), male (B). mother_in_law(A, B) :- parent_in_law(A, B), female(B). ## E.5 Domain Specific Modules In this section, we list all domain-specific rules for each task. Some rules serve as an interface to turn the atoms in GPT-3 responses into a general format used in ASP modules. These rules are not necessary and can be removed if we let GPT-3 directly return the general atoms, e.g., "query(at(A, where))" instead of "whereAgent(A)". To save the cost for GPT-3 requests, we did not reproduce the experiments using new GPT-3 prompts with atoms in general formats. ## E.5.1 Babi Tasks 1 And 11 %%%% Interface -- these rules can be removed if we let GPT3 return the heads directly query(at(A, where)) :- whereAgent(A). % Find where last location of agent is answer(L) :- query(at(A, where)), holds_at(at(A, L), T), T>=Tx: holds_at(at(A, _), Tx). %%%% Interface -- these rules can be removed if we let GPT3 return the heads directly query(at(I, where)) :- loc(I). % Find where last location of object is answer(L) :- query(at(A, where)), holds_at(at(A, L), T), T>=Tx: holds_at(at(A, _), Tx). ## E.5.3 Babi Tasks 3 And 14 % the query before(O, L) is given, asking about the location of O before moving to L % find all location changes of the queried object location_change(L1, L2, T) :- before(O, _), holds_at(at(O, L1), T), holds_at(at(O, L2), T+1), L1 != L2. % find the last location change to queried location answer(L1) :- before(_, L2), location_change(L1, L2, T), T>=Tx: location_change(_, L2, Tx). answer(A) :- query(what, R1, B), is(A, R1, B). answer(B) :- query(A, R1, what), is(A, R1, B). candidate(A1, T) :- query(action(who, give, A, I )), happens(action(A1, give, A2, I), T), A2=A: A!=anyone. candidate(A2, T) :- query(action(A, give, who, I )), happens(action(A1, give, A2, I), T), A1=A: A!=anyone. candidate(I, T) :- query(action(A1, give, A2, what)), happens(action(A1, give, A2, I), T). location(unknown). %%%% Interface -- these rules can be removed if we let GPT-3 return the heads directly give(A1, A2, I, T) :- gave(A1, I, A2, T). query(action(A1, give, A2, what)) :- whatWasGiven(A1, A2). query(action(anyone, give, who, I)) :- received( I). query(action(A1, give, who, I)) :- whoWasGiven( A1, I). query(action(who, give, anyone, I)) :- whoGave(I ). query(action(who, give, A2, I)) :- whoGave(I, A2 ). answer(A) :- candidate(A, T), Tx<=T: candidate(_ , Tx). ## E.5.6 Babi Tasks 6 And 9 answer(yes) :- query(at(A, L)), holds_at(at(A, L ), T), Tx<=T: holds_at(at(A, _), Tx). answer(no) :- not answer(yes). %%%% Interface -- these rules can be removed if we let GPT-3 return the heads directly query(at(A, L)) :- isIn(A, L). % find all items I that A is carrying at the last moment; then count I carry(A, I) :- query(carry(A, count)), holds_at( carry(A,I),T), T>Tx: happens(E,Tx). location(unknown). 
%%%% Interface -- these rules can be removed if we let GPT-3 return the heads directly query(carry(A, count)) :- howMany(A). answer(N) :- query(carry(A, count)), N=\#count{I: carry(A, I)}. %%%% Interface -- these rules can be removed if we let GPT-3 return the heads directly query(carry(A, what)) :- carrying(A). location(unknown). % find all items I that A is carrying at the last moment answer(I) :- query(carry(A, what)), holds_at( carry(A,I),T), T>Tx: happens(E,Tx). released(F,T) :- fluent(F), timepoint(T). answer(yes) :- query(at(A, L)), holds_at(at(A, L ), T), Tx<=T: holds_at(at(A, _), Tx). answer(maybe) :- query(at(A, L)), timepoint(T), 1{isEither(A, L, _, T); isEither(A, _, L, T) }, Tx<=T: holds_at(at(A, _), Tx); Tx<=T: isEither(A, _, _, Tx). answer(no) :- not answer(yes), not answer(maybe) . %%%% Interface -- these rules may be removed if we let GPT-3 return the heads directly query(at(A, L)) :- isInQ(A, L). holds_at(at(A, L), T) :- isIn(A, L, T). go(A, L, T) :- move(A, L, T). timepoint(T) :- isIn(_, _, T). timepoint(T) :- isEither(_, _, _, T). ## E.5.10 Babi Tasks 12 And 13 %%%% Interface -- these rules can be removed if we let GPT-3 return the heads directly query(at(A, where)) :- whereAgent(A). go(A1, L, T) :- go(A1, A2, L, T). go(A2, L, T) :- go(A1, A2, L, T). query(afraid(N, what)) :- agent_afraid(N). animal(frog;lion;swan;rhino). color(green;white;yellow;gray). isColor(Agent2,Color):- isAnimal(Agent,Animal), isColor(Agent,Color),isAnimal(Agent2,Animal) . answer(Color) :- isColor(Name), isColor(Name, Color). % assume the 2nd queried object is at location (0,0) location(B, 0, 0) :- query(_, _, B). % the queried relation R is correct if its offset agrees with the location of A answer(yes) :- query(A, R, B), offset(R, Dx, Dy) , location(A, X, Y), X>0: Dx=1; X<0: Dx=-1; Y>0: Dy=1; Y<0: Dy=-1. answer(no) :- not answer(yes). %%%% Interface -- these rules can be removed if we let GPT-3 return the heads directly is(A, left, B) :- leftOf(A, B). is(A, right, B) :- rightOf(A, B). is(A, top, B) :- above(A, B). is(A, down, B) :- below(A, B). query(A, left, B) :- leftOf_nondirect(A, B). query(A, right, B) :- rightOf_nondirect(A, B). query(A, top, B) :- above_nondirect(A, B). query(A, down, B) :- below_nondirect(A, B). smaller(A, B) :- bigger(B, A). smaller(A, C) :- smaller(A, B), smaller(B, C). answer(yes) :- query(smaller(A, B)), smaller(A, B). answer(no) :- not answer(yes). %%%% Interface -- these rules can be removed if we let GPT-3 return the heads directly query(smaller(A, B)) :- doesFit(A, B). query(smaller(A, B)) :- isBigger(B, A). agent(agent). maxtime(10). % location location(L) :- is(L,_,_). location(L) :- is(_,_,L). % for each timestep, we take at most 1 action {happens(action(agent, goto, D), T): direction(D )}1 :- timepoint(T). % initial location holds_at(at(agent, L), 0) :- initial_loc(L). % goal :- goal(L), not holds_at(at(agent, L), _). % we aim to achieve the goal as early as possible :∼ goal(L), holds_at(at(agent, L), T). [-T@1, goal] loc(kitchen). loc(bedroom). loc(kitchen). loc( garden). obj(pajamas). obj(football). obj(milk). obj( apple). answer(Location) :- query(where, Agent, go), is( Agent, Quality), motivation(Quality,Location ), loc(Location). | 5212 | |--------| answer(Quality) :- query(why, Agent, go, Location), is(Agent, Quality), motivation( Quality, Location), loc(Location). answer(Quality) :- query(why,Agent, get, Obj),is (Agent, Quality), motivation(Quality, Obj), obj(Obj). 
answer(Location) :- query(where, Agent, go), is( Agent, Quality), motivation(Quality, Location), loc(Location). ## E.5.17 Stepgame % assume the 2nd queried object is at location (0,0) location(Q2, 0, 0) :- query(_, Q2). % extract answer relation R such that the offset (Ox,Oy) of R is in the same direction of (X ,Y) answer(R) :- query(Q1, _), location(Q1, X, Y), offset(R, Ox, Oy), Ox=-1: X<0; Ox=0: X=0; Ox=1: X>0; Oy=-1: Y<0; Oy=0: Y=0; Oy=1: Y>0. ## E.5.18 Gscan %******************** * find the goal *********************% % features of objects feature(O, shape, V) :- shape(O, V). feature(O, color, V) :- color(O, V). feature(O, size, V) :- size(O, V). % feature of destination feature(destination, V) :- query(walk), queryDesc(V). feature(destination, V) :- query(push), queryDesc(V). feature(destination, V) :- query(pull), queryDesc(V). % find the destination object and location pos_same(destination, O) :- feature(O,_,_), feature(O,_,V): feature(destination, V), feature(_,_,V). | * basic atoms *********************% agent(agent). item(I) :- pos(I, L), I!=agent. location((X,Y)) :- X=0..N-1, Y=0..N-1, gridSize( N). | |-------------------------------------------------------------------------------------------------------------------------------------------| same(destination, O) :- pos_same(destination, O) , feature(O, size, V), Vx<=V: feature(destination, big), pos_same( destination, Ox), feature(Ox, size, Vx); Vx>=V: feature(destination, small), pos_same (destination, Ox), feature(Ox, size, Vx) . goal(at(agent,L)) :- same(destination, O), pos(O ,L). is((X1,Y1), east, (X2,Y2)) :- location((X1,Y1)), location((X2,Y2)), X1=X2, Y1=Y2+1. is((X1,Y1), west, (X2,Y2)) :- location((X1,Y1)), location((X2,Y2)), X1=X2, Y1=Y2-1. is((X1,Y1), north, (X2,Y2)) :- location((X1,Y1)) , location((X2,Y2)), X1=X2-1, Y1=Y2. is((X1,Y1), south, (X2,Y2)) :- location((X1,Y1)) , location((X2,Y2)), X1=X2+1, Y1=Y2. pos_actions(walk; turn_left; turn_right; stay; push; pull). left_dir(east, north; north, west; west, south; south, east). %******************** * atoms in DEC_AXIOMS *********************% % fluent/1 fluent(dir(A, L)) :- agent(A), direction(L). fluent(ready(A)) :- agent(A). % event/1 event(action(Agent, A)) :- agent(Agent), pos_actions(A). % initial fluent values holds_at(at(O,L),0) :- pos(O, L). holds_at(dir(A,D),0) :- dir(A, D). %%%%%%%%%%%%%%% % action -- walk (to check simplification) %%%%%%%%%%%%%%% | holds_at(dir(A,D),0) :- dir(A, D). % for each timestep, we take at most 1 action {happens(action(agent, A), T): pos_actions(A)}1 :- timepoint(T). % initial location holds_at(at(agent, L), 0) :- initial_loc(L). | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| % precondition % we don't walk in a deadend (i.e., the walk will result in no location change) :- happens(action(agent, walk), T), not initiates(action(agent, walk), _, T). | initiates(action(A, walk), at(A, L2), T) :- agent(A), location(L), timepoint(T), holds_at(dir(A, D), T), holds_at(at(A, L1), T), is(L2, D, L1). % terminates/3 terminates(action(A, walk), at(A, L1), T) :- agent(A), location(L), timepoint(T), holds_at(dir(A, D), T), holds_at(at(A, L1), T), is(L2, D, L1). 
% precondition % we don't walk in a deadend (i.e., the walk will result in no location change) | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| %%%%%%%%%%%%%%% % action -- turn_left (to check simplification) %%%%%%%%%%%%%%% 5213 | %%%%%%%%%%%%%%% % initiates/3 initiates(action(A, turn_left), dir(A, D2), T) :- agent(A), timepoint(T), holds_at(dir(A, D1), T), left_dir(D1, D2). | |------------------------------------------------------------------------------------------------------------------------------------------------------| % terminates/3 terminates(action(A, turn_left), dir(A, D1), T) :- agent(A), timepoint(T), holds_at(dir(A, D1), T). %%%%%%%%%%%%%%% % action -- turn_right (to check simplification) %%%%%%%%%%%%%%% % initiates/3 initiates(action(A, turn_right), dir(A, D2), T) :- agent(A), timepoint(T), holds_at(dir(A, D1), T), left_dir(D2, D1). % terminates/3 terminates(action(A, turn_right), dir(A, D), T) :- agent(A), timepoint(T), holds_at(dir(A, D), T). %%%%%%%%%%%%%%% % action -- push/pull %%%%%%%%%%%%%%% % initiates/3 for objects with size <= 2 initiates(action(A, push), at(A, L2), T) :- agent(A), holds_at(at(A, L1), T), holds_at( dir(A, D), T), same(destination, Target), holds_at(at( Target, L1), T), is(L2, D, L1), feature(Target, size, V), V <= 2. initiates(action(A, push), at(Target, L2), T) :- agent(A), holds_at(at(A, L1), T), holds_at( dir(A, D), T), same(destination, Target), holds_at(at( Target, L1), T), is(L2, D, L1), feature(Target, size, V), V <= 2. initiates(action(A, pull), at(A, L2), T) :- agent(A), holds_at(at(A, L1), T), holds_at( dir(A, D), T), same(destination, Target), holds_at(at( Target, L1), T), is(L1, D, L2), feature(Target, size, V), V <= 2. initiates(action(A, pull), at(Target, L2), T) :- agent(A), holds_at(at(A, L1), T), holds_at( dir(A, D), T), same(destination, Target), holds_at(at( Target, L1), T), is(L1, D, L2), feature(Target, size, V), V <= 2. % terminates/3 for objects with size <= 2 terminates(action(A, push), at(A, L1), T) :- agent(A), holds_at(at(A, L1), T), holds_at( dir(A, D), T), same(destination, Target), holds_at(at( Target, L1), T), is(L2, D, L1), feature(Target, size, V), V <= 2. terminates(action(A, push), at(Target, L1), T) :- agent(A), holds_at(at(A, L1), T), holds_at( dir(A, D), T), same(destination, Target), holds_at(at( Target, L1), T), is(L2, D, L1), feature(Target, size, V), V <= 2. terminates(action(A, pull), at(A, L1), T) :- agent(A), holds_at(at(A, L1), T), holds_at( dir(A, D), T), same(destination, Target), holds_at(at( Target, L1), T), is(L1, D, L2), feature(Target, size, V), V <= 2. terminates(action(A, pull), at(Target, L1), T) :- agent(A), holds_at(at(A, L1), T), holds_at( dir(A, D), T), same(destination, Target), holds_at(at( Target, L1), T), is(L1, D, L2), feature(Target, size, V), V <= 2. | same(destination, Target), holds_at(at( Target, L1), T), is(L1, D, L2), feature(Target, size, V), V <= 2. 
| |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | % initiates/3 for objects with size >= 3 initiates(action(A, push), ready(A), T) :- agent(A), holds_at(at(A, L1), T), holds_at( dir(A, D), T), not holds_at(ready(A), T) , same(destination, Target), holds_at(at( Target, L1), T), feature(Target, size, V ), V >= 3. initiates(action(A, push), at(A, L2), T) :- | | agent(A), holds_at(at(A, L1), T), holds_at( dir(A, D), T), holds_at(ready(A), T), same(destination, Target), holds_at(at( Target, L1), T), feature(Target, size, V ), V >= 3, is(L2, D, L1). | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | initiates(action(A, push), at(Target, L2), T) :- agent(A), holds_at(at(A, L1), T), holds_at( dir(A, D), T), holds_at(ready(A), T), same(destination, Target), holds_at(at( Target, L1), T), feature(Target, size, V ), V >= 3, is(L2, D, L1). initiates(action(A, pull), ready(A), T) :- agent(A), holds_at(at(A, L1), T), holds_at( dir(A, D), T), not holds_at(ready(A), T) , same(destination, Target), holds_at(at( Target, L1), T), feature(Target, size, V ), V >= 3. | initiates(action(A, pull), at(A, L2), T) :- agent(A), holds_at(at(A, L1), T), holds_at( dir(A, D), T), holds_at(ready(A), T), same(destination, Target), holds_at(at( Target, L1), T), feature(Target, size, V ), V >= 3, is(L1, D, L2). initiates(action(A, pull), at(Target, L2), T) :- agent(A), holds_at(at(A, L1), T), holds_at( dir(A, D), T), holds_at(ready(A), T), same(destination, Target), holds_at(at( Target, L1), T), feature(Target, size, V ), V >= 3, is(L1, D, L2). % terminates/3 for objects with size >= 3 { terminates(action(A, push), ready(A), T); terminates(action(A, push), at(A, L1), T); terminates(action(A, push), at(Target, L1), T) }=3 :- agent(A), holds_at(at(A, L1), T), holds_at( dir(A, D), T), holds_at(ready(A), T), same(destination, Target), holds_at(at( Target, L1), T), feature(Target, size, V ), V >= 3, is(L2, D, L1). { terminates(action(A, pull), ready(A), T); terminates(action(A, pull), at(A, L1), T); terminates(action(A, pull), at(Target, L1), T) }=3 :- agent(A), holds_at(at(A, L1), T), holds_at( dir(A, D), T), holds_at(ready(A), T), same(destination, Target), holds_at(at( Target, L1), T), feature(Target, size, V ), V >= 3, is(L1, D, L2). % precondition % 1. we don't push/pull in a deadend (i.e., the action will result in no location change) :- happens(action(agent, push), T), not initiates(action(agent, push), _, T). :- happens(action(agent, pull), T), not initiates(action(agent, pull), _, T). % 2. the agent can push/pull only if it's queried :- happens(action(agent, push), _), not query( push). :- happens(action(agent, pull), _), not query( pull). % 2. 
it's not allowed to have 3 objects (agent + 2 items) in the same cell % (I use holds_at(_, T) instead of timepoint(T) since the latter doesn't cover the last T+1 timestamp) %:- holds_at(_, T), location(L), N = \#count{O: holds_at(at(O, L), T)}, N>2. % 3. after push/pull, the agent cannot do a different action in {walk, push, pull} :- happens(action(agent, A1), T1), happens( action(agent, A2), T2), A1!=A2, T1<T2, 1{A1=push; A1=pull}, 1{A2=push; A2=pull; A2=walk}. % 4. the agent cannot change its direction to push/pull after reaching destination reach_destination(T) :- goal(at(agent,L)), holds_at(at(agent, L), T), not reach_destination(Tx): timepoint(Tx), Tx <T. :- reach_destination(T1), holds_at(dir(agent, D1 ), T1), holds_at(dir(agent, D2), T2), happens(action (agent, push), T2), T1<T2, D1!=D2. :- reach_destination(T1), holds_at(dir(agent, D1 ), T1), holds_at(dir(agent, D2), T2), happens(action (agent, pull), T2), T1<T2, D1!=D2. ## %%%%%%%%%%%%%%% % Goal %%%%%%%%%%%%%%% % 1. (optional to speed up) we need to reach the destination and as early as possible :- goal(at(agent,L)), not reach_destination(_). :∼ goal(at(agent, L)), reach_destination(T). [ T@10, goal] % 2. we need to reach the goal and as early as possible % a. the direction when reaching goal must align with the direction when reaching destination % b. if it's not deadend, there must be something blocking the next push/pull reach_goal(T) :- agent(A), holds_at(at(A, L1), T), holds_at( dir(A, D), T), same(destination, Target), holds_at(at( Target, L1), T), reach_destination(Tr), holds_at(dir(A, D), Tr), holds_at(at(_, L2), T): query(push), is(L2, D, L1); holds_at(at(_, L2), T): query(pull), is(L1, D, L2); not reach_goal(Tx): timepoint(Tx), Tx<T. :- not reach_goal(_). :∼ reach_goal(T). [T@9, goal] %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % additional requirements to achieve the goal %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % the agent cannot move further before reaching destination :- reach_destination(T), goal(at(agent, (Xg,Yg)) ), holds_at(at(agent, (X1,Y1)), Tx), holds_at( at(agent, (X2,Y2)), Tx+1), Tx<T, |X1-Xg| + |Y1-Yg| < |X2-Xg| + |Y2-Yg|. % by default, walking all the way horizontally first and then vertically move(horizontally, T) :- happens(action(agent, walk), T), holds_at(dir(agent, D), T), 1{D= east; D=west}. move(vertically, T) :- happens(action(agent, walk), T), holds_at(dir(agent, D), T), 1{D= south; D=north}. :- not while(zigzagging), move(horizontally, T1) , move(vertically, T2), T1>T2. % hesitantly: the agent must stay after every action in {walk, push, pull} :- while(hesitantly), happens(action(agent, A), T), 1{A=walk; A=push; A=pull}, not happens(action(agent, stay), T+1). % cautiously cautious(T) :- happens(action(agent, turn_left), T), happens(action(agent, turn_right), T+1), happens(action(agent, turn_right), T+2), happens(action(agent, turn_left), T+3). % the agent must be cautious before every action in {walk, push, pull} :- while(cautiously), happens(action(agent, A), T), 1{A=walk; A=push; A=pull}, not cautious(T-4). % spinning spin(T) :- happens(action(agent, turn_left), T), happens(action(agent, turn_left), T+1), happens(action(agent, turn_left), T+2), happens(action(agent, turn_left), T+3). % we always spin at the beginning if there is any action :- while(spinning), happens(_,_), not spin(0). 
% we always spin after every action in {walk, push, pull} except for the last one :- while(spinning), happens(action(agent, A1), T1), happens(action(agent, A2), T2), T1<T2, 1{A1=walk; A1=push; A1=pull}, 1{A2=walk; A2=push; A2=pull}, not spin(T1+1). % zigzagging % if horizontal move is needed, the first move must be horizontal :- while(zigzagging), move(horizontally, _), move(D, Tmin), D!=horizontally, Tmin<=Tx: move(_,Tx). % if a different kind of move D2 is after D1, D2 must be followed directly :- while(zigzagging), move(D1, T1), move(D2, T2) , D1!=D2, T1<T2, not move(D2, T1+2). ## E.5.19 Pick&Place %%%%% % Set up the environment %%%%% % Define the number of grippers for the robot \#const grippers=1. % Define the maximum number of steps to consider {maxtime(M): M=0..10} = 1. :∼ maxtime(M). [M] %%%%% % Extract the features for all items in the intial and goal states % we assume these items form the complete set of items in this example %%%%% feature(I, F) :- on(I,_), F=@gen_feature(I). feature(I, F) :- on(I,_,0), F=@gen_feature(I). feature(I, F) :- on(_,I), I!="table", F= @gen_feature(I). feature(I, F) :- on(_,I,0), I!="table", F= @gen_feature(I). % Define all locations location("table"). location(L) :- feature(L, block). location(L) :- feature(L, bowl). %******************** * atoms in DEC_AXIOMS *********************% % happens/2 {happens(E,T): event(E)}grippers :- timepoint(T) . % **** constraints **** % the goal must be achieved in the end :- maxtime(M), on(A, B), not holds_at(on(A,B), M +1). % At any time T, for each block/bowl, there cannot be 2 items directly on it :- timepoint(T), feature(L, _), 2{holds_at(on(I, L), T): feature(I,_)}. % if there are bowls on the table, a block can only be on a block or a bowl; :- feature(_,bowl), feature(I,block), holds_at( on(I,L),_), {feature(L, block); feature(L, bowl)} = 0. % there cannot be more than max_height-1 blocks stacked on a block up(A,B,T) :- holds_at(on(A, B), T). up(A,C,T) :- up(A,B,T), up(B,C,T). :- timepoint(T), feature(L, block), \#count{I: up (I,L,T)} >= max_height. ## F Dataset Errors This section enumerates the errors in the datasets we found. ## F.1 Babi In task 5, the dataset has two errors with regard to the labels. Error \#1. In the following example, the answer is ambiguous since Bill gives Mary both the football ## And The Apple. CONTEXT: Mary journeyed to the kitchen. Mary went to the bedroom. Mary moved to the bathroom. Mary grabbed the football there. Mary moved to the garden. Mary dropped the football. Fred went back to the kitchen. Jeff went back to the office. Jeff went to the bathroom. Bill took the apple there. Mary picked up the milk there. Mary picked up the football there. Bill went back to the kitchen. Bill went back to the hallway. Fred journeyed to the office. Bill discarded the apple. Mary journeyed to the kitchen. Fred journeyed to the garden. Mary went to the hallway. Mary gave the football to Bill. Bill passed the football to Mary. Bill took the apple there. Bill gave the apple to Mary. Jeff travelled to the kitchen. QUERY: What did Bill give to Mary? PREDICTION: apple Answer: football Error \#2. In the following example, the answer is ambiguous since Fred gives Bill both the milk and the apple. CONTEXT: Mary journeyed to the bathroom. Mary moved to the hallway. Mary went to the kitchen. Bill went back to the bedroom. Bill grabbed the apple there. Fred went back to the garden. Mary went to the garden. Fred took the milk there. Jeff moved to the hallway. Bill dropped the apple there. 
Fred handed the milk to Mary. Mary handed the milk to Fred. Fred went back to the bedroom. Fred passed the milk to Bill. Fred took the apple there. Fred gave the apple to Bill. Jeff went to the kitchen. Bill dropped the milk. QUERY: What did Fred give to Bill? PREDICTION: apple Answer: milk ## F.2 Clutrr We detected 16 data errors in the CLUTRR 1.3 dataset using our method. These errors can be grouped into the following 4 categories. - 5 data instances are due to incorrect relation graphs. For example, one relation graph contains the main part "A-son-B-daughter-Caunt-D" and a noise (supporting) relation "Bspouse-D". However, if B and D are couples, then C should have mother D instead of aunt D. - 9 data instances have a correct relation graph (e.g., A-son-B-grandmother-C-brother-D with a noise supporting relation B-mother-A) but the noise relation is translated into a sentence with a wrong person name (e.g., "D has mother A" instead of "B has mother A"). - 1 data instance has a correct relation graph and story, but has a wrong label (i.e., the label should be mother_in_law instead of mother). - 1 data instance has a correct relation graph and story, but the query cannot be answered due to the ambiguity of a sentence. It uses "A has grandsons B and C" to represent brother(B, C), while B and C may have different parents. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7. A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Sections 2, 3, 4. ✓ B1. Did you cite the creators of artifacts you used? Sections 2, 3, 4. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? At the beginning of the appendix. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 4 and Appendix A, B. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4. ## C ✓ **Did You Run Computational Experiments?** Section 4. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? At the beginning of the appendix. 
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
tam-etal-2023-evaluating
Evaluating the Factual Consistency of Large Language Models Through News Summarization
https://aclanthology.org/2023.findings-acl.322
While large language models (LLMs) have proven to be effective on a large variety of tasks, they are also known to hallucinate information. To measure whether an LLM prefers factually consistent continuations of its input, we propose a new benchmark called FIB (Factual Inconsistency Benchmark) that focuses on the task of summarization. Specifically, our benchmark involves comparing the scores an LLM assigns to a factually consistent versus a factually inconsistent summary for an input news article. For factually consistent summaries, we use human-written reference summaries that we manually verify as factually consistent. To generate summaries that are factually inconsistent, we generate summaries from a suite of summarization models that we have manually annotated as factually inconsistent. A model's factual consistency is then measured according to its accuracy, i.e. the proportion of documents where it assigns a higher score to the factually consistent summary. To validate the usefulness of FIB, we evaluate 23 large language models ranging from 1B to 176B parameters from six different model families including BLOOM and OPT. We find that existing LLMs generally assign a higher score to factually consistent summaries than to factually inconsistent summaries. However, if the factually inconsistent summaries occur verbatim in the document, then LLMs assign a higher score to these factually inconsistent summaries than to factually consistent summaries. We validate design choices in our benchmark including the scoring method and source of distractor summaries.
# Evaluating The Factual Consistency Of Large Language Models Through News Summarization Derek Tam Anisha Mascarenhas Shiyue Zhang ## Sarah Kwan Mohit Bansal Colin Raffel University of North Carolina at Chapel Hill {dtredsox,amascare,shiyue,mbansal,craffel}@cs.unc.edu ## Abstract While large language models (LLMs) have proven to be effective on a large variety of tasks, they are also known to hallucinate information. To measure whether an LLM prefers factually consistent continuations of its input, we propose a new benchmark called FIB (Factual Inconsistency Benchmark) that focuses on the task of summarization. Specifically, our benchmark involves comparing the scores an LLM assigns to a factually consistent versus a factually inconsistent summary for an input news article. For factually consistent summaries, we use human-written reference summaries that we manually verify as factually consistent. To generate summaries that are factually inconsistent, we generate summaries from a suite of summarization models that we have manually annotated as factually inconsistent. A model's factual consistency is then measured according to its accuracy, i.e. the proportion of documents where it assigns a higher score to the factually consistent summary. To validate the usefulness of FIB, we evaluate 23 large language models ranging from 1B to 176B parameters from six different model families including BLOOM and OPT. We find that existing LLMs generally assign a higher score to factually consistent summaries than to factually inconsistent summaries. However, if the factually inconsistent summaries occur verbatim in the document, then LLMs assign a higher score to these factually inconsistent summaries than factually consistent summaries. We validate design choices in our benchmark including the scoring method and source of distractor summaries. 1 ## 1 Introduction Factual inconsistency is a widespread problem in natural language generation tasks (Maynez et al., 2020; Weng et al., 2020; Devaraj et al., 2022). For text summarization in particular, it has been shown that models often hallucinate new information or 1We include our code in the supplementary ![0_image_0.png](0_image_0.png) generate content that contradicts the source document (Cao et al., 2018; Maynez et al., 2020). These works usually study supervised summarization models that are either trained from scratch or fine-tuned from a pre-trained language model (Wan and Bansal, 2022). Recently, however, NLP has experienced a paradigm shift towards using large language models (LLMs) rather than supervised models. LLMs are generally pre-trained on a large corpus of unstructured text and then applied to a task through instructive prompts. In light of this new paradigm, our goal is to evaluate the factual consistency of large language models using text summarization as a testbed. To achieve this goal, we propose FIB (the Factual Inconsistency Benchmark) to measure how often models prefer factually consistent summaries over factually inconsistent summaries. In FIB, models are given a document and are evaluated on whether they assign a higher score to a factually consistent summary than a factually inconsistent summary. Scores are assigned based on a model's assigned probability to the summary. We use accuracy on this binary classification task as a proxy for how factually consistent a model is. FIB consists of over 3,500 pairs of summaries that were all manually annotated as either factually consistent or factually inconsistent. 
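In code, the evaluation protocol above amounts to a pairwise score comparison over annotated summary pairs. The following is a minimal sketch, assuming some scoring function `score_fn` supplied for the evaluated LLM; the `FIBPair` container and function names are illustrative and not part of the released benchmark code.

```python
# Minimal sketch of FIB's pairwise evaluation protocol (illustrative only).
# `score_fn` stands in for whatever scoring function the evaluated LLM uses
# (e.g., length-normalized PMI); `pairs` is a list of annotated examples.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class FIBPair:
    document: str
    consistent_summary: str    # manually verified/edited "Gold" summary
    inconsistent_summary: str  # model-generated summary annotated as inconsistent

def fib_accuracy(pairs: List[FIBPair],
                 score_fn: Callable[[str, str], float]) -> float:
    """Fraction of pairs where the LLM scores the factually consistent
    summary above the factually inconsistent one."""
    wins = sum(
        score_fn(p.document, p.consistent_summary)
        > score_fn(p.document, p.inconsistent_summary)
        for p in pairs
    )
    return wins / len(pairs)
```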
The benchmark is based on documents and summaries from the XSum (Narayan et al., 2018b) and CNN/DM (Hermann et al., 2015) datasets to test behavior on abstractive and extractive summarization, respectively. For factually consistent summaries, we use reference summaries from the datasets that we verify are factually consistent or manually edit to make them factually consistent. The factually inconsistent summaries were generated from 22 models trained for summarization and then annotated as factually inconsistent. To explore the behavior of existing models on FIB, we evaluate 23 LLMs from 6 different model families including BLOOM, OPT, GPT, and T0 (Radford et al., 2019; Zhang et al., 2022b; Sanh et al., 2022; Chung et al., 2022; Lester et al., 2021; Scao et al., 2022) ranging from 1B to 176B parameters. Next, we analyze whether the method used to generate the factually inconsistent summaries affects how often models prefers factually consistent summaries over factually inconsistent summaries. To do so, we evaluate these models on factually inconsistent summaries from three additional sources: (1) unedited reference summaries that we annotated as factually inconsistent, (2) summaries edited via FactCC (Kryscinski et al., 2020), and (3) summaries produced by MFMA (Lee et al., 2022). In addition, we test 4 different scoring functions: conditional log-likelihood (LL), length-normalized LL, pointwise mutual information (PMI), and lengthnormalized PMI. Overall, we find that: (1) The LLMs we consider typically assign a higher score to factually consistent summaries than to factually inconsistent summaries (e.g. 72.4% of the time for BLOOM (Scao et al., 2022)), but (2) LLMs rarely prefer factually consistent summaries over factually inconsistent summaries copied verbatim from the document (e.g. 9.6% of the time for BLOOM), (3) LLMs generally become more factually consistent as they are scaled up, and (4) FactCC-generated factually inconsistent summaries can fool some LLMs at a similar rate to model-generated factually inconsistent summaries. In summary, our contributions are: (1) a benchmarking procedure and collection of annotated summaries for probing the factual consistency of LLMs and (2) a thorough evaluation of 23 LLMs from 6 different model families of up to 176B parameters. We hope FIB and our results help shed light on the factuality of LLMs. ## 2 Related Work 2.1 Factuality Evaluation Datasets In the literature on text summarization, many datasets with human-labeled factually consistent and inconsistent summaries have been introduced for meta-evaluation purposes (i.e., evaluating factuality evaluation metrics) or for training the metrics themselves. Pagnoni et al. (2021) introduced the FRANK benchmark that contains 2250 modelgenerated summaries with factuality labels for each summary sentence. Similarly, Gabriel et al. (2021) proposed the GO FIGURE meta-evaluation framework that has 1500 model-generated summaries that include factuality labels. Besides these two benchmarks, many other works collected their own small-scale factuality evaluation datasets for evaluating their proposed metrics or analyzing the factuality of summarization models (Falke et al., 2019; Maynez et al., 2020; Kryscinski et al., 2020; Wang et al., 2020a; Durmus et al., 2020; Lux et al., 2020). Ribeiro et al. (2022) combined labeled datasets from four works and formed the FactCollect dataset with more than 9000 summary sentences and their factuality labels. 
Additionally, a few other works proposed to automatically obtain factually inconsistent summaries by perturbing the reference summaries (Kryscinski et al., 2020; Lee et al., 2022), e.g., entity swapping. However, Goyal and Durrett (2021) showed that these automatic techniques target inherently different error distributions than those seen in actual model generations. Goyal and Durrett (2020) considered model outputs at the top of beam search as factual and bottom generations as non-factual. The aforementioned works mainly focus on abstractive summarization; in contrast, Zhang et al. (2022a) introduced a factuality evaluation dataset for extractive summarization which we use as part of FIB. Previous datasets do not annotate reference summaries and instead only annotate model generations as factually consistent or factually inconsistent. However, the reference summaries are not always factually consistent (Maynez et al., 2020; Bommasani and Cardie, 2020; Tejaswin et al., 2021) which means that some of the factually inconsistent summaries might not have any factually consistent summary to pair with. Hence, we perform a manual verification of reference summaries as factually consistent for FIB. Additionally, FIB aims to evaluate the factual consistency of LLMs themselves instead of meta-evaluating evaluation metrics. Besides summarization, Devaraj et al. (2022) proposed a factuality evaluation dataset for text simplification. In addition, some datasets have been introduced for checking a fact or claim against a large knowledge base (Thorne et al., 2018; Augenstein et al., 2019); here, we instead focus on factual consistency of conditional model continuations. ## 2.2 Factuality Evaluation Metrics Many metrics have been proposed to evaluate the factual consistency of model-generated summaries. These metrics can be roughly categorized into entailment-based metrics and questiongeneration/answering (QA/QG)-based metrics. Entailment-based metrics check whether each summary sentence (or a more fine-grained subsentence) is entailed by the source document (Falke et al., 2019; Kryscinski et al., 2020; Goyal and Durrett, 2020; Maynez et al., 2020). QA/QG-based metrics are designed based on the idea that a question should have the same answer whether it is based on the summary or the document (Wang et al., 2020a; Durmus et al., 2020; Scialom et al., 2021). Relatedly, Goodrich et al. (2019) evaluated facutality by checking factual tuples extracted by OpenIE and Ribeiro et al. (2022) used the AMR graphs of the summary and the document for assessing factual consistency. All these metrics were designed to evaluate models trained specifically for summarization. In this work, we focus more broadly on evaluating the factual consistency of LLMs. ## 3 Fib**: Factual Inconsistency Benchmark** Each example in FIB consists of a document and two summaries: a factually consistent summary and a factually inconsistent summary. Models are evaluated based on the proportion of times they assign a higher score to a factually consistent summary than to a factually inconsistent summary. We define a factually consistent summary as a summary whose contents can be inferred solely from the document. This means that even if a summary contains true information, if the information is not found in the document, then the summary is factually inconsistent. For example, the Gold summary in fig. 
1 is factually consistent as it is written, but if we swapped *Peveril Point* with *a cliff*, then it would no longer be factually consistent, even if *Peveril* Point is technically *a cliff*, since this fact cannot be inferred from the document. We compare the factual consistency of models on both extractive and abstractive summaries. Extractive summaries occur verbatim in the document while abstractive summaries do not. We use two summarization datasets as our testbed: CNN/DM (See et al., 2017; Hermann et al., 2015) for extractive summaries and XSum (Narayan et al., 2018a) for abstractive summaries. CNN/DM consists of English documents about the news from CNN/Daily Mail and summaries that are several sentences long with 287K/13K/11K examples for train/val/test.2 XSum consists of English documents about the news from BBC and short summaries with 204K/11K/11K examples for train/val/test.3The CNN/DM dataset is distributed under an Apache 2.0 license and XSum is under a Creative Commons Attribution 4.0 International license. Our use is consistent with the intended use and we release our code under an Apache 2.0 license and the data for FIB under a Creative Commons Attribution 4.0 International license. ## 3.1 Dataset Construction We describe how we construct the factually consistent and factually inconsistent summaries for FIB. When performing annotations, each summary was annotated by two annotators. Four of the authors performed the annotations. Our inter-annotator agreement was 91.3%. Whenever there was a disagreement on a given summary, the two annotators would discuss and resolve the disagreement. See appendix A for annotator instructions. Factually Consistent Summaries. Though the summarization datasets we consider include reference summaries, the reference summaries are not necessarily factually consistent with the document (Maynez et al., 2020). To account for this, we annotate reference summaries for 500 and 100 documents from XSum and CNN/DM respectively 2https://huggingface.co/datasets/cnn_ dailymail 3https://huggingface.co/datasets/xsum as either factually consistent or factually inconsistent. Then, we edit the factually inconsistent reference summaries to be factually consistent using minimal edits. Factually inconsistent reference summaries usually contain information that is true but not found in the document. Thus, most edits involve removing or changing certain keywords or phrases not present in the document. Two annotators then verified the edited summary was factually consistent. The percentage of factually consistent summaries that were edited from the original reference summary was roughly 90% for XSum and 30% for CNN/DM. We denote these annotated factually consistent reference summaries as *Gold* summaries. See appendix B for some examples of edited summaries. Factually Inconsistent Summaries. To obtain factually inconsistent summaries, we generate summaries from models trained on a given summarization dataset and annotate the generated summaries as factually consistent or factually inconsistent. We then retain the model-generated summaries that were annotated as factually inconsistent. We use 15 extractive models to generate summaries for CNN/DM and 7 generative models to generate summaries for XSum. See appendix D for the list of models used to generate the summaries. For XSum, we annotate the model-generated summaries ourselves and for CNN/DM we source the factualconsistency annotations from Zhang et al. (2022a). 
See appendix C for some examples of factually inconsistent model-extracted summaries. For the dataset underlying our benchmark, we create a paired example for every possible factually inconsistent summary with the Gold summary for a given document. In the end, we have 3,124 factually consistent/inconsistent summary pairs across 500 unique documents for XSum and 457 pairs across 96 unique documents for CNN/DM (4 CNN/DM documents were dropped since all the models generated factually consistent summaries for them). A model's accuracy on FIB is then simply the proportion of summary pairs where the model assigns a higher score to the Gold summary than to the factually inconsistent summary.

## 3.2 Scoring Function

For FIB, we are primarily interested in a scoring function that measures the consistency of the summary and the document. A natural scoring function is the model's assigned log-likelihood (LL) of the summary given the document, but LL has two major issues. First, the log-likelihood has a bias towards shorter summaries since the probability of each token in a summary is multiplied together to obtain the log-likelihood of the entire summary, and thus shorter summaries tend to produce higher log-likelihoods. Second, if the summary alone has a high likelihood, then the model might assign a high likelihood to the summary even if the summary and the document are not closely related. To address the first issue, we normalize by the length of the summary. To address the second issue, we use the pointwise mutual information (PMI), which accounts for the likelihood of the summary by subtracting the log-likelihood of the summary alone from the log-likelihood of the summary conditioned on the document. Several recent works have used PMI as a way of scoring a language model's generations: Holtzman et al. (2021) used PMI to solve multiple-choice tasks that probe for knowledge using GPT3, and Padmakumar and He (2021) used PMI for unsupervised extractive summarization. Concurrently, van der Poel et al. (2022) show that optimizing for PMI during decoding can decrease hallucinations in language models. To address both these issues, we use the length-normalized PMI as our default scoring function, where the length normalization is performed by averaging over tokens. Specifically, given document d and summary s which consists of T tokens {s1, s2, ..., sT}, the length-normalized PMI is defined as

$$\frac{1}{T}\sum_{t=1}^{T}\log P(s_{t}|d,s_{1},...,s_{t-1})-\frac{1}{T}\sum_{t=1}^{T}\log P(s_{t}|s_{1},...,s_{t-1})$$

We ablate the impact of using different scoring functions in section 4.4.

## 4 Experiments

Having defined our benchmark, we now evaluate the factual consistency of various LLMs and compare several methods for generating alternative summaries and assigning scores to LM generations.
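As a concrete illustration of the scoring function described in Section 3.2, the sketch below shows how the length-normalized PMI could be computed with a causal language model from the HuggingFace `transformers` library. The checkpoint (`gpt2`), prompt template, truncation settings, and helper names are illustrative stand-ins and do not reproduce the paper's exact prompts or preprocessing.

```python
# Rough sketch of length-normalized PMI scoring with a causal LM (illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")           # stand-in checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def avg_token_logprob(context: str, summary: str) -> float:
    """Average log P(summary token | context + preceding summary tokens)."""
    # truncate long documents (the paper mentions a 512-token maximum sequence length)
    ctx_ids = tokenizer(context, return_tensors="pt",
                        truncation=True, max_length=512).input_ids
    sum_ids = tokenizer(summary, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, sum_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # log-probs at each position for predicting the *next* token
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = input_ids[:, 1:]
    token_lp = logprobs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # keep only the positions whose target is a summary token
    summary_lp = token_lp[:, ctx_ids.size(1) - 1:]
    return summary_lp.mean().item()

def length_normalized_pmi(document: str, summary: str) -> float:
    prompt = f"Document: {document}\nSummary:"   # illustrative prompt template
    # the "summary alone" term is approximated by conditioning on only a BOS token
    return (avg_token_logprob(prompt, summary)
            - avg_token_logprob(tokenizer.bos_token, summary))
```

Here the unconditional term is approximated by scoring the summary after only a beginning-of-sequence token, mirroring the "log-likelihood of the summary alone" in the definition above; the other scoring functions compared in Section 4.4 correspond to dropping the length normalization, the subtracted term, or both.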
## 4.1 Models We evaluate 23 large language models (1B to 176B parameters) from 6 different model families: ![4_image_0.png](4_image_0.png) - **GPT:** GPT2-XL (Radford et al., 2019), GPTNeo-1.3B, GPT-Neo-2.7B, GPT-NeoX-20B (Black et al., 2022) - **OPT:** OPT-1.3B, OPT-2.7B, OPT-6.7B, OPT13B, OPT-30B, OPT-66B, OPT-175B (Zhang et al., 2022b) - **BLOOM:** BLOOM-1.1B, BLOOM-1.7B, BLOOM-3B, BLOOM-7B, BLOOM (Scao et al., 2022) - T0: T0-3B, T0 (Sanh et al., 2022) - **FLAN-T5:** FLAN-T5-XL, FLAN-T5-XXL (Chung et al., 2022) - **T5-LM-Adapt:** T5-LM-Adapt-XL, T5-LMAdapt-XXL (Lester et al., 2021) Our chosen models consist of both zero-shot models that were not trained on XSum or CNN/DM (GPT, OPT, BLOOM, T5-LM-Adapt) and models that were trained on XSum and CNN/DM in a multi-task fashion (T0, FLAN-T5). For each model, we use the same 3 prompts and report the median performance across prompts, following Sanh et al. (2022). See appendix E for the prompt templates used. We use a maximum sequence length of 512, which was also applied when sampling 500 documents from XSUM for annotating factual consistency. We use Pytorch (Paszke et al., 2019) and HuggingFace (Wolf et al., 2020) to run the models, and use bitsandbytes (Dettmers et al., 2022) to do 8-bit inference for the larger models. All experiments were run on NVIDIA A6000s or 80GB NVIDIA A100s (depending on the model) and took about two days. ## 4.2 Main Results We show the performance of all the models on XSum and CNN/DM in fig. 2. On XSum, we highlight the following: - *Factual Consistency:* Models generally prefer Gold summaries over factually inconsistent model-generated summaries, but the average accuracy of any model is still far from 100%. - *Effect of Scale:* Performance generally increases slightly with scale within a given model family with the exception of T0, where the 11-billionparameter model underperforms T0-3B. For zeroshot LLMs, the performance is remarkably similar across model families. - *Effect of Training:* Both FLAN-T5 and T0 underperform the zero-shot models, which could be because they were trained on the XSum dataset, which had many reference summaries that were factually inconsistent. In contrast to our results on XSum, we find that models rarely assign a higher score to factually consistent reference summaries than to factually inconsistent model-extracted summaries on the CNN/DM dataset. However, if the factually consistent summary is also model-extracted, then models also assign higher scores to the factually consistent model-extracted summary. This suggests that all models have a strong preference for text copied from the input regardless of its factual-consistency. ## 4.3 Generating Alternative Summaries We also analyze the impact of the the method used to generate factually inconsistent summaries. To do so, we compare the model's performance when using different methods for generating the factually inconsistent summary. We note that Goyal and Durrett (2021) showed that these automatic techniques target inherently different error distributions than those seen in actual model generations. We experiment with the following alternative methods for obtaining factually inconsistent summaries: - MFMA, proposed by Lee et al. (2022), uses pretrained masked language models to generate factually inconsistent summaries. Specifically, summaries are generated by reconstructing the reference summary conditioned on the document and reference summary with α and β percent of the entities masked out respectively. 
The MFMA procedure first fine-tunes a pre-trained masked LM to reconstruct summaries in this setup and then uses the fine-tuned model to generate new summaries. For example, in fig. 1, if we masked out ![5_image_0.png](5_image_0.png) Peveril Point in the reference summary and the model generated *the grand canyon* instead, then the factually-inconsistent MFMA-generated summary would be A middle-aged woman has been driven by ambulance to a park after falling from the grand canyon. We follow the setup in MFMA and use T5-base (Raffel et al., 2020) and BARTbase (Lewis et al., 2020a) to generate the summaries with α = 0.8 and β = 0.6. Since there is no guarantee that the model-reconstructed summaries are factually inconsistent, we annotate their factual-consistency and only keep the ones that are factually inconsistent. We construct factually inconsistent summaries from MFMA by combining all factually inconsistent summaries generated by T5-base and BART-base. - FactCC, proposed by Kryscinski et al. (2020), generates factually inconsistent summaries via heuristic perturbations to reference summaries. FactCC uses two ways to perturb the reference summary: entity swapping and sentence negation. Entity swapping replaces an entity (i.e. pronouns, dates, numbers and named entities) in the reference summary with a different entity from the document and sentence negation refers to negating a verb. For example, in fig. 1, if we negated has to *hasn't*, then the factuallyinconsistent FactCC-generated summary would be *A middle-aged woman hasn't been airlifted to* a park after falling from Peveril Point. - FIR (factually inconsistent reference) summaries. Since some of the original reference summaries were factually inconsistent and had to be edited to become factually consistent, we use these original reference summaries as an alternative source of factually inconsistent summaries. As an additional baseline, we consider using factually consistent model-generated summaries rather than a factually inconsistent summary as the alternative summary. This allows us to test whether models prefer model-generated summaries over Gold summaries. We call this setup of where the alternative choice is a factually consistent modelgenerated summaries FCMG (Factually-Consistent Model-Generated summaries). A comparison of different methods for generating alternative summaries is shown in fig. 3. We only plot results for BLOOM and T0 since the results for other decoder-only zero-shot LLMs are similar to those for BLOOM and the results for FLAN-T5 are similar to T0. We highlight the following trends: - *Preference for factually consistent modelgenerated summaries depends on whether summaries are extractive:* On XSum, models are almost at chance when distinguishing between factually consistent model-generated summaries and Gold summaries. This is evident from the accuracy on FCMG being around 50%. However, on CNN/DM, models consistently prefer factually consistent model-extracted summaries to Gold summaries. We conclude that models prefer model-extracted summaries that occur verbatim in the document, regardless of their factual consistency. ![6_image_0.png](6_image_0.png) - *MFMA's Ineffectiveness:* On both XSum and CNN/DM, models rarely assign MFMAgenerated summaries a higher score than Gold summaries - the accuracy on MFMA is between 85% to 100% across all models. 
- *FactCC's Effectiveness for zero-shot LLMs:* On XSum, BLOOM's performance is similar when either FactCC or model-generated factually inconsistent summaries are used as an alternative, and on CNN/DM, performance is similar for FactCC and factually inconsistent reference summaries. This suggests that FactCC generates somewhat plausible factually inconsistent summaries for zero-shot decoder-only LLMs. - *FactCC's Effectiveness for other models:* However, T0, FLAN-T5, and T5-LM-Adapt (see appendix H for FLAN-T5 and T5-LM-Adapt accuracies) all perform better when using FactCCgenerated factually inconsistent summaries than when using model-generated factually inconsistent summaries. This indicates FactCC might not be effective in generating plausible factually inconsistent summaries across all model architectures and training schemes. - *Preference for Edited Summaries:* On XSum and CNN/DM, models tend to prefer factually consistent reference summaries over factually inconsistent reference summaries. This is evident from the accuracy on FIR being around 80% and indicates that models tend to prefer factually consistent summaries over factually inconsistent summaries. ## 4.4 Scoring Function In FIB, we use the length-normalized PMI as the scoring function. To validate this choice, we compare various alternative scoring functions: standard log-likelihood, length-normalized log-likelihood, and the non-length-normalized PMI. We show results for BLOOM, OPT-175B and T0 on XSum and CNN/DM using different scoring methods in fig. 4. In general we see that the average PMI enables models to best distinguish between factually consistent and factually inconsistent summaries. We also compare each scoring function on the alternate sources of factually inconsistent summaries; see appendix F for detailed results. We find that log-likelihood works best when the factually inconsistent summary was produced by FactCC or is a model generation on CNN/DM. We hypothesize that log-likelihood works better than lengthnormalized PMI on FactCC because the generated summaries are often non-fluent and therefore are assigned a low likelihood regardless of their factual consistency. For model-extracted summaries on CNN/DM, we hypothesize that log-likelihood works better than length-normalized PMI because log-likelihood is not as biased towards summaries extracted from the document as PMI is. ## 5 Analysis To get a better sense of what kind of factually inconsistent model-generated summaries tend to fool models into assigning a higher score than the Gold summary, we show some examples for BLOOM in table 1. These factually inconsistent summaries consist of extrinsic hallucinations that | Document | Factually Consistent | Factually Inconsistent | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------|--------------------------| | Summary | Summary | | | The $5m (3.2m) prize is supposed to be awarded each year to an elected leader who governed well, raised living standards and then left office. This is the fourth time in five years there has been no winner ... 
Sudan-born telecoms entrepreneur Mr Ibrahim launched the prize in an attempt to encourage African leaders to leave power peacefully | The prize from Ibrahim for | | | good governance in Africa has gone unclaimed yet again. | The winner of the prestigious Africa Leadership Prize has been announced by the African Union's executive committee. | | | The character with a huge papier mache head ... | | | | Hundreds of people attended an unveiling ceremony earlier, many in fancy dress for the occasion. Neil Taylor, who helped raise the donations for the statue, said its installation would mean that Frank will gaze ¨ on the Timperley sunset forever ¨ ... Frank Sidebottom created a whole ... | A statue of the character Frank Sidebottom has been unveiled in Timperley. | A statue of Timperley's | | character Frank Sidebottom has been unveiled at a Manchester museum. | | | ![7_image_0.png](7_image_0.png) add new information rather than intrinsic hallucinations that manipulate the information in the document (Maynez et al., 2020). In addition, these factually inconsistent summaries contain information that is actually false, not just information absent from the document. ## 5.1 Factual Consistency Of Models Used To Generate Summaries We take the models used to generate the factually inconsistent summaries for XSum and evaluate them against each other using the same procedure as in FIB. Specifically, we use factually inconsistent summaries produced by a "generating model" and measure how often an "evaluated model" assigns a higher score to the Gold summary than it does to the factually inconsistent model-generated summaries. The result is summarized in fig. 5, with full results in appendix K. The accuracies down the diagonal are the lowest, which means models perform poorly when scoring their own factually inconsistent summary. This is expected since models should give high scores to factually inconsistent summaries they generate. In most cases, Gold summaries are preferred less than 50% of the time, suggesting that summarization models tend to assign higher scores to model-generated factually inconsistent summaries. However, certain models (BLOOM and T5-large) almost always produce summaries that are assigned low scores by the other models. We leave exploration of this trend to future work. ## 6 Conclusion And Takeaways We present FIB, a new benchmark for evaluating the factual consistency of language models, and evaluate 23 large language models on FIB. Our takeaways are: (1) LLMs tend to assign higher scores to factually consistent summaries than to factually inconsistent summaries, except that LLMs almost always assign higher scores to extracted summaries even if they are factually inconsistent and (2) length-normalized PMI enables models to most effectively detect factually inconsistent summaries. Our results open new avenues for future work, including a more fine-grained study on the type of factually inconsistent errors different LLMs make and investigating the effect training on summarization has on the factual consistency of LLMs. ## 7 Limitations One limitation with FIB is that it only measures the factual consistency of language models for the task of summarization, and specifically news summarization. It is not clear how well the results will generalize, for example, to other domains such as scientific article or other tasks such as question answering. ## Acknowledgements This work was supported by NSF-AI Engage Institute DRL-2112635. 
## References Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Christian Hansen, and Jakob Grue Simonsen. 2019. MultiFC: A real-world multi-domain dataset for evidencebased fact checking of claims. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4685–4697, Hong Kong, China. Association for Computational Linguistics. Sidney Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, Usvsn Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. 2022. GPT-NeoX-20B: An opensource autoregressive language model. In Proceedings of BigScience Episode \#5 - Workshop on Challenges & Perspectives in Creating Large Language Models, pages 95–136, virtual+Dublin. Association for Computational Linguistics. Rishi Bommasani and Claire Cardie. 2020. Intrinsic evaluation of summarization datasets. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 8075–8096, Online. Association for Computational Linguistics. Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018. Faithful to the original: Fact aware neural abstractive summarization. *Proceedings of the AAAI Conference* on Artificial Intelligence, 32(1). Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–686, Melbourne, Australia. Association for Computational Linguistics. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. 2022. Llm.int8(): 8-bit matrix multiplication for transformers at scale. *arXiv preprint* arXiv:2208.07339. Ashwin Devaraj, William Sheffield, Byron Wallace, and Junyi Jessy Li. 2022. Evaluating factuality in text simplification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7331–7345, Dublin, Ireland. Association for Computational Linguistics. Yue Dong, Yikang Shen, Eric Crawford, Herke van Hoof, and Jackie Chi Kit Cheung. 2018. BanditSum: Extractive summarization as a contextual bandit. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 3739–3748, Brussels, Belgium. Association for Computational Linguistics. Esin Durmus, He He, and Mona Diab. 2020. FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5055– 5070, Online. Association for Computational Linguistics. Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 2214–2220, Florence, Italy. Association for Computational Linguistics. 
Saadia Gabriel, Asli Celikyilmaz, Rahul Jha, Yejin Choi, and Jianfeng Gao. 2021. GO FIGURE: A meta evaluation of factuality in summarization. In *Findings of* the Association for Computational Linguistics: ACLIJCNLP 2021, pages 478–487, Online. Association for Computational Linguistics. Ben Goodrich, Vinay Rao, Peter J. Liu, and Mohammad Saleh. 2019. Assessing the factual accuracy of generated text. In *Proceedings of the 25th ACM SIGKDD* International Conference on Knowledge Discovery & Data Mining. Tanya Goyal and Greg Durrett. 2020. Evaluating factuality in generation with dependency-level entailment. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 3592–3603, Online. Association for Computational Linguistics. Tanya Goyal and Greg Durrett. 2021. Annotating and modeling fine-grained factuality in summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1449–1462, Online. Association for Computational Linguistics. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in neural information processing systems. Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, and Luke Zettlemoyer. 2021. Surface form competition: Why the highest probability answer isn't always right. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7038–7051, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332–9346, Online. Association for Computational Linguistics. Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, and Kyomin Jung. 2022. Masked summarization to generate factually inconsistent summaries for improved factual consistency checking. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 1019–1030, Seattle, United States. Association for Computational Linguistics. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020b. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. 
Association for Computational Linguistics. Klaus-Michael Lux, Maya Sappelli, and Martha Larson. 2020. Truth or error? towards systematic analysis of factual errors in abstractive summaries. In *Proceedings of the First Workshop on Evaluation and* Comparison of NLP Systems, pages 1–10, Online. Association for Computational Linguistics. Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906–1919, Online. Association for Computational Linguistics. Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In *Proceedings of the 2004 Conference on Empirical Methods in Natural Language* Processing, pages 404–411, Barcelona, Spain. Association for Computational Linguistics. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018a. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics. Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018b. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, pages 1797–1807. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018c. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1747–1759, New Orleans, Louisiana. Association for Computational Linguistics. Vishakh Padmakumar and He He. 2021. Unsupervised extractive summarization using pointwise mutual information. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2505–2512, Online. Association for Computational Linguistics. Artidoro Pagnoni, Vidhisha Balachandran, and Yulia Tsvetkov. 2021. Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 4812–4829, Online. Association for Computational Linguistics. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. *Advances in* neural information processing systems, 32. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Teaching machines to read and comprehend. In *OpenAI Blog*. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Leonardo Ribeiro, Mengwen Liu, Iryna Gurevych, Markus Dreyer, and Mohit Bansal. 2022. FactGraph: Evaluating factuality in summarization with semantic graph representations. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3238–3253, Seattle, United States. Association for Computational Linguistics. Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2022. Multitask prompted training enables zeroshot task generalization. *International Conference* on Learning Representations (ICLR). Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman ´ Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, Dragomir Radev, Eduardo González Ponferrada, Efrat Levkovizh, Ethan Kim, Eyal Bar Natan, Francesco De Toni, Gérard Dupont, Germán Kruszewski, Giada Pistilli, Hady Elsahar, Hamza Benyamina, Hieu Tran, Ian Yu, Idris Abdulmumin, Isaac Johnson, Itziar Gonzalez-Dios, Javier de la Rosa, Jenny Chim, Jesse Dodge, Jian Zhu, Jonathan Chang, Jörg Frohberg, Joseph Tobing, Joydeep Bhattacharjee, Khalid Almubarak, Kimbo Chen, Kyle Lo, Leandro Von Werra, Leon Weber, Long Phan, Loubna Ben allal, Ludovic Tanguy, Manan Dey, Manuel Romero Muñoz, Maraim Masoud, María Grandury, Mario Šaško, Max Huang, Maximin Coavoux, Mayank Singh, Mike Tian-Jian Jiang, Minh Chien Vu, Mohammad A. Jauhar, Mustafa Ghaleb, Nishant Subramani, Nora Kassner, Nurulaqilla Khamis, Olivier Nguyen, Omar Espejel, Ona de Gibert, Paulo Villegas, Peter Henderson, Pierre Colombo, Priscilla Amuok, Quentin Lhoest, Rheza Harliman, Rishi Bommasani, Roberto Luis López, Rui Ribeiro, Salomey Osei, Sampo Pyysalo, Sebastian Nagel, Shamik Bose, Shamsuddeen Hassan Muhammad, Shanya Sharma, Shayne Longpre, Somaieh Nikpoor, Stanislav Silberberg, Suhas Pai, Sydney Zink, Tiago Timponi Torrent, Timo Schick, Tristan Thrush, Valentin Danchev, Vassilina Nikoulina, Veronika Laippala, Violette Lepercq, Vrinda Prabhu, Zaid Alyafeai, Zeerak Talat, Arun Raja, Benjamin Heinzerling, Chenglei Si, Elizabeth Salesky, Sabrina J. Mielke, Wilson Y. Lee, Abheesht Sharma, Andrea Santilli, Antoine Chaffin, Arnaud Stiegler, Debajyoti Datta, Eliza Szczechla, Gunjan Chhablani, Han Wang, Harshit Pandey, Hendrik Strobelt, Jason Alan Fries, Jos Rozen, Leo Gao, Lintang Sutawika, M Saiful Bari, Maged S. Al-shaibani, Matteo Manica, Nihal Nayak, Ryan Teehan, Samuel Albanie, Sheng Shen, Srulik Ben-David, Stephen H. 
Bach, Taewoon Kim, Tali Bers, Thibault Fevry, Trishala Neeraj, Urmish Thakker, Vikas Raunak, Xiangru Tang, ZhengXin Yong, Zhiqing Sun, Shaked Brody, Yallow Uri, Hadar Tojarieh, Adam Roberts, Hyung Won Chung, Jaesung Tae, Jason Phang, Ofir Press, Conglong Li, Deepak Narayanan, Hatim Bourfoune, Jared Casper, Jeff Rasley, Max Ryabinin, Mayank Mishra, Minjia Zhang, Mohammad Shoeybi, Myriam Peyrounette, Nicolas Patry, Nouamane Tazi, Omar Sanseviero, Patrick von Platen, Pierre Cornette, Pierre François Lavallée, Rémi Lacroix, Samyam Rajbhandari, Sanchit Gandhi, Shaden Smith, Stéphane Requena, Suraj Patil, Tim Dettmers, Ahmed Baruwa, Amanpreet Singh, Anastasia Cheveleva, Anne-Laure Ligozat, Arjun Subramonian, Aurélie Névéol, Charles Lovering, Dan Garrette, Deepak Tunuguntla, Ehud Reiter, Ekaterina Taktasheva, Ekaterina Voloshina, Eli Bogdanov, Genta Indra Winata, Hailey Schoelkopf, JanChristoph Kalo, Jekaterina Novikova, Jessica Zosa Forde, Jordan Clive, Jungo Kasai, Ken Kawamura, Liam Hazan, Marine Carpuat, Miruna Clinciu, Najoung Kim, Newton Cheng, Oleg Serikov, Omer Antverg, Oskar van der Wal, Rui Zhang, Ruochen Zhang, Sebastian Gehrmann, Shani Pais, Tatiana Shavrina, Thomas Scialom, Tian Yun, Tomasz Limisiewicz, Verena Rieser, Vitaly Protasov, Vladislav Mikhailov, Yada Pruksachatkun, Yonatan Belinkov, Zachary Bamberger, Zdenek Kasner, Alice Rueda, ˇ Amanda Pestana, Amir Feizpour, Ammar Khan, Amy Faranak, Ana Santos, Anthony Hevia, Antigona Unldreaj, Arash Aghagol, Arezoo Abdollahi, Aycha Tammour, Azadeh HajiHosseini, Bahareh Behroozi, Benjamin Ajibade, Bharat Saxena, Carlos Muñoz Ferrandis, Danish Contractor, David Lansky, Davis David, Douwe Kiela, Duong A. Nguyen, Edward Tan, Emi Baylor, Ezinwanne Ozoani, Fatima Mirza, Frankline Ononiwu, Habib Rezanejad, Hessie Jones, Indrani Bhattacharya, Irene Solaiman, Irina Sedenko, Isar Nejadgholi, Jesse Passmore, Josh Seltzer, Julio Bonis Sanz, Karen Fort, Livia Dutra, Mairon Samagaio, Maraim Elbadri, Margot Mieskes, Marissa Gerchick, Martha Akinlolu, Michael McKenna, Mike Qiu, Muhammed Ghauri, Mykola Burynok, Nafis Abrar, Nazneen Rajani, Nour Elkott, Nour Fahmy, Olanrewaju Samuel, Ran An, Rasmus Kromann, Ryan Hao, Samira Alizadeh, Sarmad Shubber, Silas Wang, Sourav Roy, Sylvain Viguier, Thanh Le, Tobi Oyebade, Trieu Le, Yoyo Yang, Zach Nguyen, Abhinav Ramesh Kashyap, Alfredo Palasciano, Alison Callahan, Anima Shukla, Antonio MirandaEscalada, Ayush Singh, Benjamin Beilharz, Bo Wang, Caio Brito, Chenxi Zhou, Chirag Jain, Chuxin Xu, Clémentine Fourrier, Daniel León Periñán, Daniel Molano, Dian Yu, Enrique Manjavacas, Fabio Barth, Florian Fuhrimann, Gabriel Altay, Giyaseddin Bayrak, Gully Burns, Helena U. 
Vrabec, Imane Bello, Ishani Dash, Jihyun Kang, John Giorgi, Jonas Golde, Jose David Posada, Karthik Rangasai Sivaraman, Lokesh Bulchandani, Lu Liu, Luisa Shinzato, Madeleine Hahn de Bykhovetz, Maiko Takeuchi, Marc Pàmies, Maria A Castillo, Marianna Nezhurina, Mario Sänger, Matthias Samwald, Michael Cullan, Michael Weinberg, Michiel De Wolf, Mina Mihaljcic, Minna Liu, Moritz Freidank, Myungsun Kang, Natasha Seelam, Nathan Dahlberg, Nicholas Michio Broad, Nikolaus Muellner, Pascale Fung, Patrick Haller, Ramya Chandrasekhar, Renata Eisenberg, Robert Martin, Rodrigo Canalli, Rosaline Su, Ruisi Su, Samuel Cahyawijaya, Samuele Garda, Shlok S Deshmukh, Shubhanshu Mishra, Sid Kiblawi, Simon Ott, Sinee Sang-aroonsiri, Srishti Kumar, Stefan Schweter, Sushil Bharati, Tanmay Laud, Théo Gigant, Tomoya Kainuma, Wojciech Kusa, Yanis Labrak, Yash Shailesh Bajaj, Yash Venkatraman, Yifan Xu, Yingxin Xu, Yu Xu, Zhe Tan, Zhongli Xie, Zifan Ye, Mathilde Bras, Younes Belkada, and Thomas Wolf. 2022. Bloom: A 176b-parameter open-access multilingual language model. Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Alex Wang, and Patrick Gallinari. 2021. QuestEval: Summarization asks for fact-based evaluation. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6594–6604, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In *Proceedings of the 55th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Association for Computational Linguistics. Priyam Tejaswin, Dhruv Naik, and Pengfei Liu. 2021. How well do you know your summarization datasets? In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3436–3449, Online. Association for Computational Linguistics. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In *Proceedings of the 2018* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809–819, New Orleans, Louisiana. Association for Computational Linguistics. Liam van der Poel, Ryan Cotterell, and Clara Meister. 2022. Mutual information alleviates hallucinations in abstractive summarization. *arXiv preprint* arXiv:2210.13210. David Wan and Mohit Bansal. 2022. FactPEGASUS: Factuality-aware pre-training and fine-tuning for abstractive summarization. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1010–1028, Seattle, United States. Association for Computational Linguistics. Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020a. Asking and answering questions to evaluate the factual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5008–5020, Online. Association for Computational Linguistics. Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, and Xuanjing Huang. 2020b. Heterogeneous graph neural networks for extractive document summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6209–6219, Online. Association for Computational Linguistics. 
Rongxiang Weng, Heng Yu, Xiangpeng Wei, and Weihua Luo. 2020. Towards enhancing faithfulness for neural machine translation. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 2675–2684, Online. Association for Computational Linguistics.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations*, pages 38–45, Online. Association for Computational Linguistics.

Jiacheng Xu, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Discourse-aware neural extractive text summarization. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 5021–5031, Online. Association for Computational Linguistics.

Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In *International Conference on Machine Learning*, pages 11328–11339. PMLR.

Shiyue Zhang, David Wan, and Mohit Bansal. 2022a. Extractive is not faithful: An investigation of broad unfaithfulness problems in extractive summarization. *arXiv preprint arXiv:2209.03549*.

Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022b. OPT: Open pre-trained transformer language models. *arXiv preprint arXiv:2205.01068*.

Hao Zheng and Mirella Lapata. 2019. Sentence centrality revisited for unsupervised summarization. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 6236–6247, Florence, Italy. Association for Computational Linguistics.

Ming Zhong, Pengfei Liu, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2019. Searching for effective neural extractive summarization: What works and what's next. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 1049–1058, Florence, Italy. Association for Computational Linguistics.

Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2020. Extractive summarization as text matching. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 6197–6208, Online. Association for Computational Linguistics.

Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pages 654–663, Melbourne, Australia. Association for Computational Linguistics.
## A Annotation Instructions

The annotators were instructed to mark a summary as factually inconsistent if any information in the summary was not implied in the document. We assume no access to external knowledge, so the summary has to be implied solely from the document. External knowledge is broadly defined as any knowledge that cannot be inferred from common sense alone. For example, the capital of a country or the rules of a sport would be external knowledge.

## B Sample Edited Summaries

We show some examples of documents with the original factually inconsistent reference summary and the edited factually consistent summary on XSum in Table 2.

## C Sample Model-Extracted Factually Inconsistent Summaries

We show some examples of documents with model-extracted factually inconsistent summaries on CNN/DM in Table 3.

## D Models Used To Generate Summaries

We use the following models to generate summaries for XSum and include the respective HuggingFace model name:

- BART-base (Lewis et al., 2020b) - VictorSanh/bart-base-finetuned-xsum
- BART-large (Lewis et al., 2020b) - facebook/bart-large-xsum
- distil-BART (Lewis et al., 2020b) - sshleifer/distilbart-xsum-12-6
- PEGASUS (Zhang et al., 2020) - google/pegasus-xsum
- distil-PEGASUS (Zhang et al., 2020) - sshleifer/distill-pegasus-xsum-16-8
- BLOOM-560m (Scao et al., 2022) - mrm8488/bloom-560m-finetuned-newssummarization-xsum
- T5-large (Raffel et al., 2020) - sysresearch101/t5-large-finetuned-xsum

We use greedy decoding for all models with a maximum generation length of 50 tokens; a minimal sketch of this setup is given after Table 2.

We use the following models to generate summaries for CNN/DM. See Zhang et al. (2022a) for more description of the models.

- Oracle (Lin, 2004)
- Oracle (discourse) (Xu et al., 2020)
- Lead3
- RNN Ext RL (Chen and Bansal, 2018)
- BanditSumm (Dong et al., 2018)
- NeuSumm (Zhou et al., 2018)
- Refresh (Narayan et al., 2018c)
- BERT+LSTM+PN+RL (Zhong et al., 2019)
- MatchSumm (Zhong et al., 2020)
- HeterGraph (Wang et al., 2020b)
- Textrank (Mihalcea and Tarau, 2004)
- Textrank (ST) (Reimers and Gurevych, 2019)
- PacSum (tfidf) (Zheng and Lapata, 2019)
- PacSum (bert)
- MI-unsup (Padmakumar and He, 2021)

| Document | Original Ref. Summary | Edited Ref. Summary |
|---|---|---|
| West Midlands Ambulance Service said the car was discovered on Sunday at 09:35 GMT by two cyclists in Crakemarsh near Uttoxeter, Staffordshire. A spokesman said the black Ford Fiesta appeared to have hit a tree in very foggy conditions on the B5030. The girl, in the back of the car, was treated at hospital for minor injuries. The man, who was 25 and from the local area, has not yet been named ... | A five-year-old girl has been found with her dead father in a crashed car which had been in a ditch "for some time". | A girl has been found in a crashed car. |
| Aiden Webb, 22, from Norwich, was climbing Fansipan mountain alone on Friday when he fell down a ravine and lost his way ... in the fall on the 3,100m (10,300ft) high Fansipan mountain in the north of Vietnam ... A Foreign and Commonwealth Office spokeswoman said: "We are supporting the family of Aiden Webb, a British man reported missing in Vietnam. We are working closely with the local authorities leading the search." | A British man is missing in Vietnam after falling while attempting to climb the country's highest mountain. | A British man is missing in Vietnam after falling while attempting to climb a mountain. |

Table 2: These examples have id 34696511 and id 36459564 respectively.
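For concreteness, the snippet below sketches how one of the XSum checkpoints listed in Appendix D can be run with greedy decoding and a 50-token generation cap using the HuggingFace Transformers library (Wolf et al., 2020). It is an illustrative sketch rather than the exact script used for the experiments: the chosen checkpoint and the placeholder document are examples only, and the decoder-only BLOOM-560m checkpoint would be loaded with `AutoModelForCausalLM` instead of a seq2seq class.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# One of the XSum checkpoints listed above; the other seq2seq checkpoints
# (PEGASUS, distil-BART, T5-large, ...) are used in the same way.
model_name = "facebook/bart-large-xsum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

document = "West Midlands Ambulance Service said the car was discovered ..."  # placeholder article

inputs = tokenizer(document, return_tensors="pt", truncation=True)

# Greedy decoding (single beam, no sampling) capped at 50 generated tokens,
# mirroring the decoding setup described in Appendix D.
output_ids = model.generate(
    **inputs,
    do_sample=False,
    num_beams=1,
    max_new_tokens=50,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Note that `max_new_tokens` caps only the generated summary length; if the original setup instead capped the total sequence length, `max_length=50` would be the closer match.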
| Document | Model-Extracted Factually Inconsistent Summary | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | the california public utilities commission on thursday said it is ordering pacific gas & electric co. to pay a record 1.6 billion penalty ... 850 million will go to " gas transmission pipeline safety infrastructure improvements , " the commission said ... pg & e failed to uphold the public 's trust , " commission president michael picker said ... the company 's chief executive officer said ... " since the 2010 explosion of our natural gas transmission pipeline in san bruno , we have worked hard to do the right thing for the victims , their families and the community of san bruno , " tony earley said ... | ... 850 million will go to " gas transmission pipeline safety infrastructure improvements , " the commission said . " since the 2010 explosion of our natural gas transmission pipeline in san bruno , we have worked hard to do the right thing for the victims , their families and the community of san bruno ... | | a passenger on an atlanta-bound air canada flight told a cnn reporter on the plane friday that a stranger sitting behind him tried to choke him . oliver minatel , 22 , said he was sleeping on air canada flight 8623 from toronto when he felt something around his neck ... " i forced it ( the cord ) down and then other people came to help , and then i got out and he started saying that we were here to kill him , " minatel said . the man was not restrained for the rest of the trip , but the flight crew told him to stay seated with his seat belt on . the man kept trying to get out of his seat but other passengers yelled at him whenever he tried to stand up . | oliver minatel , 22 , said he was sleeping on air canada flight 8623 from toronto when he felt something around his neck . the man kept trying to get out of his seat but other passengers yelled at him whenever he tried to stand up . the suspect was escorted off the plane . | Table 3: Two examples of model-extracted factually inconsistent summaries. The annotations were sourced from Zhang et al. (2022a). These examples have id 41c6edecee127c396d17e2e9115a4a89252cc52b and id 32655a04c9e4733a1ae4b210a045bc6e0d443d85 respectively. The first example uses Textrank (Mihalcea and Tarau, 2004) to extract the summary. It is factually incorrect since 'we' refers to pg & e and not the commission. The second example uses MatchSumm (Zhong et al., 2020) to extract the summary. It is factually inconsistent since the man refers to the stranger and not Oliver Minatel. 
## E Prompt Templates

We use the following 3 prompt templates for all models, where [input] is replaced with the document:

- "[input]"
- "The summary of "[input]" is "
- "Summarize: [input]"

## F Accuracies Across All Scoring Functions

We show the performance of all the models across different scoring functions for XSum in table 4, table 5, table 6, and table 7 and for CNN/DM in table 8, table 9, table 10, and table 11.

## G Accuracies From MFMA-Generated Summaries

We show the performance of different models on MFMA-generated summaries broken down by the model used to generate the summary for XSum using different scoring functions in table 12, table 13, table 14, and table 15.

## H Accuracies From FactCC-Generated Summaries

We show the performance of different models on FactCC-generated summaries broken down by the method used to generate the summary using different scoring functions for XSum in table 16, table 17, table 18, and table 19 and for CNN/DM in table 20, table 21, table 22, and table 23.

## I Accuracies From Factual Model-Generated Summaries

We show the performance of different models on factually consistent model-generated summaries broken down by the model used to generate the summary using different scoring functions on XSum in table 24, table 25, table 26, and table 27 and on CNN/DM in table 28, table 29, table 30, and table 31.

## J Accuracies From FIB Summaries

We show the performance of different models on FIB broken down by the model used to generate the summary using different scoring functions for XSum in table 32, table 33, table 34, and table 35 and for CNN/DM in table 36, table 37, table 38, and table 39.

## K Accuracies From Models Used To Generate Summaries

We show the performance of different models using the same models to generate the alternative summaries for XSum using different scoring functions in table 40.
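The accuracies in the tables below follow a binary-choice protocol: for each document, a model is counted as correct when its scoring function assigns a higher score to the factually consistent summary than to the alternative summary. As a minimal illustration (not the exact implementation used for the paper), the sketch below computes the LL and avg. LL scores for a decoder-only model with the HuggingFace Transformers API, using the "Summarize: [input]" template from Appendix E; the `gpt2-xl` checkpoint and the placeholder strings are illustrative assumptions. The PMI-style variants, roughly speaking, additionally subtract a document-independent log-probability of the summary.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "gpt2-xl"  # illustrative stand-in for the decoder-only models in the tables
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()


def summary_score(document: str, summary: str, average: bool = False) -> float:
    """Log-likelihood of `summary` given the prompted document.

    Returns the summed log-likelihood (LL), or the per-token
    average (avg. LL) when `average=True`.
    """
    prompt = f"Summarize: {document}"  # one of the templates in Appendix E
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    summary_ids = tokenizer(" " + summary, return_tensors="pt").input_ids  # leading space for BPE
    input_ids = torch.cat([prompt_ids, summary_ids], dim=1)

    with torch.no_grad():
        logits = model(input_ids).logits  # (1, seq_len, vocab)

    # Log-probability of each token given its prefix.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = input_ids[0, 1:]
    token_scores = log_probs[torch.arange(targets.numel()), targets]

    # Keep only the positions that predict summary tokens.
    summary_scores = token_scores[prompt_ids.shape[1] - 1:]
    total = summary_scores.sum().item()
    return total / summary_scores.numel() if average else total


# Binary choice: the model is counted as correct when it prefers the
# factually consistent summary over the factually inconsistent alternative.
document = "..."      # placeholder article text
consistent = "..."    # placeholder factually consistent summary
inconsistent = "..."  # placeholder factually inconsistent alternative
correct = summary_score(document, consistent) > summary_score(document, inconsistent)
```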
Model FIR FCMG FIB FactCC MFMA T0-3B 53.2 41.6 57.6 87.6 85.1 T0 29.6 34.9 46.6 89.8 83.9 FLAN-T5-xl 58.1 47.8 59.9 87.3 85.6 FLAN-T5-xxl 59.0 51.3 63.7 87.1 87.3 T5-LM-Adapt-xl 81.3 49.5 68.7 78.7 87.5 T5-LM-Adapt-xxl 81.7 50.7 69.8 84.2 88.7 GPT-Neo-1.3B 88.0 45.7 72.1 68.9 87.1 GPT2-XL 84.9 46.3 69.2 71.5 83.2 GPT-Neo-2.7B 87.8 47.7 72.3 72.2 85.1 GPTJ-6B 88.0 51.2 75.4 74.0 87.3 GPT-Neox-20B 82.9 49.6 73.4 74.1 86.4 BLOOM 84.9 46.2 72.4 75.1 88.1 BLOOM-7B1 85.7 43.8 71.8 71.1 86.5 BLOOM-3B 89.3 43.2 72.6 70.4 86.6 BLOOM-1B7 88.9 42.9 70.5 67.8 87.1 BLOOM-1B1 87.5 41.3 68.8 64.0 85.3 OPT-175B 84.4 48.3 75.1 71.2 87.0 OPT-66B 83.5 47.8 73.9 70.8 87.2 OPT-30B 84.4 48.3 73.8 72.0 87.2 OPT-13B 85.1 49.0 72.9 71.6 86.5 OPT-6.7B 83.3 47.4 71.3 70.5 86.3 OPT-2.7B 84.4 48.1 71.3 70.5 85.8 OPT-1.3B 85.7 46.3 69.7 70.5 86.0 T0-3B 20.0 15.5 29.1 97.7 68.2 T0 14.9 21.4 33.0 96.9 73.2 FLAN-T5-xl 23.6 16.2 29.4 97.7 68.9 FLAN-T5-xxl 21.6 17.6 32.1 98.1 72.0 T5-LM-Adapt-xl 34.1 17.7 23.9 93.1 62.3 T5-LM-Adapt-xxl 28.1 19.2 26.4 95.7 67.0 GPT-Neo-1.3B 37.4 18.1 24.7 94.7 59.1 GPT2-XL 33.6 19.3 26.0 95.3 60.7 GPT-Neo-2.7B 35.9 19.5 26.9 95.8 62.0 GPTJ-6B 28.3 21.1 28.4 96.8 68.9 GPT-Neox-20B 23.4 20.8 30.5 97.0 69.8 BLOOM 26.5 24.3 32.1 97.8 73.1 BLOOM-7B1 39.9 21.5 28.8 96.3 65.6 BLOOM-3B 44.3 20.5 28.2 95.7 63.9 BLOOM-1B7 49.0 20.8 27.1 94.7 61.2 BLOOM-1B1 51.4 20.4 27.4 93.0 59.7 OPT-175B 16.9 23.1 34.4 97.9 77.1 OPT-66B 18.7 22.8 32.3 97.5 75.1 OPT-30B 20.3 21.6 32.6 97.4 72.4 OPT-13B 22.5 21.4 31.0 96.6 73.2 OPT-6.7B 22.0 21.3 28.7 96.7 70.2 OPT-2.7B 29.0 20.1 28.4 96.7 68.7 OPT-1.3B 30.7 19.9 26.3 95.9 64.7 Model FIR FCMG FIB FactCC MFMA Model FIR FCMG FIB FactCC MFMA T0-3B 18.3 46.0 49.1 83.2 83.7 T0 16.7 36.8 45.6 89.0 83.7 FLAN-T5-xl 16.7 52.0 49.0 82.0 82.9 FLAN-T5-xxl 16.7 51.2 53.6 81.3 85.6 T5-LM-Adapt-xl 39.0 52.6 54.7 69.9 83.8 T5-LM-Adapt-xxl 35.4 51.5 55.3 76.8 85.1 GPT-Neo-1.3B 58.4 46.5 57.2 60.5 83.9 GPT2-XL 56.1 51.6 54.9 64.5 80.2 GPT-Neo-2.7B 57.5 49.4 55.2 66.3 82.3 GPTJ-6B 55.7 54.9 57.8 66.7 84.3 GPT-Neox-20B 53.0 49.5 58.1 69.2 83.6 BLOOM 53.0 48.9 59.3 72.9 84.7 BLOOM-7B1 59.5 48.5 57.5 67.5 85.2 BLOOM-3B 59.5 49.3 59.9 65.7 85.3 BLOOM-1B7 63.3 46.2 56.6 63.9 83.4 BLOOM-1B1 60.8 44.7 54.9 58.6 82.3 OPT-175B 50.3 50.5 60.0 65.2 86.1 OPT-66B 53.5 50.9 57.5 65.1 84.5 OPT-30B 58.1 49.8 57.6 66.6 85.4 OPT-13B 54.6 51.3 56.6 65.3 83.7 OPT-6.7B 56.3 50.5 55.5 65.3 84.3 OPT-2.7B 56.6 52.1 55.4 66.2 84.2 OPT-1.3B 57.2 48.9 54.0 64.7 82.6 Model FIR FCMG FIB FactCC MFMA T0-3B 45.2 15.9 34.4 98.5 73.5 T0 34.7 23.0 38.9 97.9 78.0 FLAN-T5-xl 52.8 18.5 35.6 98.3 74.9 FLAN-T5-xxl 49.4 18.5 39.2 98.3 78.1 T5-LM-Adapt-xl 82.6 23.8 44.6 98.1 71.4 T5-LM-Adapt-xxl 72.2 22.0 43.4 98.3 75.1 GPT-Neo-1.3B 83.3 22.2 46.9 97.0 66.1 GPT2-XL 78.6 22.1 45.6 97.3 67.9 GPT-Neo-2.7B 81.3 23.1 46.8 97.1 67.6 GPTJ-6B 72.2 22.9 47.2 98.0 74.6 GPT-Neox-20B 68.2 26.9 47.7 97.9 75.9 BLOOM 70.6 24.5 48.6 98.5 78.8 BLOOM-7B1 81.7 24.4 48.4 97.6 71.9 BLOOM-3B 85.1 24.4 48.6 97.3 68.5 BLOOM-1B7 87.3 25.4 48.5 96.2 65.1 BLOOM-1B1 90.4 24.7 49.3 96.2 64.2 OPT-175B 53.2 26.4 48.1 98.3 81.8 OPT-66B 61.0 25.5 47.4 98.3 80.2 OPT-30B 60.6 25.6 47.0 98.1 78.3 OPT-13B 66.8 24.6 46.3 98.1 78.8 OPT-6.7B 66.1 25.9 45.6 97.6 75.7 OPT-2.7B 72.6 24.6 45.7 98.1 73.2 OPT-1.3B 77.3 23.1 45.2 97.4 71.8 Table 7: The performance of the models on XSum with various alternative-choices using LL as the scoring function. 
Model FIR FCMG FIB FactCC MFMA T0-3B 65.6 7.0 17.7 82.4 98.0 T0 50.0 4.4 11.4 79.9 92.0 FLAN-T5-xl 65.6 7.4 16.0 79.7 100.0 FLAN-T5-xxl 59.4 6.3 13.8 76.5 100.0 T5-LM-Adapt-xl 62.5 4.9 12.7 79.6 99.0 T5-LM-Adapt-xxl 59.4 6.0 12.0 76.8 99.0 GPT-Neo-1.3B 78.1 6.4 8.7 77.7 100.0 GPT2-XL 78.1 8.2 9.8 79.5 99.0 GPT-Neo-2.7B 78.1 7.9 10.1 78.2 99.0 GPTJ-6B 78.1 7.5 8.1 82.0 99.0 GPT-Neox-20B 71.9 8.6 10.5 76.2 97.0 BLOOM 75.0 10.8 9.2 79.3 99.0 BLOOM-7B1 84.4 9.8 10.3 81.8 99.0 BLOOM-3B 78.1 8.0 7.9 78.2 100.0 BLOOM-1B7 84.4 6.8 9.2 76.3 99.0 BLOOM-1B1 84.4 7.5 11.2 75.8 100.0 OPT-175B 71.9 11.9 10.7 75.2 98.0 OPT-66B 71.9 8.8 9.2 75.9 99.0 OPT-30B 71.9 11.1 9.0 77.3 100.0 OPT-13B 75.0 8.2 9.6 79.5 99.0 OPT-6.7B 81.2 10.2 9.9 79.8 99.0 OPT-2.7B 75.0 7.8 9.6 74.1 98.0 OPT-1.3B 78.1 6.8 8.1 75.3 100.0 | Model | FIR | FCMG | FIB | FactCC | MFMA | |-----------------|-------|--------|-------|----------|--------| | T0-3B | 40.6 | 3.3 | 11.6 | 90.3 | 100.0 | | T0 | 37.5 | 2.2 | 8.3 | 90.8 | 100.0 | | FLAN-T5-xl | 40.6 | 1.7 | 9.0 | 91.4 | 100.0 | | FLAN-T5-xxl | 40.6 | 1.1 | 6.1 | 88.9 | 100.0 | | T5-LM-Adapt-xl | 40.6 | 1.6 | 6.6 | 88.2 | 99.0 | | T5-LM-Adapt-xxl | 31.2 | 1.2 | 5.3 | 89.8 | 100.0 | | GPT-Neo-1.3B | 46.9 | 0.7 | 1.3 | 93.6 | 99.0 | | GPT2-XL | 56.2 | 0.9 | 2.6 | 92.5 | 99.0 | | GPT-Neo-2.7B | 50.0 | 0.8 | 1.8 | 92.9 | 97.0 | | GPTJ-6B | 46.9 | 0.5 | 2.0 | 95.2 | 99.0 | | GPT-Neox-20B | 40.6 | 0.2 | 1.8 | 94.2 | 98.0 | | BLOOM | 40.6 | 0.3 | 1.8 | 93.8 | 99.0 | | BLOOM-7B1 | 50.0 | 1.0 | 2.8 | 95.9 | 100.0 | | BLOOM-3B | 53.1 | 1.2 | 2.2 | 93.5 | 100.0 | | BLOOM-1B7 | 53.1 | 0.9 | 2.2 | 92.9 | 99.0 | | BLOOM-1B1 | 62.5 | 1.3 | 2.6 | 93.6 | 98.0 | | OPT-175B | 40.6 | 0.6 | 2.2 | 91.4 | 99.0 | | OPT-66B | 43.8 | 0.9 | 2.2 | 92.8 | 99.0 | | OPT-30B | 43.8 | 0.8 | 2.0 | 94.1 | 99.0 | | OPT-13B | 43.8 | 0.9 | 1.8 | 95.5 | 99.0 | | OPT-6.7B | 56.2 | 0.9 | 2.6 | 94.6 | 98.0 | | OPT-2.7B | 43.8 | 1.2 | 2.6 | 92.9 | 98.0 | | OPT-1.3B | 46.9 | 1.2 | 2.0 | 92.5 | 98.0 | Model FIR FCMG FIB FactCC MFMA T0-3B 46.9 1.6 8.5 76.6 100.0 T0 28.1 1.2 6.1 75.9 96.0 FLAN-T5-xl 40.6 1.6 7.2 74.6 100.0 FLAN-T5-xxl 34.4 1.7 5.9 69.9 100.0 T5-LM-Adapt-xl 34.4 1.1 6.1 69.4 98.0 T5-LM-Adapt-xxl 34.4 0.9 5.3 68.4 99.0 GPT-Neo-1.3B 50.0 0.5 3.7 69.8 99.0 GPT2-XL 43.8 0.4 3.5 69.8 99.0 GPT-Neo-2.7B 46.9 0.4 2.6 66.9 99.0 GPTJ-6B 59.4 0.5 2.4 73.6 99.0 GPT-Neox-20B 56.2 0.4 2.4 69.0 99.0 BLOOM 40.6 0.5 2.4 69.7 99.0 BLOOM-7B1 56.2 0.5 2.9 73.9 100.0 BLOOM-3B 56.2 0.5 2.9 71.1 100.0 BLOOM-1B7 53.1 0.5 3.3 64.8 98.0 BLOOM-1B1 59.4 0.5 3.5 68.4 99.0 OPT-175B 53.1 0.7 2.8 70.4 98.0 OPT-66B 59.4 0.5 2.4 68.1 99.0 OPT-30B 53.1 0.6 3.1 71.9 99.0 OPT-13B 43.8 0.6 3.1 71.3 98.0 OPT-6.7B 53.1 0.5 2.4 72.6 99.0 OPT-2.7B 56.2 0.5 3.1 66.0 98.0 OPT-1.3B 53.1 0.5 3.7 69.3 99.0 Model FIR FCMG FIB FactCC MFMA T0-3B 71.9 45.1 52.7 98.7 97.0 T0 62.5 37.4 42.7 97.4 97.0 FLAN-T5-xl 75.0 42.8 48.6 98.4 98.0 FLAN-T5-xxl 68.8 26.9 35.5 97.0 99.0 T5-LM-Adapt-xl 90.6 39.7 45.1 97.0 89.0 T5-LM-Adapt-xxl 68.8 31.4 32.6 98.7 94.0 GPT-Neo-1.3B 78.1 24.3 20.1 97.4 99.0 GPT2-XL 81.2 26.9 26.5 96.6 97.0 GPT-Neo-2.7B 75.0 24.1 19.9 97.0 98.0 GPTJ-6B 78.1 21.0 18.6 97.9 99.0 GPT-Neox-20B 75.0 22.5 20.4 98.0 99.0 BLOOM 59.4 16.7 16.6 98.3 100.0 BLOOM-7B1 78.1 22.1 21.0 97.6 100.0 BLOOM-3B 78.1 25.2 20.6 98.0 98.0 BLOOM-1B7 81.2 23.4 20.1 97.0 98.0 BLOOM-1B1 84.4 26.2 23.2 97.4 98.0 OPT-175B 65.6 25.9 20.8 97.3 99.0 OPT-66B 68.8 26.7 23.6 97.9 99.0 OPT-30B 75.0 25.3 21.0 97.9 100.0 OPT-13B 68.8 28.1 24.3 97.9 100.0 OPT-6.7B 78.1 29.4 26.7 98.7 100.0 
OPT-2.7B 71.9 29.5 25.8 98.3 100.0 OPT-1.3B 75.0 27.8 23.8 98.3 100.0 | Model | BART-base | T5-base | |-----------------|-------------|-----------| | T0-3B | 93.4 | 74.9 | | T0 | 94.2 | 71.2 | | FLAN-T5-xl | 94.8 | 74.3 | | FLAN-T5-xxl | 95.0 | 77.9 | | T5-LM-Adapt-xl | 94.2 | 79.3 | | T5-LM-Adapt-xxl | 95.0 | 81.0 | | GPT-Neo-1.3B | 93.6 | 79.1 | | GPT2-XL | 91.7 | 72.9 | | GPT-Neo-2.7B | 94.4 | 73.7 | | GPTJ-6B | 94.2 | 78.8 | | GPT-Neox-20B | 95.2 | 75.7 | | BLOOM | 95.0 | 79.6 | | BLOOM-7B1 | 94.6 | 76.5 | | BLOOM-3B | 94.4 | 77.1 | | BLOOM-1B7 | 95.0 | 77.4 | | BLOOM-1B1 | 93.2 | 75.7 | | OPT-175B | 94.6 | 77.7 | | OPT-66B | 95.2 | 77.4 | | OPT-30B | 94.8 | 77.9 | | OPT-13B | 95.0 | 76.0 | | OPT-6.7B | 95.0 | 75.7 | | OPT-2.7B | 94.0 | 75.7 | | OPT-1.3B | 93.8 | 76.5 | | Model | BART-base | T5-base | |-----------------|-------------|-----------| | T0-3B | 79.7 | 54.2 | | T0 | 83.0 | 61.2 | | FLAN-T5-xl | 81.0 | 54.2 | | FLAN-T5-xxl | 82.8 | 58.7 | | T5-LM-Adapt-xl | 71.2 | 51.4 | | T5-LM-Adapt-xxl | 74.9 | 57.3 | | GPT-Neo-1.3B | 65.6 | 51.1 | | GPT2-XL | 66.5 | 53.6 | | GPT-Neo-2.7B | 69.6 | 52.8 | | GPTJ-6B | 76.8 | 59.2 | | GPT-Neox-20B | 76.0 | 62.3 | | BLOOM | 80.1 | 64.5 | | BLOOM-7B1 | 72.3 | 57.5 | | BLOOM-3B | 71.4 | 54.7 | | BLOOM-1B7 | 69.4 | 51.1 | | BLOOM-1B1 | 67.9 | 49.7 | | OPT-175B | 83.0 | 69.9 | | OPT-66B | 81.8 | 67.0 | | OPT-30B | 78.7 | 64.8 | | OPT-13B | 79.5 | 65.6 | | OPT-6.7B | 76.0 | 63.1 | | OPT-2.7B | 74.1 | 62.0 | | OPT-1.3B | 70.8 | 57.3 | | Model | BART-base | T5-base | |-----------------|-------------|-----------| | T0-3B | 93.6 | 71.5 | | T0 | 94.2 | 70.9 | | FLAN-T5-xl | 93.2 | 70.4 | | FLAN-T5-xxl | 94.4 | 74.9 | | T5-LM-Adapt-xl | 91.9 | 74.0 | | T5-LM-Adapt-xxl | 93.6 | 74.6 | | GPT-Neo-1.3B | 92.3 | 73.7 | | GPT2-XL | 91.1 | 66.8 | | GPT-Neo-2.7B | 92.3 | 70.1 | | GPTJ-6B | 93.2 | 73.5 | | GPT-Neox-20B | 93.4 | 71.5 | | BLOOM | 93.2 | 74.3 | | BLOOM-7B1 | 93.8 | 74.6 | | BLOOM-3B | 94.0 | 74.6 | | BLOOM-1B7 | 93.4 | 71.2 | | BLOOM-1B1 | 91.7 | 70.7 | | OPT-175B | 94.0 | 76.5 | | OPT-66B | 93.4 | 73.7 | | OPT-30B | 94.4 | 74.3 | | OPT-13B | 94.2 | 70.9 | | OPT-6.7B | 93.0 | 73.7 | | OPT-2.7B | 93.6 | 72.6 | | OPT-1.3B | 92.1 | 70.9 | | Model | BART-base | T5-base | |-----------------|-------------|-----------| | T0-3B | 85.9 | 58.4 | | T0 | 88.2 | 65.6 | | FLAN-T5-xl | 87.4 | 59.5 | | FLAN-T5-xxl | 89.6 | 64.0 | | T5-LM-Adapt-xl | 80.3 | 60.6 | | T5-LM-Adapt-xxl | 84.7 | 63.4 | | GPT-Neo-1.3B | 73.3 | 57.3 | | GPT2-XL | 75.4 | 58.7 | | GPT-Neo-2.7B | 75.8 | 57.5 | | GPTJ-6B | 83.2 | 64.0 | | GPT-Neox-20B | 83.2 | 67.0 | | BLOOM | 86.3 | 69.6 | | BLOOM-7B1 | 78.3 | 64.0 | | BLOOM-3B | 76.4 | 58.9 | | BLOOM-1B7 | 72.0 | 56.7 | | BLOOM-1B1 | 72.3 | 54.2 | | OPT-175B | 88.6 | 73.5 | | OPT-66B | 86.1 | 72.9 | | OPT-30B | 86.1 | 68.7 | | OPT-13B | 86.1 | 69.8 | | OPT-6.7B | 84.3 | 65.1 | | OPT-2.7B | 81.2 | 63.4 | | OPT-1.3B | 78.5 | 63.7 | | Model | Date Swap | Entity Swap | Negation | Number Swap | Pronoun | |-----------------|-------------|---------------|------------|---------------|-----------| | T0-3B | 76.4 | 86.6 | 94.5 | 76.5 | 78.7 | | T0 | 85.5 | 86.9 | 93.9 | 92.6 | 84.8 | | FLAN-T5-xl | 72.7 | 86.0 | 96.1 | 82.4 | 72.6 | | FLAN-T5-xxl | 76.4 | 85.5 | 97.2 | 85.3 | 67.1 | | T5-LM-Adapt-xl | 67.3 | 75.9 | 89.9 | 60.3 | 65.2 | | T5-LM-Adapt-xxl | 69.1 | 81.4 | 94.5 | 70.6 | 72.0 | | GPT-Neo-1.3B | 52.7 | 66.3 | 75.5 | 42.6 | 72.0 | | GPT2-XL | 60.0 | 69.2 | 82.1 | 41.2 | 63.4 | | GPT-Neo-2.7B | 65.5 | 65.7 | 81.2 | 54.4 | 70.7 | | GPTJ-6B | 
60.0 | 70.6 | 85.1 | 54.4 | 63.4 | | GPT-Neox-20B | 61.8 | 68.9 | 86.2 | 55.9 | 62.8 | | BLOOM | 60.0 | 72.1 | 83.4 | 67.6 | 66.5 | | BLOOM-7B1 | 60.0 | 71.5 | 76.8 | 52.9 | 65.9 | | BLOOM-3B | 50.9 | 69.5 | 75.7 | 57.4 | 69.5 | | BLOOM-1B7 | 54.5 | 65.1 | 70.5 | 60.3 | 73.8 | | BLOOM-1B1 | 58.2 | 63.1 | 65.9 | 54.4 | 66.5 | | OPT-175B | 56.4 | 64.8 | 83.2 | 61.8 | 59.8 | | OPT-66B | 58.2 | 63.7 | 84.0 | 60.3 | 57.3 | | OPT-30B | 61.8 | 65.1 | 84.5 | 63.2 | 59.1 | | OPT-13B | 65.5 | 68.6 | 81.6 | 63.2 | 55.5 | | OPT-6.7B | 63.6 | 66.9 | 80.1 | 60.3 | 57.9 | | OPT-2.7B | 60.0 | 65.1 | 82.7 | 51.5 | 59.1 | | OPT-1.3B | 63.6 | 63.1 | 83.2 | 57.4 | 58.5 | | Model | Date Swap | Entity Swap | Negation | Number Swap | Pronoun | |-----------------|-------------|---------------|------------|---------------|-----------| | T0-3B | 96.4 | 96.5 | 98.7 | 94.1 | 99.4 | | T0 | 100.0 | 95.3 | 96.7 | 97.1 | 99.4 | | FLAN-T5-xl | 100.0 | 96.2 | 98.7 | 92.6 | 99.4 | | FLAN-T5-xxl | 98.2 | 95.9 | 99.1 | 98.5 | 99.4 | | T5-LM-Adapt-xl | 92.7 | 91.0 | 92.8 | 89.7 | 100.0 | | T5-LM-Adapt-xxl | 94.5 | 93.3 | 96.9 | 89.7 | 100.0 | | GPT-Neo-1.3B | 96.4 | 89.5 | 97.6 | 88.2 | 99.4 | | GPT2-XL | 96.4 | 91.3 | 97.8 | 86.8 | 100.0 | | GPT-Neo-2.7B | 96.4 | 92.4 | 98.2 | 86.8 | 100.0 | | GPTJ-6B | 98.2 | 93.9 | 98.9 | 88.2 | 100.0 | | GPT-Neox-20B | 98.2 | 93.6 | 99.3 | 89.7 | 100.0 | | BLOOM | 98.2 | 95.3 | 99.6 | 92.6 | 100.0 | | BLOOM-7B1 | 98.2 | 92.7 | 99.1 | 85.3 | 100.0 | | BLOOM-3B | 92.7 | 91.6 | 99.1 | 85.3 | 100.0 | | BLOOM-1B7 | 92.7 | 89.8 | 98.5 | 83.8 | 99.4 | | BLOOM-1B1 | 90.9 | 86.9 | 96.7 | 85.3 | 99.4 | | OPT-175B | 100.0 | 95.6 | 99.3 | 92.6 | 100.0 | | OPT-66B | 98.2 | 94.8 | 99.6 | 89.7 | 100.0 | | OPT-30B | 98.2 | 95.1 | 98.9 | 91.2 | 100.0 | | OPT-13B | 98.2 | 94.8 | 97.8 | 88.2 | 100.0 | | OPT-6.7B | 98.2 | 95.1 | 98.5 | 83.8 | 100.0 | | OPT-2.7B | 98.2 | 93.9 | 98.9 | 86.8 | 100.0 | | OPT-1.3B | 96.4 | 91.9 | 98.5 | 89.7 | 99.4 | | Model | Date Swap | Entity Swap | Negation | Number Swap | Pronoun | |-----------------|-------------|---------------|------------|---------------|-----------| | T0-3B | 83.6 | 83.7 | 84.2 | 80.9 | 80.5 | | T0 | 87.3 | 86.0 | 92.3 | 91.2 | 86.0 | | FLAN-T5-xl | 80.0 | 78.8 | 87.1 | 83.8 | 74.4 | | FLAN-T5-xxl | 78.2 | 79.9 | 86.2 | 86.8 | 69.5 | | T5-LM-Adapt-xl | 70.9 | 70.9 | 69.8 | 64.7 | 70.1 | | T5-LM-Adapt-xxl | 74.5 | 75.0 | 79.9 | 72.1 | 75.0 | | GPT-Neo-1.3B | 63.6 | 63.4 | 57.1 | 38.2 | 72.0 | | GPT2-XL | 65.5 | 64.0 | 68.5 | 42.6 | 63.4 | | GPT-Neo-2.7B | 65.5 | 64.8 | 67.8 | 54.4 | 70.7 | | GPTJ-6B | 69.1 | 66.9 | 69.4 | 52.9 | 63.4 | | GPT-Neox-20B | 65.5 | 66.0 | 76.4 | 55.9 | 62.8 | | BLOOM | 65.5 | 69.5 | 79.9 | 64.7 | 66.5 | | BLOOM-7B1 | 63.6 | 67.4 | 71.3 | 50.0 | 65.9 | | BLOOM-3B | 58.2 | 65.4 | 67.4 | 52.9 | 69.5 | | BLOOM-1B7 | 54.5 | 63.7 | 63.2 | 52.9 | 73.8 | | BLOOM-1B1 | 58.2 | 59.9 | 56.2 | 50.0 | 66.5 | | OPT-175B | 54.5 | 61.9 | 71.1 | 64.7 | 59.8 | | OPT-66B | 67.3 | 58.7 | 73.3 | 60.3 | 57.3 | | OPT-30B | 61.8 | 62.5 | 73.3 | 64.7 | 59.1 | | OPT-13B | 67.3 | 64.5 | 69.4 | 63.2 | 55.5 | | OPT-6.7B | 67.3 | 62.8 | 70.7 | 57.4 | 57.9 | | OPT-2.7B | 63.6 | 65.4 | 72.2 | 50.0 | 59.1 | | OPT-1.3B | 67.3 | 60.5 | 71.1 | 55.9 | 58.5 | | Model | Date Swap | Entity Swap | Negation | Number Swap | Pronoun | |-----------------|-------------|---------------|------------|---------------|-----------| | T0-3B | 98.2 | 96.8 | 100.0 | 95.6 | 99.4 | | T0 | 98.2 | 95.6 | 99.1 | 98.5 | 98.8 | | FLAN-T5-xl | 100.0 | 96.2 | 100.0 | 94.1 | 99.4 | | 
FLAN-T5-xxl | 98.2 | 95.6 | 100.0 | 98.5 | 99.4 | | T5-LM-Adapt-xl | 98.2 | 95.9 | 100.0 | 91.2 | 100.0 | | T5-LM-Adapt-xxl | 98.2 | 96.8 | 100.0 | 89.7 | 100.0 | | GPT-Neo-1.3B | 96.4 | 93.9 | 99.8 | 88.2 | 99.4 | | GPT2-XL | 96.4 | 95.1 | 99.6 | 86.8 | 100.0 | | GPT-Neo-2.7B | 96.4 | 94.8 | 99.1 | 88.2 | 100.0 | | GPTJ-6B | 98.2 | 96.2 | 100.0 | 88.2 | 100.0 | | GPT-Neox-20B | 98.2 | 95.9 | 99.8 | 89.7 | 100.0 | | BLOOM | 100.0 | 97.1 | 99.8 | 91.2 | 100.0 | | BLOOM-7B1 | 98.2 | 95.3 | 100.0 | 86.8 | 100.0 | | BLOOM-3B | 92.7 | 94.8 | 100.0 | 88.2 | 100.0 | | BLOOM-1B7 | 90.9 | 93.0 | 99.3 | 88.2 | 99.4 | | BLOOM-1B1 | 94.5 | 92.2 | 99.6 | 86.8 | 99.4 | | OPT-175B | 100.0 | 96.2 | 99.8 | 92.6 | 100.0 | | OPT-66B | 98.2 | 97.1 | 100.0 | 89.7 | 100.0 | | OPT-30B | 98.2 | 96.5 | 99.6 | 91.2 | 100.0 | | OPT-13B | 100.0 | 96.8 | 99.8 | 86.8 | 100.0 | | OPT-6.7B | 100.0 | 96.2 | 99.6 | 83.8 | 100.0 | | OPT-2.7B | 100.0 | 96.5 | 100.0 | 86.8 | 100.0 | | OPT-1.3B | 98.2 | 94.8 | 100.0 | 88.2 | 99.4 | | Model | Date Swap | Entity Swap | Negation | Number Swap | Pronoun | |-----------------|-------------|---------------|------------|---------------|-----------| | T0-3B | 81.8 | 78.3 | 91.6 | 75.0 | 80.0 | | T0 | 81.8 | 73.9 | 94.0 | 66.7 | 73.3 | | flan-t5-xl | 78.2 | 75.4 | 92.8 | 77.8 | 66.7 | | flan-t5-xxl | 76.4 | 71.0 | 90.4 | 69.4 | 66.7 | | t5-lm-adapt-xl | 80.0 | 81.2 | 84.3 | 75.0 | 71.1 | | t5-lm-adapt-xxl | 80.0 | 71.0 | 86.7 | 75.0 | 66.7 | | GPT-Neo-1.3B | 72.7 | 75.4 | 85.5 | 75.0 | 75.6 | | GPT2-XL | 78.2 | 79.7 | 86.7 | 75.0 | 71.1 | | GPT-Neo-2.7B | 74.5 | 73.9 | 85.5 | 80.6 | 75.6 | | GPTJ-6B | 80.0 | 76.8 | 91.6 | 83.3 | 75.6 | | GPT-Neox-20B | 67.3 | 72.5 | 88.0 | 77.8 | 71.1 | | BLOOM | 80.0 | 75.4 | 85.5 | 77.8 | 75.6 | | BLOOM-7B1 | 81.8 | 78.3 | 84.3 | 80.6 | 84.4 | | BLOOM-3B | 80.0 | 79.7 | 75.9 | 80.6 | 75.6 | | BLOOM-1B7 | 78.2 | 73.9 | 77.1 | 77.8 | 75.6 | | BLOOM-1B1 | 80.0 | 71.0 | 78.3 | 77.8 | 73.3 | | OPT-175B | 70.9 | 72.5 | 84.3 | 75.0 | 68.9 | | OPT-66B | 69.1 | 72.5 | 83.1 | 75.0 | 77.8 | | OPT-30B | 74.5 | 68.1 | 88.0 | 77.8 | 77.8 | | OPT-13B | 80.0 | 78.3 | 84.3 | 72.2 | 77.8 | | OPT-6.7B | 76.4 | 84.1 | 88.0 | 66.7 | 71.1 | | OPT-2.7B | 65.5 | 76.8 | 81.9 | 69.4 | 68.9 | | OPT-1.3B | 72.7 | 75.4 | 79.5 | 72.2 | 73.3 | | Model | Date Swap | Entity Swap | Negation | Number Swap | Pronoun | |-----------------|-------------|---------------|------------|---------------|-----------| | T0-3B | 92.7 | 89.9 | 91.6 | 86.1 | 88.9 | | T0 | 92.7 | 92.8 | 94.0 | 80.6 | 86.7 | | flan-t5-xl | 94.5 | 92.8 | 91.6 | 86.1 | 88.9 | | flan-t5-xxl | 92.7 | 88.4 | 94.0 | 80.6 | 82.2 | | t5-lm-adapt-xl | 89.1 | 88.4 | 89.2 | 86.1 | 86.7 | | t5-lm-adapt-xxl | 90.9 | 92.8 | 88.0 | 88.9 | 86.7 | | GPT-Neo-1.3B | 87.3 | 97.1 | 97.6 | 86.1 | 93.3 | | GPT2-XL | 87.3 | 94.2 | 95.2 | 88.9 | 93.3 | | GPT-Neo-2.7B | 89.1 | 95.7 | 94.0 | 91.7 | 91.1 | | GPTJ-6B | 92.7 | 95.7 | 97.6 | 91.7 | 95.6 | | GPT-Neox-20B | 90.9 | 95.7 | 96.4 | 91.7 | 93.3 | | BLOOM | 92.7 | 94.2 | 95.2 | 88.9 | 95.6 | | BLOOM-7B1 | 92.7 | 97.1 | 98.8 | 91.7 | 95.6 | | BLOOM-3B | 94.5 | 95.7 | 95.2 | 83.3 | 93.3 | | BLOOM-1B7 | 92.7 | 95.7 | 94.0 | 86.1 | 91.1 | | BLOOM-1B1 | 90.9 | 97.1 | 95.2 | 86.1 | 93.3 | | OPT-175B | 89.1 | 92.8 | 94.0 | 91.7 | 86.7 | | OPT-66B | 87.3 | 94.2 | 95.2 | 91.7 | 93.3 | | OPT-30B | 89.1 | 94.2 | 97.6 | 94.4 | 93.3 | | OPT-13B | 94.5 | 95.7 | 96.4 | 94.4 | 95.6 | | OPT-6.7B | 92.7 | 97.1 | 95.2 | 91.7 | 93.3 | | OPT-2.7B | 89.1 | 95.7 | 95.2 | 88.9 | 91.1 | | OPT-1.3B | 89.1 | 94.2 | 
95.2 | 86.1 | 93.3 | | Model | Date Swap | Entity Swap | Negation | Number Swap | Pronoun | |-----------------|-------------|---------------|------------|---------------|-----------| | T0-3B | 74.5 | 73.9 | 83.1 | 72.2 | 75.6 | | T0 | 78.2 | 72.5 | 88.0 | 63.9 | 66.7 | | flan-t5-xl | 76.4 | 73.9 | 79.5 | 75.0 | 64.4 | | flan-t5-xxl | 74.5 | 65.2 | 80.7 | 66.7 | 55.6 | | t5-lm-adapt-xl | 69.1 | 75.4 | 67.5 | 66.7 | 64.4 | | t5-lm-adapt-xxl | 72.7 | 68.1 | 72.3 | 69.4 | 55.6 | | GPT-Neo-1.3B | 63.6 | 76.8 | 68.7 | 63.9 | 71.1 | | GPT2-XL | 74.5 | 75.4 | 67.5 | 66.7 | 60.0 | | GPT-Neo-2.7B | 63.6 | 71.0 | 65.1 | 63.9 | 68.9 | | GPTJ-6B | 69.1 | 72.5 | 74.7 | 86.1 | 68.9 | | GPT-Neox-20B | 61.8 | 68.1 | 74.7 | 77.8 | 62.2 | | BLOOM | 69.1 | 68.1 | 74.7 | 75.0 | 60.0 | | BLOOM-7B1 | 74.5 | 73.9 | 74.7 | 69.4 | 75.6 | | BLOOM-3B | 74.5 | 76.8 | 65.1 | 72.2 | 66.7 | | BLOOM-1B7 | 65.5 | 69.6 | 57.8 | 66.7 | 66.7 | | BLOOM-1B1 | 70.9 | 68.1 | 67.5 | 66.7 | 68.9 | | OPT-175B | 65.5 | 68.1 | 75.9 | 77.8 | 64.4 | | OPT-66B | 61.8 | 68.1 | 71.1 | 69.4 | 68.9 | | OPT-30B | 72.7 | 66.7 | 78.3 | 72.2 | 68.9 | | OPT-13B | 74.5 | 73.9 | 71.1 | 69.4 | 64.4 | | OPT-6.7B | 69.1 | 79.7 | 78.3 | 63.9 | 60.0 | | OPT-2.7B | 65.5 | 73.9 | 63.9 | 58.3 | 62.2 | | OPT-1.3B | 65.5 | 72.5 | 69.9 | 69.4 | 66.7 | | Model | Date Swap | Entity Swap | Negation | Number Swap | Pronoun | |-----------------|-------------|---------------|------------|---------------|-----------| | T0-3B | 96.4 | 100.0 | 100.0 | 94.4 | 100.0 | | T0 | 96.4 | 100.0 | 100.0 | 88.9 | 95.6 | | flan-t5-xl | 98.2 | 100.0 | 100.0 | 91.7 | 97.8 | | flan-t5-xxl | 96.4 | 98.6 | 98.8 | 88.9 | 97.8 | | t5-lm-adapt-xl | 98.2 | 98.6 | 97.6 | 88.9 | 97.8 | | t5-lm-adapt-xxl | 96.4 | 100.0 | 100.0 | 94.4 | 100.0 | | GPT-Neo-1.3B | 90.9 | 100.0 | 100.0 | 91.7 | 100.0 | | GPT2-XL | 92.7 | 97.1 | 98.8 | 94.4 | 97.8 | | GPT-Neo-2.7B | 90.9 | 98.6 | 100.0 | 91.7 | 100.0 | | GPTJ-6B | 94.5 | 98.6 | 100.0 | 94.4 | 100.0 | | GPT-Neox-20B | 94.5 | 100.0 | 100.0 | 91.7 | 100.0 | | BLOOM | 96.4 | 98.6 | 100.0 | 94.4 | 100.0 | | BLOOM-7B1 | 94.5 | 98.6 | 98.8 | 94.4 | 100.0 | | BLOOM-3B | 96.4 | 100.0 | 100.0 | 88.9 | 100.0 | | BLOOM-1B7 | 94.5 | 98.6 | 98.8 | 94.4 | 95.6 | | BLOOM-1B1 | 94.5 | 100.0 | 98.8 | 91.7 | 97.8 | | OPT-175B | 94.5 | 98.6 | 100.0 | 94.4 | 95.6 | | OPT-66B | 94.5 | 98.6 | 100.0 | 94.4 | 100.0 | | OPT-30B | 94.5 | 98.6 | 100.0 | 94.4 | 100.0 | | OPT-13B | 94.5 | 98.6 | 100.0 | 94.4 | 100.0 | | OPT-6.7B | 96.4 | 100.0 | 100.0 | 94.4 | 100.0 | | OPT-2.7B | 94.5 | 100.0 | 100.0 | 94.4 | 100.0 | | OPT-1.3B | 94.5 | 100.0 | 100.0 | 94.4 | 100.0 | Model BART- BART- BLOOM- distil- distil- PEGASUS T5base large 560m BART PEGASUS large T0-3B 62.2 33.7 90.5 32.2 17.5 25.8 94.1 T0 64.9 18.6 85.7 23.3 14.3 29.0 76.5 FLAN-T5-xl 64.9 38.4 90.5 38.9 25.4 38.7 82.4 FLAN-T5-xxl 70.3 46.5 90.5 42.2 28.6 35.5 82.4 T5-LM-Adapt-xl 56.8 45.3 76.2 44.4 31.7 35.5 82.4 T5-LM-Adapt-xxl 59.5 45.3 71.4 45.6 34.9 38.7 76.5 GPT-Neo-1.3B 59.5 38.4 66.7 53.3 28.6 22.6 76.5 GPT2-XL 62.2 40.7 61.9 50.0 27.0 33.9 52.9 GPT-Neo-2.7B 56.8 41.9 57.1 52.2 28.6 33.9 76.5 GPTJ-6B 64.9 40.7 71.4 61.1 38.1 29.0 64.7 GPT-Neox-20B 73.0 36.0 61.9 58.9 33.3 32.3 64.7 BLOOM 56.8 41.9 71.4 51.1 27.0 25.8 70.6 BLOOM-7B1 56.8 34.9 52.4 50.0 30.2 27.4 70.6 BLOOM-3B 64.9 30.2 57.1 50.0 23.8 32.3 64.7 BLOOM-1B7 70.3 33.7 52.4 45.6 22.2 29.0 70.6 BLOOM-1B1 62.2 32.6 57.1 43.3 22.2 30.6 58.8 OPT-175B 59.5 41.9 66.7 52.2 34.9 25.8 76.5 OPT-66B 75.7 38.4 52.4 57.8 31.7 22.6 70.6 OPT-30B 62.2 39.5 52.4 55.6 
38.1 27.4 70.6 OPT-13B 64.9 44.2 57.1 54.4 38.1 22.6 70.6 OPT-6.7B 73.0 38.4 52.4 58.9 34.9 17.7 70.6 OPT-2.7B 64.9 37.2 52.4 54.4 38.1 29.0 70.6 OPT-1.3B 62.2 40.7 61.9 53.3 28.6 27.4 58.8 | Model | BART- | BART- | BLOOM- | distil- | distil- | PEGASUS | T5- | |-----------------|---------|---------|----------|-----------|-----------|-----------|-------| | base | large | 560m | BART | PEGASUS | large | | | | T0-3B | 27.0 | 2.3 | 95.2 | 3.3 | 7.9 | 3.2 | 52.9 | | T0 | 51.4 | 9.3 | 95.2 | 6.7 | 4.8 | 8.1 | 58.8 | | FLAN-T5-xl | 27.0 | 2.3 | 95.2 | 2.2 | 7.9 | 8.1 | 52.9 | | FLAN-T5-xxl | 37.8 | 5.8 | 95.2 | 4.4 | 4.8 | 4.8 | 52.9 | | T5-LM-Adapt-xl | 32.4 | 7.0 | 38.1 | 11.1 | 17.5 | 12.9 | 29.4 | | T5-LM-Adapt-xxl | 40.5 | 5.8 | 47.6 | 7.8 | 15.9 | 16.1 | 41.2 | | GPT-Neo-1.3B | 40.5 | 7.0 | 42.9 | 16.7 | 6.3 | 11.3 | 41.2 | | GPT2-XL | 35.1 | 5.8 | 47.6 | 13.3 | 14.3 | 14.5 | 47.1 | | GPT-Neo-2.7B | 35.1 | 10.5 | 38.1 | 18.9 | 9.5 | 12.9 | 41.2 | | GPTJ-6B | 51.4 | 9.3 | 52.4 | 17.8 | 9.5 | 8.1 | 47.1 | | GPT-Neox-20B | 51.4 | 5.8 | 52.4 | 21.1 | 9.5 | 8.1 | 47.1 | | BLOOM | 51.4 | 10.5 | 66.7 | 20.0 | 9.5 | 12.9 | 58.8 | | BLOOM-7B1 | 43.2 | 5.8 | 57.1 | 20.0 | 15.9 | 9.7 | 47.1 | | BLOOM-3B | 35.1 | 9.3 | 52.4 | 21.1 | 9.5 | 14.5 | 35.3 | | BLOOM-1B7 | 32.4 | 10.5 | 47.6 | 22.2 | 15.9 | 9.7 | 35.3 | | BLOOM-1B1 | 27.0 | 11.6 | 47.6 | 22.2 | 12.7 | 16.1 | 23.5 | | OPT-175B | 56.8 | 7.0 | 66.7 | 20.0 | 11.1 | 9.7 | 47.1 | | OPT-66B | 54.1 | 5.8 | 66.7 | 20.0 | 12.7 | 9.7 | 47.1 | | OPT-30B | 48.6 | 7.0 | 61.9 | 18.9 | 9.5 | 9.7 | 52.9 | | OPT-13B | 51.4 | 5.8 | 61.9 | 17.8 | 7.9 | 9.7 | 58.8 | | OPT-6.7B | 51.4 | 4.7 | 47.6 | 15.6 | 12.7 | 12.9 | 58.8 | | OPT-2.7B | 45.9 | 4.7 | 47.6 | 18.9 | 12.7 | 11.3 | 41.2 | | OPT-1.3B | 43.2 | 5.8 | 52.4 | 17.8 | 12.7 | 9.7 | 41.2 | Model BART- BART- BLOOM- distil- distil- PEGASUS T5base large 560m BART PEGASUS large T0-3B 64.9 27.9 66.7 34.4 38.1 45.2 76.5 T0 64.9 18.6 81.0 22.2 22.2 32.3 82.4 FLAN-T5-xl 59.5 39.5 66.7 44.4 47.6 48.4 58.8 FLAN-T5-xxl 59.5 40.7 57.1 40.0 49.2 46.8 64.7 T5-LM-Adapt-xl 56.8 40.7 38.1 48.9 50.8 51.6 64.7 T5-LM-Adapt-xxl 59.5 41.9 42.9 43.3 47.6 51.6 58.8 GPT-Neo-1.3B 67.6 36.0 4.8 54.4 42.9 35.5 58.8 GPT2-XL 67.6 38.4 28.6 53.3 49.2 46.8 52.9 GPT-Neo-2.7B 64.9 37.2 9.5 56.7 46.0 43.5 58.8 GPTJ-6B 70.3 40.7 9.5 62.2 55.6 48.4 58.8 GPT-Neox-20B 73.0 31.4 19.0 55.6 46.0 45.2 58.8 BLOOM 67.6 45.3 14.3 44.4 41.3 40.3 70.6 BLOOM-7B1 62.2 40.7 9.5 53.3 42.9 40.3 64.7 BLOOM-3B 73.0 34.9 19.0 54.4 36.5 48.4 64.7 BLOOM-1B7 62.2 37.2 14.3 43.3 39.7 48.4 52.9 BLOOM-1B1 62.2 32.6 9.5 46.7 38.1 46.8 52.9 OPT-175B 67.6 40.7 9.5 54.4 49.2 38.7 70.6 OPT-66B 75.7 38.4 4.8 54.4 52.4 37.1 70.6 OPT-30B 67.6 43.0 14.3 52.2 46.0 38.7 58.8 OPT-13B 64.9 43.0 9.5 53.3 50.8 41.9 64.7 OPT-6.7B 73.0 38.4 4.8 58.9 52.4 37.1 52.9 OPT-2.7B 73.0 40.7 9.5 57.8 52.4 40.3 58.8 OPT-1.3B 64.9 43.0 4.8 52.2 44.4 43.5 47.1 | Model | BART- | BART- | BLOOM- | distil- | distil- | PEGASUS | T5- | |-----------------|---------|---------|----------|-----------|-----------|-----------|-------| | base | large | 560m | BART | PEGASUS | large | | | | T0-3B | 21.6 | 4.7 | 100.0 | 5.6 | 6.3 | 4.8 | 47.1 | | T0 | 48.6 | 9.3 | 100.0 | 10.0 | 6.3 | 9.7 | 64.7 | | FLAN-T5-xl | 27.0 | 7.0 | 100.0 | 4.4 | 6.3 | 11.3 | 52.9 | | FLAN-T5-xxl | 32.4 | 7.0 | 100.0 | 4.4 | 3.2 | 12.9 | 47.1 | | T5-LM-Adapt-xl | 32.4 | 12.8 | 95.2 | 15.6 | 14.3 | 11.3 | 47.1 | | T5-LM-Adapt-xxl | 32.4 | 10.5 | 90.5 | 11.1 | 11.1 | 11.3 | 58.8 | | GPT-Neo-1.3B | 37.8 | 9.3 | 85.7 | 23.3 | 6.3 | 11.3 | 
35.3 | | GPT2-XL | 32.4 | 8.1 | 85.7 | 16.7 | 9.5 | 14.5 | 52.9 | | GPT-Neo-2.7B | 37.8 | 9.3 | 85.7 | 23.3 | 7.9 | 11.3 | 47.1 | | GPTJ-6B | 35.1 | 7.0 | 95.2 | 22.2 | 11.1 | 8.1 | 52.9 | | GPT-Neox-20B | 51.4 | 10.5 | 95.2 | 26.7 | 9.5 | 9.7 | 58.8 | | BLOOM | 40.5 | 12.8 | 95.2 | 17.8 | 7.9 | 9.7 | 64.7 | | BLOOM-7B1 | 40.5 | 9.3 | 90.5 | 23.3 | 9.5 | 11.3 | 52.9 | | BLOOM-3B | 37.8 | 10.5 | 90.5 | 23.3 | 11.1 | 12.9 | 41.2 | | BLOOM-1B7 | 40.5 | 12.8 | 85.7 | 25.6 | 11.1 | 12.9 | 41.2 | | BLOOM-1B1 | 32.4 | 16.3 | 81.0 | 23.3 | 9.5 | 12.9 | 47.1 | | OPT-175B | 51.4 | 9.3 | 95.2 | 24.4 | 6.3 | 12.9 | 64.7 | | OPT-66B | 43.2 | 11.6 | 95.2 | 21.1 | 7.9 | 11.3 | 64.7 | | OPT-30B | 45.9 | 10.5 | 95.2 | 23.3 | 4.8 | 12.9 | 64.7 | | OPT-13B | 48.6 | 9.3 | 95.2 | 20.0 | 6.3 | 9.7 | 64.7 | | OPT-6.7B | 45.9 | 9.3 | 95.2 | 23.3 | 9.5 | 11.3 | 64.7 | | OPT-2.7B | 37.8 | 12.8 | 95.2 | 20.0 | 7.9 | 11.3 | 58.8 | | OPT-1.3B | 37.8 | 12.8 | 95.2 | 20.0 | 4.8 | 9.7 | 47.1 | Model B BL HG L MS MI NS OD O PB PT R RE T TS T0-3B 1.4 3.9 1.3 2.1 5.1 4.5 23.7 39.3 8.7 3.4 0.0 23.2 2.6 4.7 3.7 T0 2.7 3.9 0.0 1.1 2.5 6.1 10.5 21.4 4.3 1.1 1.4 8.7 3.9 4.7 7.4 FLAN-T5-xl 1.4 3.9 1.3 0.0 3.8 3.0 25.0 28.6 0.0 2.3 1.4 23.2 5.3 6.2 5.6 FLAN-T5-xxl 2.7 2.6 1.3 1.1 2.5 3.0 14.5 35.7 0.0 0.0 1.4 15.9 6.6 4.7 1.9 T5-LM-Adapt-xl 5.4 2.6 0.0 0.0 0.0 3.0 18.4 35.7 0.0 0.0 0.0 20.3 2.6 3.1 1.9 T5-LM-Adapt-xxl 5.4 5.2 2.6 1.1 5.1 6.1 14.5 28.6 0.0 2.3 1.4 17.4 5.3 6.2 1.9 GPT-Neo-1.3B 1.4 1.3 0.0 1.1 3.8 4.5 35.5 32.1 2.2 1.1 2.7 20.3 2.6 3.1 0.0 GPT2-XL 1.4 2.6 2.6 1.1 2.5 6.1 44.7 14.3 0.0 2.3 2.7 40.6 2.6 0.0 1.9 GPT-Neo-2.7B 4.1 3.9 3.8 1.1 6.3 3.0 31.6 28.6 2.2 2.3 2.7 24.6 6.6 6.2 3.7 GPTJ-6B 4.1 5.2 5.1 2.1 5.1 6.1 25.0 14.3 2.2 3.4 6.8 20.3 6.6 6.2 3.7 GPT-Neox-20B 5.4 6.5 6.4 2.1 8.9 7.6 23.7 14.3 4.3 5.7 6.8 23.2 7.9 6.2 3.7 BLOOM 5.4 5.2 7.7 5.3 11.4 9.1 28.9 17.9 4.3 6.8 8.2 26.1 14.5 10.9 3.7 BLOOM-7B1 4.1 5.2 6.4 5.3 5.1 9.1 27.6 25.0 6.5 5.7 8.2 24.6 7.9 10.9 5.6 BLOOM-3B 5.4 5.2 3.8 3.2 3.8 4.5 28.9 28.6 2.2 4.5 4.1 20.3 5.3 7.8 3.7 BLOOM-1B7 2.7 2.6 2.6 1.1 3.8 3.0 27.6 32.1 2.2 2.3 2.7 23.2 5.3 4.7 1.9 BLOOM-1B1 2.7 2.6 1.3 0.0 5.1 4.5 31.6 32.1 2.2 1.1 4.1 27.5 7.9 4.7 0.0 OPT-175B 10.8 11.7 11.5 5.3 10.1 10.6 30.3 14.3 4.3 8.0 9.6 20.3 13.2 10.9 7.4 OPT-66B 9.5 9.1 9.0 3.2 8.9 6.1 19.7 10.7 4.3 5.7 8.2 15.9 9.2 7.8 5.6 OPT-30B 14.9 10.4 9.0 4.2 10.1 10.6 25.0 10.7 6.5 9.1 9.6 17.4 11.8 9.4 7.4 OPT-13B 6.8 6.5 5.1 2.1 6.3 7.6 23.7 14.3 2.2 4.5 8.2 20.3 7.9 7.8 3.7 OPT-6.7B 8.1 7.8 9.0 4.2 7.6 9.1 25.0 17.9 6.5 6.8 8.2 21.7 10.5 9.4 5.6 OPT-2.7B 6.8 5.2 5.1 1.1 3.8 6.1 26.3 17.9 4.3 4.5 5.5 20.3 6.6 7.8 1.9 OPT-1.3B 9.5 5.2 3.8 1.1 3.8 6.1 23.7 17.9 2.2 2.3 4.1 15.9 5.3 6.2 1.9 Table 29: The performance of the models on CNN/DM with factually consistent model-generated alternative-choices using avg. LL as the scoring function. 
The models are BanditSumm (B), BERT_LSTM_PN_RL (BL), Heter-Graph (HG), Lead3 (L), MatchSumm (MS), MI-unsup (MI), NeuSumm (NS), Oracle (discourse) (OD), Oracle (O), Pacsum (bert) (PB), Pacsum (tfidf) (PT), Refresh (R), RNN_Ext_RL (RE), Textrank (T), Textrank (st) (TS) | Model | B | BL | HG | L | MS | MI | NS | OD | O | PB | PT | R | RE | T | TS | |-----------------|-----|------|------|-----|------|------|------|------|-----|------|------|-----|------|-----|------| | T0-3B | 1.4 | 0.0 | 0.0 | 1.1 | 1.3 | 1.5 | 13.2 | 14.3 | 2.2 | 0.0 | 1.4 | 8.7 | 5.3 | 3.1 | 3.7 | | T0 | 1.4 | 0.0 | 0.0 | 1.1 | 0.0 | 3.0 | 3.9 | 10.7 | 4.3 | 0.0 | 1.4 | 1.4 | 6.6 | 3.1 | 3.7 | | FLAN-T5-xl | 1.4 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 5.3 | 0.0 | 4.3 | 0.0 | 0.0 | 5.8 | 0.0 | 4.7 | 3.7 | | FLAN-T5-xxl | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.5 | 1.3 | 3.6 | 2.2 | 0.0 | 0.0 | 1.4 | 0.0 | 3.1 | 3.7 | | T5-LM-Adapt-xl | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 5.3 | 7.1 | 2.2 | 0.0 | 1.4 | 5.8 | 2.6 | 3.1 | 1.9 | | T5-LM-Adapt-xxl | 1.4 | 0.0 | 0.0 | 0.0 | 0.0 | 1.5 | 1.3 | 3.6 | 2.2 | 0.0 | 1.4 | 1.4 | 5.3 | 3.1 | 0.0 | | GPT-Neo-1.3B | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 2.6 | 0.0 | 2.2 | 0.0 | 0.0 | 4.3 | 0.0 | 1.6 | 0.0 | | GPT2-XL | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.5 | 3.9 | 0.0 | 2.2 | 0.0 | 0.0 | 4.3 | 0.0 | 1.6 | 0.0 | | GPT-Neo-2.7B | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 3.9 | 0.0 | 2.2 | 0.0 | 0.0 | 4.3 | 0.0 | 1.6 | 0.0 | | GPTJ-6B | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 2.6 | 0.0 | 2.2 | 0.0 | 0.0 | 1.4 | 0.0 | 1.6 | 0.0 | | GPT-Neox-20B | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 2.2 | 0.0 | 0.0 | 0.0 | 0.0 | 1.6 | 0.0 | | BLOOM | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.3 | 0.0 | 2.2 | 0.0 | 0.0 | 0.0 | 0.0 | 1.6 | 0.0 | | BLOOM-7B1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.5 | 3.9 | 0.0 | 2.2 | 0.0 | 0.0 | 4.3 | 0.0 | 1.6 | 1.9 | | BLOOM-3B | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.5 | 5.3 | 0.0 | 2.2 | 0.0 | 0.0 | 5.8 | 0.0 | 1.6 | 1.9 | | BLOOM-1B7 | 1.4 | 0.0 | 0.0 | 0.0 | 0.0 | 1.5 | 2.6 | 3.6 | 2.2 | 0.0 | 0.0 | 4.3 | 0.0 | 0.0 | 0.0 | | BLOOM-1B1 | 2.7 | 1.3 | 0.0 | 1.1 | 0.0 | 1.5 | 2.6 | 0.0 | 2.2 | 0.0 | 0.0 | 5.8 | 0.0 | 1.6 | 1.9 | | OPT-175B | 1.4 | 0.0 | 0.0 | 0.0 | 0.0 | 1.5 | 1.3 | 0.0 | 2.2 | 0.0 | 0.0 | 1.4 | 0.0 | 1.6 | 0.0 | | OPT-66B | 1.4 | 0.0 | 0.0 | 0.0 | 0.0 | 1.5 | 3.9 | 0.0 | 2.2 | 0.0 | 0.0 | 2.9 | 0.0 | 1.6 | 0.0 | | OPT-30B | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.5 | 3.9 | 0.0 | 2.2 | 0.0 | 0.0 | 2.9 | 0.0 | 1.6 | 0.0 | | OPT-13B | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.5 | 5.3 | 0.0 | 2.2 | 0.0 | 0.0 | 2.9 | 0.0 | 1.6 | 0.0 | | OPT-6.7B | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.5 | 6.6 | 0.0 | 2.2 | 0.0 | 0.0 | 1.4 | 0.0 | 1.6 | 0.0 | | OPT-2.7B | 1.4 | 0.0 | 0.0 | 0.0 | 0.0 | 1.5 | 6.6 | 0.0 | 2.2 | 0.0 | 0.0 | 4.3 | 0.0 | 1.6 | 0.0 | | OPT-1.3B | 1.4 | 0.0 | 0.0 | 0.0 | 0.0 | 1.5 | 6.6 | 0.0 | 2.2 | 0.0 | 0.0 | 4.3 | 0.0 | 1.6 | 0.0 | Model B BL HG L MS MI NS OD O PB PT R RE T TS T0-3B 1.4 1.3 0.0 0.0 0.0 0.0 1.3 21.4 6.5 0.0 0.0 1.4 0.0 4.7 1.9 T0 1.4 0.0 0.0 0.0 0.0 0.0 0.0 14.3 6.5 0.0 0.0 0.0 0.0 4.7 1.9 FLAN-T5-xl 0.0 1.3 0.0 0.0 0.0 0.0 1.3 10.7 4.3 0.0 0.0 0.0 0.0 4.7 1.9 FLAN-T5-xxl 1.4 0.0 0.0 0.0 0.0 0.0 0.0 14.3 4.3 0.0 0.0 0.0 0.0 3.1 1.9 T5-LM-Adapt-xl 0.0 0.0 0.0 0.0 0.0 0.0 0.0 17.9 4.3 0.0 0.0 0.0 0.0 4.7 1.9 T5-LM-Adapt-xxl 0.0 0.0 0.0 0.0 0.0 0.0 0.0 14.3 4.3 0.0 0.0 0.0 0.0 3.1 1.9 GPT-Neo-1.3B 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.6 4.3 0.0 0.0 0.0 0.0 1.6 1.9 GPT2-XL 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.6 2.2 0.0 0.0 0.0 0.0 1.6 1.9 GPT-Neo-2.7B 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.6 4.3 0.0 0.0 0.0 0.0 0.0 1.9 GPTJ-6B 
0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.6 4.3 0.0 0.0 0.0 0.0 1.6 1.9 GPT-Neox-20B 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.6 4.3 0.0 0.0 0.0 0.0 0.0 1.9 BLOOM 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.6 4.3 0.0 0.0 0.0 0.0 1.6 1.9 BLOOM-7B1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.6 4.3 0.0 0.0 0.0 0.0 1.6 1.9 BLOOM-3B 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.6 4.3 0.0 0.0 0.0 0.0 1.6 1.9 BLOOM-1B7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.6 4.3 0.0 0.0 0.0 0.0 1.6 1.9 BLOOM-1B1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.6 4.3 0.0 0.0 0.0 0.0 1.6 1.9 OPT-175B 1.4 0.0 0.0 0.0 0.0 0.0 0.0 3.6 4.3 0.0 0.0 0.0 0.0 3.1 1.9 OPT-66B 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.6 4.3 0.0 0.0 0.0 0.0 1.6 1.9 OPT-30B 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.6 4.3 0.0 0.0 0.0 0.0 3.1 1.9 OPT-13B 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.6 4.3 0.0 0.0 0.0 0.0 3.1 1.9 OPT-6.7B 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.6 4.3 0.0 0.0 0.0 0.0 1.6 1.9 OPT-2.7B 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.6 4.3 0.0 0.0 0.0 0.0 1.6 1.9 OPT-1.3B 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.6 4.3 0.0 0.0 0.0 0.0 1.6 1.9 Model B BL HG L MS MI NS OD O PB PT R RE T TS T0-3B 31.1 24.7 42.3 25.3 44.3 47.0 98.7 75.0 21.7 30.7 35.6 88.4 64.5 31.2 29.6 T0 35.1 20.8 26.9 13.7 26.6 45.5 85.5 57.1 23.9 23.9 28.8 76.8 47.4 32.8 35.2 FLAN-T5-xl 31.1 26.0 37.2 21.1 32.9 48.5 94.7 53.6 19.6 30.7 28.8 91.3 56.6 35.9 33.3 FLAN-T5-xxl 20.3 11.7 16.7 8.4 15.2 36.4 76.3 46.4 13.0 12.5 17.8 55.1 38.2 15.6 20.4 T5-LM-Adapt-xl 52.7 41.6 28.2 12.6 22.8 60.6 94.7 57.1 17.4 21.6 23.3 82.6 48.7 20.3 22.2 T5-LM-Adapt-xxl 47.3 36.4 14.1 6.3 13.9 57.6 82.9 32.1 10.9 11.4 15.1 68.1 40.8 15.6 22.2 GPT-Neo-1.3B 31.1 24.7 9.0 2.1 8.9 36.4 85.5 25.0 2.2 9.1 17.8 63.8 19.7 14.1 16.7 GPT2-XL 35.1 27.3 7.7 7.4 6.3 50.0 89.5 25.0 0.0 11.4 16.4 69.6 27.6 12.5 16.7 GPT-Neo-2.7B 35.1 28.6 5.1 1.1 7.6 43.9 85.5 21.4 0.0 8.0 15.1 55.1 26.3 10.9 16.7 GPTJ-6B 27.0 23.4 9.0 1.1 8.9 39.4 73.7 17.9 2.2 6.8 12.3 44.9 21.1 12.5 14.8 GPT-Neox-20B 28.4 24.7 10.3 4.2 10.1 31.8 72.4 21.4 4.3 9.1 13.7 44.9 27.6 18.8 16.7 BLOOM 18.9 14.3 7.7 0.0 5.1 27.3 57.9 21.4 0.0 4.5 12.3 36.2 25.0 9.4 14.8 BLOOM-7B1 31.1 22.1 6.4 3.2 6.3 39.4 80.3 21.4 0.0 9.1 13.7 46.4 23.7 12.5 14.8 BLOOM-3B 41.9 31.2 9.0 3.2 10.1 45.5 80.3 25.0 0.0 8.0 13.7 60.9 23.7 10.9 14.8 BLOOM-1B7 36.5 28.6 7.7 2.1 7.6 43.9 82.9 28.6 2.2 4.5 15.1 58.0 21.1 6.2 9.3 BLOOM-1B1 36.5 28.6 7.7 5.3 10.1 47.0 84.2 25.0 2.2 9.1 16.4 65.2 26.3 14.1 14.8 OPT-175B 45.9 33.8 11.5 1.1 8.9 48.5 78.9 25.0 2.2 10.2 12.3 55.1 25.0 14.1 16.7 OPT-66B 44.6 33.8 10.3 4.2 6.3 48.5 82.9 21.4 4.3 12.5 16.4 49.3 28.9 17.2 16.7 OPT-30B 44.6 31.2 7.7 3.2 6.3 47.0 81.6 21.4 2.2 9.1 15.1 56.5 22.4 14.1 16.7 OPT-13B 47.3 33.8 10.3 2.1 8.9 48.5 86.8 25.0 8.7 11.4 16.4 59.4 28.9 17.2 18.5 OPT-6.7B 44.6 33.8 11.5 4.2 8.9 54.5 89.5 28.6 6.5 12.5 19.2 62.3 27.6 20.3 20.4 OPT-2.7B 45.9 36.4 14.1 3.2 8.9 50.0 86.8 21.4 6.5 14.8 19.2 63.8 31.6 17.2 20.4 OPT-1.3B 45.9 40.3 10.3 3.2 8.9 50.0 85.5 17.9 2.2 12.5 16.4 63.8 21.1 15.6 18.5 Model BART- BART- BLOOM- distil- distil- PEGASUS T5base large 560m BART PEGASUS large T0-3B 61.1 37.9 96.0 35.1 38.7 30.6 94.0 T0 55.7 19.6 91.0 20.2 19.7 15.1 92.5 FLAN-T5-xl 64.1 40.8 98.7 38.3 40.7 34.5 92.8 FLAN-T5-xxl 67.8 47.6 99.0 42.9 44.6 41.8 93.2 T5-LM-Adapt-xl 66.5 60.9 90.8 57.1 61.1 53.4 86.3 T5-LM-Adapt-xxl 70.6 61.6 95.4 56.3 59.0 53.0 87.0 GPT-Neo-1.3B 71.3 67.9 79.9 72.4 64.8 66.2 80.3 GPT2-XL 67.8 63.5 84.7 64.4 61.6 60.3 78.9 GPT-Neo-2.7B 71.9 65.9 87.0 67.3 65.4 64.8 81.2 GPTJ-6B 78.6 69.3 91.8 71.5 64.5 64.8 84.1 GPT-Neox-20B 76.5 64.7 89.3 70.5 64.1 61.9 83.6 BLOOM 72.1 65.0 92.7 65.1 62.9 59.6 85.1 BLOOM-7B1 71.5 64.7 86.4 66.8 63.6 63.7 83.0 
BLOOM-3B 70.8 68.8 85.7 68.5 65.0 66.2 80.7 BLOOM-1B7 68.3 67.1 82.6 68.5 65.0 61.6 78.3 BLOOM-1B1 66.5 63.5 80.7 66.1 65.4 63.2 73.9 OPT-175B 78.8 66.4 91.0 67.8 65.2 63.2 89.4 OPT-66B 76.7 66.7 88.5 67.6 64.5 61.6 88.0 OPT-30B 78.4 65.0 89.3 68.5 63.2 61.0 87.2 OPT-13B 76.5 63.0 89.1 65.4 64.1 61.2 86.5 OPT-6.7B 73.9 60.6 86.2 65.1 63.6 60.0 85.9 OPT-2.7B 72.1 62.8 84.9 67.1 63.4 62.1 83.2 OPT-1.3B 71.3 63.3 81.6 62.7 61.6 62.8 81.2 | Model | BART- | BART- | BLOOM- | distil- | distil- | PEGASUS | T5- | |-----------------|---------|---------|----------|-----------|-----------|-----------|-------| | base | large | 560m | BART | PEGASUS | large | | | | T0-3B | 19.7 | 1.2 | 87.4 | 1.2 | 2.1 | 3.0 | 76.2 | | T0 | 33.9 | 5.3 | 80.3 | 5.4 | 5.7 | 3.2 | 84.1 | | FLAN-T5-xl | 19.2 | 2.4 | 85.7 | 4.9 | 3.4 | 3.4 | 74.5 | | FLAN-T5-xxl | 26.3 | 5.3 | 86.8 | 5.6 | 5.5 | 3.7 | 78.5 | | T5-LM-Adapt-xl | 19.7 | 9.7 | 40.9 | 12.4 | 11.7 | 15.8 | 51.1 | | T5-LM-Adapt-xxl | 23.8 | 8.9 | 51.2 | 12.0 | 10.1 | 9.6 | 61.3 | | GPT-Neo-1.3B | 26.3 | 10.9 | 31.4 | 21.2 | 14.2 | 13.7 | 50.5 | | GPT2-XL | 28.3 | 9.7 | 39.6 | 16.1 | 13.3 | 11.2 | 57.8 | | GPT-Neo-2.7B | 32.0 | 10.6 | 36.5 | 20.5 | 12.8 | 12.1 | 58.0 | | GPTJ-6B | 35.2 | 7.0 | 43.2 | 18.5 | 9.8 | 10.5 | 66.7 | | GPT-Neox-20B | 39.1 | 8.5 | 46.3 | 20.0 | 9.6 | 10.5 | 71.4 | | BLOOM | 42.8 | 8.5 | 50.9 | 20.7 | 9.8 | 10.7 | 72.5 | | BLOOM-7B1 | 32.6 | 10.9 | 43.0 | 20.7 | 13.3 | 13.9 | 60.9 | | BLOOM-3B | 30.5 | 13.8 | 39.8 | 19.8 | 18.3 | 18.7 | 51.3 | | BLOOM-1B7 | 27.0 | 14.7 | 36.9 | 22.9 | 19.2 | 21.5 | 44.1 | | BLOOM-1B1 | 24.8 | 17.1 | 35.2 | 24.9 | 21.7 | 24.7 | 40.6 | | OPT-175B | 48.8 | 8.7 | 56.0 | 20.7 | 9.8 | 7.8 | 78.9 | | OPT-66B | 44.3 | 8.2 | 50.7 | 19.8 | 9.2 | 7.3 | 77.6 | | OPT-30B | 45.6 | 7.7 | 50.7 | 20.7 | 9.6 | 8.4 | 76.6 | | OPT-13B | 41.0 | 8.7 | 47.8 | 18.8 | 9.4 | 8.7 | 73.7 | | OPT-6.7B | 37.1 | 8.0 | 43.4 | 17.8 | 8.2 | 8.7 | 69.6 | | OPT-2.7B | 33.7 | 8.7 | 39.6 | 21.0 | 10.3 | 10.5 | 67.7 | | OPT-1.3B | 29.8 | 8.5 | 37.7 | 17.6 | 11.2 | 10.7 | 62.3 | | Model | BART- | BART- | BLOOM- | distil- | distil- | PEGASUS | T5- | |-----------------|---------|---------|----------|-----------|-----------|-----------|-------| | base | large | 560m | BART | PEGASUS | large | | | | T0-3B | 48.8 | 26.1 | 83.2 | 27.3 | 29.7 | 27.4 | 91.1 | | T0 | 53.8 | 16.4 | 91.2 | 19.3 | 18.1 | 16.0 | 91.9 | | FLAN-T5-xl | 46.2 | 25.8 | 82.6 | 30.2 | 31.1 | 29.0 | 88.6 | | FLAN-T5-xxl | 54.6 | 30.9 | 85.7 | 34.4 | 36.6 | 33.6 | 89.9 | | T5-LM-Adapt-xl | 59.2 | 45.2 | 42.6 | 48.3 | 52.6 | 48.9 | 82.8 | | T5-LM-Adapt-xxl | 60.5 | 42.5 | 54.7 | 48.3 | 48.7 | 43.6 | 84.5 | | GPT-Neo-1.3B | 64.8 | 56.8 | 21.0 | 65.9 | 59.5 | 58.4 | 75.4 | | GPT2-XL | 61.8 | 49.0 | 33.3 | 57.1 | 53.8 | 54.1 | 74.9 | | GPT-Neo-2.7B | 63.9 | 51.7 | 23.9 | 60.2 | 55.1 | 55.7 | 76.2 | | GPTJ-6B | 70.0 | 49.0 | 28.9 | 66.6 | 54.7 | 54.1 | 80.7 | | GPT-Neox-20B | 68.5 | 51.0 | 29.4 | 65.6 | 55.8 | 53.4 | 82.6 | | BLOOM | 65.2 | 51.0 | 45.1 | 58.5 | 55.8 | 54.3 | 83.0 | | BLOOM-7B1 | 64.8 | 53.4 | 30.6 | 61.2 | 56.8 | 56.6 | 79.1 | | BLOOM-3B | 67.6 | 56.0 | 34.0 | 66.1 | 58.1 | 60.0 | 78.1 | | BLOOM-1B7 | 62.9 | 53.6 | 25.2 | 62.9 | 59.3 | 59.1 | 74.5 | | BLOOM-1B1 | 59.2 | 50.2 | 29.4 | 61.7 | 55.8 | 57.3 | 71.2 | | OPT-175B | 71.9 | 50.0 | 39.8 | 61.5 | 55.8 | 53.7 | 85.7 | | OPT-66B | 68.0 | 53.6 | 28.5 | 58.8 | 54.0 | 54.3 | 84.3 | | OPT-30B | 69.5 | 48.3 | 33.8 | 59.8 | 53.3 | 54.1 | 83.2 | | OPT-13B | 66.7 | 48.8 | 31.2 | 58.0 | 54.5 | 53.4 | 82.2 | | OPT-6.7B | 64.8 | 
47.8 | 26.2 | 59.8 | 51.0 | 55.9 | 82.4 | | OPT-2.7B | 63.5 | 50.7 | 24.5 | 59.3 | 53.1 | 55.3 | 81.0 | | OPT-1.3B | 63.5 | 50.0 | 22.6 | 57.1 | 51.9 | 55.7 | 77.0 | Model BART- BART- BLOOM- distil- distil- PEGASUS T5base large 560m BART PEGASUS large T0-3B 28.5 4.8 98.5 4.9 6.2 5.9 78.3 T0 42.8 10.4 98.7 8.3 7.3 5.9 84.9 FLAN-T5-xl 30.5 8.9 98.7 6.8 7.8 8.7 74.5 FLAN-T5-xxl 40.0 12.1 99.2 10.2 11.2 9.1 79.1 T5-LM-Adapt-xl 39.1 29.7 97.3 26.3 26.1 27.6 58.2 T5-LM-Adapt-xxl 42.1 24.2 97.7 23.2 20.1 21.2 65.8 GPT-Neo-1.3B 44.3 31.2 96.2 36.3 28.6 27.6 56.7 GPT2-XL 45.1 28.0 96.2 31.5 24.7 24.0 61.7 GPT-Neo-2.7B 48.2 28.3 96.0 33.9 25.4 26.5 61.3 GPTJ-6B 52.9 25.8 97.9 33.2 21.1 21.7 68.1 GPT-Neox-20B 54.6 24.6 97.9 33.9 20.4 20.1 72.7 BLOOM 54.0 26.1 98.1 32.4 23.6 22.1 73.7 BLOOM-7B1 49.2 30.2 97.5 33.4 28.8 29.7 62.1 BLOOM-3B 44.3 33.8 96.4 34.6 31.6 34.7 57.8 BLOOM-1B7 45.1 34.8 96.0 37.8 32.7 34.7 52.2 BLOOM-1B1 44.1 37.7 94.8 39.5 34.6 37.4 51.3 OPT-175B 59.0 23.4 98.3 30.5 17.4 16.4 80.5 OPT-66B 57.0 24.2 98.3 30.2 19.2 14.4 77.6 OPT-30B 55.7 22.9 97.9 30.2 18.3 16.0 77.2 OPT-13B 51.8 23.2 98.1 28.8 18.5 17.8 75.4 OPT-6.7B 52.7 23.7 97.1 29.3 18.1 16.7 71.4 OPT-2.7B 49.9 26.1 97.3 30.2 19.7 19.9 67.3 OPT-1.3B 45.8 26.6 97.1 30.5 22.4 23.1 62.5 Table 35: The performance of the models on XSum with FIB alternative-choices using LL as the scoring function. Model B BL HG L MS MI NS OD O PB PT R RE T TS T0-3B 11.5 0.0 9.1 20.0 4.8 11.8 20.8 51.4 13.0 0.0 0.0 25.8 4.2 13.9 15.2 T0 7.7 0.0 4.5 0.0 4.8 8.8 12.5 37.5 9.3 0.0 0.0 9.7 0.0 8.3 8.7 FLAN-T5-xl 11.5 0.0 9.1 0.0 4.8 8.8 25.0 37.5 13.0 0.0 3.7 25.8 8.3 13.9 17.4 FLAN-T5-xxl 11.5 0.0 9.1 0.0 4.8 8.8 16.7 37.5 7.4 0.0 0.0 19.4 8.3 8.3 17.4 T5-LM-Adapt-xl 7.7 0.0 4.5 0.0 4.8 8.8 20.8 37.5 9.3 0.0 0.0 25.8 0.0 5.6 8.7 T5-LM-Adapt-xxl 11.5 0.0 9.1 0.0 4.8 14.7 20.8 30.6 7.4 0.0 0.0 9.7 8.3 8.3 10.9 GPT-Neo-1.3B 0.0 4.3 4.5 0.0 4.8 2.9 29.2 25.0 3.7 0.0 0.0 16.1 4.2 2.8 4.3 GPT2-XL 3.8 0.0 0.0 0.0 4.8 2.9 33.3 27.8 3.7 0.0 0.0 25.8 4.2 2.8 4.3 GPT-Neo-2.7B 7.7 0.0 9.1 0.0 4.8 5.9 29.2 23.6 3.7 0.0 0.0 16.1 8.3 5.6 8.7 GPTJ-6B 0.0 4.3 0.0 0.0 4.8 2.9 29.2 22.2 3.7 0.0 0.0 9.7 4.2 5.6 6.5 GPT-Neox-20B 11.5 0.0 9.1 20.0 9.5 5.9 20.8 23.6 5.6 0.0 0.0 16.1 8.3 5.6 8.7 BLOOM 11.5 4.3 9.1 0.0 9.5 5.9 16.7 19.4 5.6 8.3 0.0 12.9 8.3 5.6 4.3 BLOOM-7B1 7.7 8.7 0.0 0.0 4.8 2.9 20.8 25.0 9.3 0.0 0.0 12.9 8.3 5.6 10.9 BLOOM-3B 3.8 4.3 0.0 0.0 4.8 2.9 16.7 20.8 5.6 0.0 0.0 9.7 4.2 5.6 8.7 BLOOM-1B7 3.8 4.3 4.5 0.0 4.8 2.9 20.8 25.0 7.4 0.0 0.0 12.9 8.3 2.8 6.5 BLOOM-1B1 3.8 4.3 9.1 20.0 4.8 5.9 25.0 23.6 7.4 8.3 3.7 16.1 12.5 5.6 8.7 OPT-175B 7.7 4.3 9.1 40.0 9.5 8.8 12.5 23.6 5.6 8.3 0.0 9.7 8.3 8.3 10.9 OPT-66B 7.7 4.3 9.1 0.0 9.5 5.9 12.5 20.8 7.4 0.0 0.0 6.5 8.3 8.3 8.7 OPT-30B 7.7 4.3 9.1 0.0 4.8 5.9 16.7 19.4 5.6 0.0 0.0 9.7 8.3 8.3 8.7 OPT-13B 7.7 0.0 9.1 0.0 9.5 5.9 16.7 26.4 3.7 0.0 0.0 12.9 8.3 5.6 6.5 OPT-6.7B 7.7 4.3 4.5 20.0 4.8 5.9 16.7 23.6 5.6 8.3 0.0 6.5 12.5 5.6 10.9 OPT-2.7B 7.7 4.3 4.5 0.0 4.8 8.8 16.7 25.0 3.7 0.0 0.0 12.9 8.3 5.6 8.7 OPT-1.3B 3.8 4.3 4.5 0.0 4.8 5.9 20.8 19.4 5.6 0.0 0.0 12.9 8.3 2.8 4.3 Model B BL HG L MS MI NS OD O PB PT R RE T TS T0-3B 0.0 0.0 0.0 0.0 0.0 5.9 12.5 33.3 7.4 0.0 3.7 25.8 0.0 8.3 17.4 T0 0.0 0.0 0.0 0.0 0.0 2.9 8.3 23.6 9.3 0.0 3.7 12.9 0.0 5.6 13.0 FLAN-T5-xl 0.0 0.0 0.0 0.0 0.0 8.8 12.5 25.0 5.6 0.0 0.0 12.9 0.0 8.3 15.2 FLAN-T5-xxl 0.0 0.0 0.0 0.0 0.0 2.9 4.2 18.1 3.7 0.0 0.0 6.5 0.0 5.6 15.2 T5-LM-Adapt-xl 0.0 0.0 0.0 0.0 0.0 2.9 4.2 18.1 7.4 0.0 3.7 9.7 0.0 2.8 13.0 T5-LM-Adapt-xxl 0.0 0.0 0.0 0.0 
[Per-category results (categories B, BL, HG, L, MS, MI, NS, OD, O, PB, PT, R, RE, T, TS) for the T0, FLAN-T5, T5-LM-Adapt, GPT2-XL, GPT-Neo, GPT-J, GPT-NeoX, BLOOM, and OPT model families, followed by results for each scoring function (Avg. PMI, Avg. LL, PMI, LL) applied to BART-base, BART-large, BLOOM-560m, distil-BART, distil-PEGASUS, PEGASUS, and T5-large.]

Table 40: The performance of the models on XSum using the same models to generate the factually inconsistent summary.

## ACL 2023 Responsible NLP Checklist

A. For every submission:

✓ A1. Did you describe the limitations of your work?
Section 7

A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.

✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract; Section 1

✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.

## B ✓ **Did You Use Or Create Scientific Artifacts?**
Section 3

✓ B1. Did you cite the creators of artifacts you used?
Section 3

✓ B2. Did you discuss the license or terms for use and/or distribution of any artifacts?
Section 3

✓ B3.
Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 3 ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The articles were from another dataset, which we think was reasonably cleaned. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3.1 ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4.1 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4.1 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4.1 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 3 ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Annotators were authors. ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Annotators were authors. ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Annotators were authors. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Annotators were authors. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Annotators were authors.
tran-etal-2023-text
Text Generation Model Enhanced with Semantic Information in Aspect Category Sentiment Analysis
https://aclanthology.org/2023.findings-acl.323
Aspect Category Sentiment Analysis (ACSA) is one of the main subtasks of sentiment analysis, which aims at predicting the polarity of a given aspect category. Recently, generative methods have emerged as an efficient way to utilize a pre-trained language model for solving ACSA. However, those methods fail to model the relations between target words and opinion words in a sentence that includes multiple aspects. To tackle this problem, this paper proposes a method to incorporate Abstract Meaning Representation (AMR), which describes the semantic representation of a sentence as a directed graph, into a text generation model. Furthermore, two regularizers are designed to guide the allocation of cross attention weights over AMR graphs. One is the identical regularizer, which constrains the attention weights of aligned nodes; the other is the entropy regularizer, which helps the decoder generate tokens by concentrating on only a few related nodes in the AMR graph. Experimental results on three datasets show that the proposed method outperforms state-of-the-art methods, proving the effectiveness of our model.
# Text Generation Model Enhanced With Semantic Information In Aspect Category Sentiment Analysis Tu Dinh Tran Kiyoaki Shirai Natthawut Kertkeidkachorn Japan Advanced Institute of Science and Technology 1-1 Asahidai, Nomi, Ishikawa 923-1292, Japan {s2110422, kshirai, natt}@jaist.ac.jp ## Abstract Aspect Category Sentiment Analysis (ACSA) is one of the main subtasks of sentiment analysis, which aims at predicting polarity over a given aspect category. Recently, generative methods have emerged as an efficient way to use a pre-trained language model for ACSA. However, those methods fail to model relations between target words and opinion words in a sentence including multiple aspects. To tackle this problem, this paper proposes a method to incorporate Abstract Meaning Representation (AMR), which describes the semantic representation of a sentence as a directed graph, into a text generation model. Furthermore, two regularizers are designed to guide the allocation of cross attention weights over AMR graphs. One is the identical regularizer, which constrains the attention weights of aligned nodes, the other is the entropy regularizer, which helps the decoder generate tokens by only giving a high degree of consideration to a few related nodes in the AMR graph. Experimental results on three datasets show that the proposed method outperforms state-of-the-art methods, proving the effectiveness of our model. ## 1 Introduction Aspect based sentiment analysis is an important task, which analyzes sentiments regarding an aspect of a product or a service. This task includes many subtasks, such as Aspect Category Detection (ACD) and Aspect Category Sentiment Analysis (ACSA). ACD is the task of detecting aspect categories while ACSA concentrates on predicting the polarity of given aspect categories. This study focuses on ACSA only. Figure 1 shows an example of ACSA task, where *negative* and *positive* are the polarities of the two provided categories *service* and *food*. The conventional approaches carry out ACSA as a classification task. Wang et al. (2016) and Cheng et al. (2017) use an attention mechanism ![0_image_0.png](0_image_0.png) for discovering aspect-related words. To achieve better representations, pre-trained language models such as BERT (Devlin et al., 2019) are used (Sun et al., 2019; Jiang et al., 2019). Although achieving competitive results, the fine-tuning of a pre-trained language model for ACSA suffers from two drawbacks: the difference between fine-tuning and pre-training tasks and the gap between newly initialized classification layers and the pre-trained model. Such inconsistency is often harmful to the training of an outstanding classifier for ACSA. To solve the above problems, Liu et al. (2021) propose to transform the sentiment classification task into text generation, which better leverages the power of pre-trained language models following the seq2seq framework like BART (Lewis et al., 2020). However, the naive text generation method cannot fully capture relations between opinion words and target words in sentences containing multiple aspects. Abstract Meaning Representation (AMR, Banarescu et al., 2013), which is a semantic representation of a sentence in the form of rooted, labeled, directed and acyclic graphs, can model the relations between the target words and associated opinion words. For example, in Figure 1, the relation be5256 tween "*staff* " and "*rude*" is captured by a directed edge between two nodes "*staff* " and "*rude-01*". 
In addition, the AMR graph also provides the high level semantic information, which means words with the same meaning, like "but", "*although*" and "*nevertheless*", could be represented by the same node "*contrast-01"*. This paper investigates the potential of combining AMR with the naive text generation model to perform ACSA. Furthermore, we also design two regularizers for guiding cross attentions over the AMR graph of the decoder. We observe that words (in a sentence) and nodes (in an AMR graph) that are semantically similar should be paid the same amounts of attention. Therefore, we minimize the difference of two cross attentions over aligned words and AMR nodes. Moreover, the decoded tokens should only be attentive to related AMR nodes. To achieve that, we minimize the entropy of the cross attentions over the AMR graph in the decoder layers. We evaluate our model using the Rest14, Rest14hard and MAMS benchmark datasets. The results show that our model is better than the baselines and achieves a state-of-the-art performance. Our contributions can be summarized as follows: - We propose a model that incorporates an AMR graph encoder within a seq2seq framework for capturing the relations between the target words and opinion words and using this semantic information for ACSA. To the best of our knowledge, this is the first attempt to explore how to use AMR in ACSA. - We propose two regularizers to improve the cross attention mechanism over the AMR graph using AMR alignments and information entropy. - We demonstrate the effectiveness of our proposed method through experiments on three datasets. ## 2 Related Work Aspect Category Sentiment Analysis Numerous attempts have been made to improve ACSA. Wang et al. (2016) propose an LSTM-based model combined with an attention mechanism to attend for suitable words with given aspects. Ruder et al. (2016) capture inter-sentence relations within a review using a hierarchical bidirectional LSTM model. Xue and Li (2018) extract features using CNN and output features related to the categories of aspects by using a gated mechanism. Xing et al. (2019), Liang et al. (2019) and Zhu et al. (2019) incorporate aspect category information into a sentence decoder for generating representations specific to both the aspect and its context. Sun et al. (2019) construct auxiliary sentences from aspects for performing ACSA as a sentence-pair classification task. Jiang et al. (2019) propose a new capsule network which captures relationship between the aspects and the contexts. Li et al. (2020b) aggregate the sentiments of the words that indicate an aspect category, so as to predict the sentiment of this category. Liu et al. (2021) use a template-based method to perform ACSA as a generation task; this can leverage the knowledge of a pre-trained language model. Shan et al. (2022) use additional syntactic information to enhance the sentiment features. Liu et al. (2023) utilize commonsense knowledge graph and data augmentation to overcome the shortage of training data. To avoid error propagation, joint models which perform ACSA and ACD simultaneously have been proposed. Schmitt et al. (2018) propose two models with LSTM and CNN, which output an aspect category and its corresponding polarity at the same time. Hu et al. (2019) apply orthogonal and sparseness constraints on attention weights. Wang et al. (2019) design an AS-Capsules model to explore the correlation of aspects with sentiments through share modules. Li et al. 
(2020a) propose a joint model with a shared sentiment prediction layer. AMR With the development of AMR parsers, AMR-to-text generation models and larger parallel datasets, AMR has been applied successfully to many downstream text generation tasks. For example, it has been integrated into a machine translation model as additional information for the source side (Song et al., 2019; Nguyen et al., 2021; Xu et al., 2021). In text summarization, several researchers transform AMR representations of sentences into an AMR graph of a summary and generate a text summary from the extracted subgraph (Liu et al., 2015; Dohare and Karnick, 2017; Hardy and Vlachos, 2018; Inácio and Pardo, 2021). However, as explained in Section 1, there has been no attempt to apply AMR to ACSA. ## 3 Text Generation Model For Acsa This section presents an overview of the text generation model for ACSA proposed by Liu et al. (2021), since our proposed model is based on it. The model follows the seq2seq framework that converts a review sentence to a target sentence indicating the polarity of the aspect. It fine-tunes the pre-trained BART model for the text generation task. The model takes a review sentence X = {x1, x2*, ..., x*|X|} = x1:|X| as an input and generates a target sentence Y = {y1, y2*, ..., y*|Y |} = y1:|Y |, where |X| and |Y | are the number of tokens in the source and target sentence respectively. ## 3.1 Target Sentence Creation The target sentence is formed by filling an aspect category and a sentiment word into a predefined template. We denote the set of aspect categories by A = {a1, a2*, ..., a*|A|} and the set of sentiment words by S = {s1, s2*, ..., s*|S|}. The template is defined manually like "*The sentiment polarity of [ASPECT_CATEGORY] is [SENTIMENT_WORD]*". For each review sentence X whose corresponding aspect category is ap and sentiment polarity is st, we fill the slots in the template and get the target sentence "*The sentiment polarity of* ⟨ap⟩ is ⟨st⟩" (E.g., "*The sentiment polarity of service is negative*"). ## 3.2 Training And Inference For the training, given a pair of sentences (X, Y), the method fetches the input sentence X into the encoder to get vector presentation h enc of X as in Equation (1). In the decoder, the hidden vector at a time step j is calculated using h enc and the hidden vectors of the previous time steps, as in Equation (2). $$h^{e n c}=\mathrm{Encoder}(x_{1:|X|})\qquad\qquad(1)$$ $$h_{j}^{d e c}=\mathrm{Decoder}(h^{e n c},h_{1:j-1}^{d e c})\qquad\qquad(2)$$ The conditional probability of the output token yj is: $$P(y_{j}|y_{1:j-1},x_{1:|X|})=\mathrm{softmax}(\mathbf{W}h_{j}^{d e c}+\mathbf{b}),\ (3)$$ where W ∈ R dh*×|V|* and b ∈ R|V|, |V| represents the vocabulary size. The loss function of this model is the following Cross Entropy: $${\mathcal{L}}_{c e}=-\sum_{j=1}^{|Y|}\log P(y_{j}|y_{1:j-1},x_{1:|X|}).\quad\quad(4)$$ For inference, we calculate the probabilities of all possible target sentences with different sentiment polarity classes using the trained model and choose the one with the highest probability. For an input sentence X, aspect category ap and sentiment polarity st, the probability of a target sentence Yap,st = {y1, y2*, ..., y*m} is calculated as follows: $$f(\mathbf{Y}_{a_{p},s_{t}})=\sum_{j=1}^{m}\log P(y_{j}|y_{1:j-1},\mathbf{X})\qquad(5)$$ ## 4 Proposed Method Figure 2 shows our proposed model, which follows general text generation methods (Liu et al., 2021). 
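Before turning to the added AMR components, it may help to see the base inference rule of Equation (5) in code: every candidate target sentence is scored by the seq2seq model, and the polarity whose filled-in target scores highest is kept. The sketch below is illustrative only; the template and sentiment words follow Section 3.1, while the checkpoint name and the use of the token-averaged loss to recover the summed log-probability are our assumptions rather than the authors' implementation.

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

# Minimal sketch of the template-based inference of Section 3 (Equation (5)).
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base").eval()

TEMPLATE = "The sentiment polarity of {aspect} is {sentiment}"
SENTIMENTS = ["positive", "negative", "neutral"]

def candidate_score(review: str, target: str) -> float:
    """Summed log-probability of the target sentence given the review (Eq. (5))."""
    src = tokenizer(review, return_tensors="pt")
    tgt = tokenizer(target, return_tensors="pt")
    with torch.no_grad():
        # The model returns the mean token-level cross entropy over the labels,
        # so -loss * target_length approximates the summed log-likelihood.
        out = model(input_ids=src.input_ids,
                    attention_mask=src.attention_mask,
                    labels=tgt.input_ids)
    return -out.loss.item() * tgt.input_ids.size(1)

def predict_polarity(review: str, aspect: str) -> str:
    """Pick the sentiment whose filled-in target sentence scores highest."""
    candidates = {s: TEMPLATE.format(aspect=aspect, sentiment=s) for s in SENTIMENTS}
    return max(candidates, key=lambda s: candidate_score(review, candidates[s]))

print(predict_polarity("The staff was rude, although the food was tasty.", "service"))
```

In the full method the model is first fine-tuned on (review, target) pairs with the cross-entropy loss of Equation (4); the snippet only illustrates the scoring step of Equation (5).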
To encode semantic information from an AMR graph, we use a graph encoder module (Subsection 4.1). We incorporate that information by adding a new cross attention layer to the decoder (Subsection 4.2). We also introduce two types of regularizers to guide the attention score of the new cross attention layer (Subsection 4.3). In addition, Subsection 4.4 introduces the loss function of the model, and 4.5 presents the pre-training procedure to overcome the difficulty of training newly initialized layers and a pre-trained language model. ## 4.1 Amr Encoder The AMR encoder adopts Graph Attention Networks (GAT, Velickovic et al., 2018). For a given input sequence X = {x1, x2*, ...x*|X|}, we construct a corresponding AMR graph G = (V, E) from the pre-trained AMR parser (Bai et al., 2022), where V = {v1, v2*, ..., v*|V |} is the set of nodes and E ∈ R|V |×|V |is the adjacency matrix presenting the relations between the nodes. We treat the AMR graph as an undirected graph, which means eij = eji = 1 if the two nodes vi and vj are connected, otherwise 0. Given a graph G = (V, E) and node vi ∈ V, we can obtain h′i , the hidden state of node vi, as follows: llows: $$h_{i}^{\prime}=\sigma(\sum_{j\in\mathcal{N}_{i}}\alpha_{ij}\mathbf{W}h_{j})$$ $$\alpha_{ij}=\frac{\exp(\sigma(\mathbf{a}^{T}[\mathbf{W}h_{i}\|\mathbf{W}h_{j}]))}{\sum_{k\in\mathcal{N}_{i}}\exp(\sigma(\mathbf{a}^{T}[\mathbf{W}h_{i}\|\mathbf{W}h_{k}]))},$$ (6) (6) $\frac{1}{2}$ (7) . , (7) where a Tand W are trainable parameters, σ is the LeakyRELU function, ∥ denotes the concatenation of two vectors, Niis the set of neighbor nodes of vi in G, and hiis the initial representation of vi. Note that a node (word) consists of several subwords in general. Using the embedding of the AMR parser, hiis defined as the average of the subword vectors. ![3_image_0.png](3_image_0.png) Applying the multi-head attention mechanism from the Transformer architecture (Vaswani et al., 2017), we obtain the updated representation of node vi: $$h_{i}^{\prime}=\prod_{k=1}^{K}\sigma(\sum_{j\in{\cal N}_{i}}\alpha_{i j}^{k}{\bf W}^{k}h_{j}),\qquad\quad(8)$$ , where K is the number of attention heads, α k ij are the attention coefficients of the k-th head, Wkis the weight matrix at the k-th head, and ∥ stands for the concatenation of multiple vectors. ## 4.2 Decoder After obtaining the graph information, we feed it into each decoder layer by adding a new cross attention module for AMR referred to as "AMR Cross Attention" in Figure 2. We write h′for the representations of the AMR nodes obtained from GAT, x is the vector representation of the input sentence and y lis the output of l-th decoder layer. The output of the (l+1)-th decoder layer, y l+1, is obtained as follows: $$\begin{array}{l}{{\dot{y}^{l}=\mathrm{LN}(y^{l}+\mathrm{SelfAttn}(y^{l}))}}\\ {{\dot{y}^{l}=\mathrm{LN}(\dot{y}^{l}+\mathrm{CrossAttn}(\dot{y}^{l},x))}}\\ {{\ddot{y}^{l}=\mathrm{LN}(\ddot{y}^{l}+\mathrm{CrossAttn}(\ddot{y}^{l},h^{\prime}))}}\\ {{y^{l+1}=\mathrm{LN}(\stackrel{\cdots}{y}^{l}+\mathrm{FFN}(\stackrel{\cdots}{y}^{l})),}}\end{array}$$ l)) (9) l, x)) (10) l, h′)) (11) where LN is the layer normalization function, SelfAttn is the self-attention module, CrossAttn is the cross-attention module, and FFN is the feedforward neural network. Training a deep model like Transformer is really hard and even harder with one more crossattention module. To overcome this difficulty, we employ ReZero (Bachlechner et al., 2021) as the AMR cross attention module instead of the normal residual module. 
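To make the decoder modification concrete, here is a simplified PyTorch sketch of one decoder block with the extra AMR cross attention of Equations (9)-(12), where the added branch is gated by a ReZero-style learnable scalar. All module and variable names are ours, standard multi-head attention stands in for BART's own implementation, and the head counts simply mirror the settings reported later in Subsection 5.3.

```python
import torch
import torch.nn as nn

class DecoderBlockWithAMR(nn.Module):
    """Sketch of Eqs. (9)-(12): self-attn, text cross-attn, AMR cross-attn (ReZero), FFN."""
    def __init__(self, d_model=768, n_heads=12, n_amr_heads=6, d_ff=3072):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.text_cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.amr_cross_attn = nn.MultiheadAttention(d_model, n_amr_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)
        self.ln3 = nn.LayerNorm(d_model)
        # ReZero gate for the new AMR branch; the paper initializes it to 1 (Subsection 5.3).
        self.alpha = nn.Parameter(torch.tensor(1.0))

    def forward(self, y, x_enc, amr_nodes):
        # y: (B, T, d) decoder states, x_enc: (B, S, d) text encoder states,
        # amr_nodes: (B, N, d) AMR node states from the graph encoder.
        y = self.ln1(y + self.self_attn(y, y, y)[0])                          # Eq. (9)
        y = self.ln2(y + self.text_cross_attn(y, x_enc, x_enc)[0])            # Eq. (10)
        y = y + self.alpha * self.amr_cross_attn(y, amr_nodes, amr_nodes)[0]  # Eq. (11) with ReZero residual
        return self.ln3(y + self.ffn(y))                                      # Eq. (12)

block = DecoderBlockWithAMR()
out = block(torch.randn(2, 5, 768), torch.randn(2, 20, 768), torch.randn(2, 12, 768))
print(out.shape)  # torch.Size([2, 5, 768])
```

The formal ReZero update used for the AMR branch is given in Equation (13) below.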
This method is implemented as follows: $$\tilde{y}^{l}=\ddot{y}^{l}+\alpha{\mathcal{F}}(\ddot{y}^{l}),\qquad\qquad(13)$$ where F denotes non-trivial functions and α is a trainable parameter which helps moderate the updating of the AMR cross attention. ## 4.3 Amr Cross Attention Regularizers To incorporate the semantic information from the AMR graph more effectively, we propose two regularizers over the attention scores of the AMR cross attention module. Identical Regularizer Intuitively, a word in a sentence and its aligned node in the AMR graph should receive the same attention as they are supposed to represent similar semantic information. Two transformation matrices for the cross attention matrix over each of the source (input) sentences and the AMR graphs, alignsrc ∈ R|X|×|P| and alignamr ∈ R|V |×|P|, respectively, are defined in Equations (14) and (15), where |P| is the number of aligned pairs of words and nodes. $$align_{src}[i,k]=\begin{cases}\frac{1}{|T_{i}|}&\text{if token$x_{i}$belongs to an}\\ &\text{aligned word at position$k$}\\ 0&\text{otherwise}\end{cases}\tag{14}$$ $$align_{arm}[j,k]=\begin{cases}1&\text{if node$v_{j}$is aligned}\\ &\text{at position$k$}\\ 0&\text{otherwise}\end{cases}\tag{15}$$ Here, Ti denotes a set of subwords in the aligned word. With these matrices and two given cross attention matrices Asrc ∈ R|Y |×|X|, Ai_amr ∈ R|Y |×|V | over the review sentence and the AMR graph, respectively, the identical regularizer is formulated as follows: $$\mathcal{L}_{ir}=\sum_{i=1}^{L}\frac{1}{L}\|A_{src}^{i}\cdot align_{src}-A_{i\_amr}^{i}\cdot align_{amr}\|_{F},\tag{16}$$ where ∥∥F denotes the Frobenius norm and L is the number of the decoder layers. The matrix Asrc is obtained from an oracle fine-tuned text generation model. Also, the matrix Ai_amr is obtained by fetching the same input with the regular cross attention layer over the source sentence, which is indicated by the yellow line in Figure 2. Entropy Regularizer We expect that our model concentrates on a few important nodes. This means that the cross attention distribution of the tokens over the AMR nodes is supposed to be skewed. Therefore, we try to minimize the information entropy (Shannon, 1948) of the attention scores of the tokens over the AMR nodes. We first calculate the mean of the cross attention score of the token i at the node j over H attention heads as follows: $${\tilde{a}}_{i j}={\frac{1}{\mathcal{H}}}\sum_{h=1}^{\mathcal{H}}a_{i j h}$$ $$(17)$$ aijh (17) Then, the entropy of the l-th decoder layer is calculated over |V | nodes and |Y | output tokens: $$H^{l}=-\frac{1}{|Y|}\sum_{i}\sum_{j}\tilde{a}_{i j}\log\tilde{a}_{i j}$$ $$(18)$$ a˜ij log ˜aij (18) The entropy regularizer is defined as the mean entropy of the L decoder layers: $${\mathcal{L}}_{e r}={\frac{1}{L}}\sum_{l=1}^{L}H^{l}$$ $$(19)$$ $$(20)$$ Hl(19) 4.4 Loss Function For training the proposed model, the loss function is the sum of the normal cross entropy loss and the aforementioned two regularizers: $${\mathcal{L}}={\mathcal{L}}_{c e}+\lambda_{1}{\mathcal{L}}_{i r}+\lambda_{2}{\mathcal{L}}_{e r},$$ where λ1 is the scaling factors of the identical regularizer and λ2 is that of the entropy regularizer. ## 4.5 Pre-Training It is hard to fine-tune our model, which consists of randomly initialized modules like the AMR graph encoder and the AMR cross attention together with the pre-trained BART. 
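Before the pre-training remedy described next, the two regularizers of Subsection 4.3 can be made concrete with a short sketch that computes them directly from cross-attention tensors. The tensor shapes, the random placeholder inputs, and the helper names are assumptions for illustration; only the formulas follow Equations (14)-(20).

```python
import torch

def identical_regularizer(attn_src, attn_amr, align_src, align_amr):
    """Eq. (16): Frobenius distance between attention mapped onto aligned word/node pairs.
    attn_src: (L, Y, X) head-averaged text cross-attention per decoder layer,
    attn_amr: (L, Y, V) head-averaged AMR cross-attention per decoder layer,
    align_src: (X, P) and align_amr: (V, P) alignment matrices of Eqs. (14)-(15)."""
    diff = attn_src @ align_src - attn_amr @ align_amr   # (L, Y, P)
    return diff.flatten(1).norm(dim=1).mean()            # mean per-layer Frobenius norm

def entropy_regularizer(attn_amr_heads):
    """Eqs. (17)-(19): mean entropy of head-averaged AMR attention over tokens and layers.
    attn_amr_heads: (L, H, Y, V) AMR cross-attention with explicit heads."""
    a = attn_amr_heads.mean(dim=1)                       # Eq. (17): average over heads
    ent = -(a * (a + 1e-12).log()).sum(dim=-1)           # per-token entropy over AMR nodes
    return ent.mean()                                    # Eqs. (18)-(19): average over tokens and layers

# Toy shapes: 6 layers, 6 heads, 8 target tokens, 12 source tokens, 10 nodes, 7 aligned pairs.
L, H, Y, X, V, P = 6, 6, 8, 12, 10, 7
attn_amr_h = torch.softmax(torch.randn(L, H, Y, V), dim=-1)
attn_src = torch.softmax(torch.randn(L, Y, X), dim=-1)
align_src, align_amr = torch.rand(X, P), torch.rand(V, P)  # placeholders for the 0/1 alignments

l_ir = identical_regularizer(attn_src, attn_amr_h.mean(dim=1), align_src, align_amr)
l_er = entropy_regularizer(attn_amr_h)
loss = 1.0 + 0.075 * l_ir + 0.1 * l_er  # Eq. (20) with a placeholder cross-entropy term and the Rest14 lambdas
print(l_ir.item(), l_er.item(), loss.item())
```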
Following (Bataa and Wu, 2019) and (Gheini et al., 2021), which show the positive effect of pre-training a language model with in-domain data and fine-tuning a cross attention layer, after initializing the whole model, we train it with text denoising tasks using review sentences. Following BART, we add noise into the input sentences using the following three methods: - **Token Masking**: random tokens are sampled and replaced by [*MASK*] token. - **Word Deletion**: instead of deleting subwords like BART, the whole of text spans is deleted in this method. - **Text Infilling**: random text spans are replaced by [*MASK*] using a Poisson distribution. Algorithm 1 shows the pseudocode of the text corruption algorithm that adds noise by the above three methods. | Algorithm 1 Text corruption algorithm Input: review sentence X = {x1, x2, ...xn} Output: corrupted review sentence X' = {x ′ 1, x′ 2, ...x′ m} ptoken ← 0.15 - probability of replacing one token by [MASK] pword ← 0.3 - probability of replacing text spans by [MASK] λP ossion ← 3 - value for λ parameter in Possion distribution 1: p ← gen_random[0, 1] 2: if p < 1 3 then 3: X' ← mask_tokens(X, ptoken) 4: else if p < 2 3 then 5: X' ← mask_text_spans(X, pword, λP ossion) - | Dataset | #Pos | #Neg | #Neu | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|--------|--------|--------| | Train | 1855 | 733 | 430 | | | Rest14 | Dev | 324 | 106 | 70 | | Test | 657 | 222 | 94 | | | Rest14-hard | Test | 21 | 20 | 12 | | Train | 1929 | 2084 | 3077 | | | MAMS-ACSA | Dev | 241 | 259 | 388 | | Test | 245 | 363 | 393 | | | Table 1: Statistics of datasets. | | | | | The entropy regularizer is also taken into account in this pre-training step. That is, the loss function of the pre-training is defined as: $${\mathcal{L}}={\mathcal{L}}_{c e}+\lambda_{3}{\mathcal{L}}_{e r},$$ L = Lce + λ3Ler, (21) where λ3 is the scaling factor for the entropy regularizer. ## 5 Experiments 5.1 Dataset The following three datasets are used in our experiments. - **Rest14**: This dataset consists of reviews in the restaurant domain, which is included in the Semeval-2014 task (Pontiki et al., 2014). Samples labeled with "conflict" are removed, so the remaining samples have the labels "positive", "negative" and "neutral". In addition, we follow the splitting of the development set suggested by Tay et al. (2018) for the sake of a fair comparison. - **Rest14-hard**: Xue and Li (2018) construct this dataset for better evaluating a model on sentences with multiple aspects. The training set and development set are the same as those of Rest14. - **MAMS-ACSA**: For the same purpose as the Rest14-hard, Jiang et al. (2019) propose a larger dataset for ACSA in which each sentence contains at least two different aspects. Their details are shown in Table 1. ## 5.2 Baselines We compare our method with multiple baselines: - **GCAE** (Xue and Li, 2018): employs CNN model with gating mechanism to selectively output the sentiment polarity related to a given aspect. 
- **AS-Capsules** (Wang et al., 2019): exploits the correlation between aspects and corresponding sentiments through a capsule-based model. - **CapsNet** (Jiang et al., 2019): is the capsule network based model to learn the relations between the aspects and the contexts. - **CapsNet-BERT** (Jiang et al., 2019): is the CapsNet model based on the pre-trained BERT. - **BERT-pair-QA-B** (Sun et al., 2019): performs ACSA as the sentence pair classification task by fine-tuning of the pre-trained BERT. - **AC-MIMLLN** (Li et al., 2020b): predicts the polarity of a given aspect by combining the sentiments of the words indicating the aspect. - **AC-MIMLLN-BERT** (Li et al., 2020b): is the AC-MIMLLN model based on the pretrained BERT. - **BART generation** (Liu et al., 2021): performs ACSA by a text generation model with the pre-trained BART. It is almost equivalent to our model without AMR. - **BART generation with pre-training**: is the BART generation model combined with our pre-training method except for applying entropy regularization. ## 5.3 Implementation Details The template used for constructing the target sentences in our experiments is "*Quality of [ASPECT_CATEGORY] is [SENTIMENT_WORD]*". The *[ASPECT_CATEGORY]* is filled by the aspect word, while the *[SENTIMENT_WORD]* is filled by one of {*excellent, awful, f ine*} which corresponds to {*positive, negative, neutral*} respectively. For AMR parsing, we use the pre-trained model of AMRBART1(Bai et al., 2022). In addition, LEAMR2(Blodgett and Schneider, 2021) is adopted to align the words in the input sentence and the nodes in the AMR graph. In the pre-training step, we initialize the parameters of BART using the checkpoint of BART base3. Unlike the parameters of BART, the parameters of the AMR graph encoder and the AMR cross attention modules are newly initialized with the uniform distribution. After pre-training, the last checkpoint is used for fine-tuning the ACSA model. The Adam optimizer (Kingma and Ba, 2015) is used for optimizing the model. The original parameters of BART's encoder and decoder are trained with a learning rate 2e-5 while the learning rate is set to 3e-5 for the parameters in the AMR graph encoder and the AMR cross attention modules. We set the number of the attention heads of AMR encoder to 6, the number of AMR cross attention heads to 6, the batch size is 16 and the dropout value 0.1. The initial value for the ReZero weight α is 1. The regularization coefficients λ1 and λ2 are set to (0.075, 0.1), (0.075, 0.1) and (0.025, 0.0075) for the three datasets, while λ3 is always set to 5e-3. All hyperparameters are tuned based on the accuracy on the development set. ## 5.4 Experimental Results The results of the experiments are presented in Table 2. The models were trained and evaluated five times with different initializations of the parameters. The table shows the average and standard deviation of the accuracy of five trials using the format "mean (±std)". First, our model outperforms all baselines on the three datasets, which indicates the necessity of incorporating the semantic information into the text generation model for ACSA. Second, compared with the models that learn relations between the aspect and the context like CapsNet, AC-MIMLLN, BERT-pair-QA-B and BART generation, the dominance of our model proves that exploiting the AMR graph to learn relations between words is a better way to capture contextual information. 
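As a brief aside on the training setup of Subsection 5.3, the two learning rates (2e-5 for the pre-trained BART parameters and 3e-5 for the new AMR modules) can be realized with optimizer parameter groups; the attribute names in this sketch are assumptions, not the authors' code.

```python
import torch

def build_optimizer(model):
    # Illustrative only: `model.bart`, `model.amr_encoder`, and `model.amr_cross_attn`
    # are assumed attribute names standing in for the pre-trained and newly added modules.
    new_modules = list(model.amr_encoder.parameters()) + list(model.amr_cross_attn.parameters())
    return torch.optim.Adam(
        [
            {"params": model.bart.parameters(), "lr": 2e-5},   # pre-trained encoder/decoder
            {"params": new_modules, "lr": 3e-5},               # AMR graph encoder and AMR cross attention
        ]
    )
```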
The fact that our model also outperforms BART generation with the pre-training further supports the effectiveness of the AMR. Third, the competitive results over the Rest14-hard and MAMS datasets show the effectiveness of the identical and entropy regularizers in enabling the model to concentrate on the correct aspect-related nodes, which is essential for identification of the polarity over multiple aspects. ## 5.5 Ablation Study To further investigate the effects of the different modules in our model, we conducted ablation studies. The results are presented in Table 3. First, it is found that the removal of the identical regularizer downgrades the performance, which indicates the importance of precisely capturing the semantic information. Second, we also notice that the models without the entropy regularizer perform poorly with reduction by 0.8, 1.1 and 0.4 percentage points in the accuracy on Rest14, Rest14-hard and MAMS, respectively. This shows that the entropy regularizer is essential to prevent models from attending to unnecessary AMR nodes. In addition, removing both regularizers degrades the performance more than removing each of the regularizer, which confirms the essential roles of these regulairzers in performing ACSA. Third, removing the pre-training procedure hurts the performance badly, which leads to decreases by 1.4, 7.5 and 1.5 percentage points on the three datasets respectively. This indicates the big gap between the newly initialized modules and the pre-trained model and the necessity of the pre-training step for overcoming this problem. In summary, the ablation studies show that each component contributes to the entire model. The contribution of the pre-training step is the greatest, while those of the identical and entropy regularizers are comparable to each other. ## 6 Analysis 6.1 Case Study To further examine how the semantic information of AMR and two regularizers work well in ACSA, a few examples are shown as a case study. Table 4 compares our model with the state-of-the-art method "BART generation". The symbols P, N Model Rest14 Rest14-hard MAMS GCAE (Xue and Li, 2018) 81.3(±0.883) 54.7(±4.92) 72.1† AS-Capsules (Wang et al., 2019) 82.2(±0.414) 60.8(±2.77) 75.1(±0.473) CapsNet (Jiang et al., 2019) 81.2(±0.631) 54.0(±0.924) 74.0† AC-MIMLLN (Li et al., 2020b) 81.6(±0.715) 65.3(±2.26) 76.4(±0.704) BERT-pair-QA-B (Sun et al., 2019) 87.5(±1.18) 69.4(±4.37) 79.1(±0.973) CapsNet-BERT (Jiang et al., 2019) 86.6(±0.943) 51.3(±1.41) 79.5† AC-MIMLLN-BERT (Li et al., 2020b) 89.3(±0.720) 74.7(±3.29) 81.2(±0.606) BART generation (Liu et al., 2021) 90.5(±0.315) 77.4(±2.16) 83.1(±0.478) BART generation with pre-training 90.6(±0.517) 75.5(±3.77) 83.6(±0.847) Our model 91.2(±0.258) 78.1(±2.53) **84.6(±0.453)** Table 2: Accuracy (%) of ACSA models. † refers to citation from Jiang et al. (2019). Model Rest14 Rest14-hard MAMS Our model 91.2(±0.258) 78.1(±2.53) **84.6(±0.453)** w/o identical regularizer 91.0(±0.424) 77.4(±1.89) 84.0(±0.320) w/o entropy regularizer 90.4(±0.162) 77.0(±1.68) 84.2(±1.10) w/o entropy and identical regularizer 90.3(±0.426) 74.3(±1.69) 83.8(±0.638) w/o pre-training 89.8(±0.217) 70.6 (±1.03) 83.1(±0.618) Table 3: Ablation study. and O represent the positive, negative and neutral class respectively. The first example, "*I never had* an orange donut before so I gave it a shot", has no explicit sentiment expression. With the help of semantic information and two regularizers, our model can correctly predict the true label while BART generation cannot. 
The second and third examples contain multiple aspects, which can affect each other's predictions. In the second example, the BART generation model may capture the positive sentiment toward the aspect word "*atmosphere*" for anticipating the sentiment of the different aspect "*service*', which leads to outputting the wrong label. Another incorrect prediction by this baseline is shown in the third example, where the polarities of "*food*" and "*staff* " are mistakenly swapped. In contrast, our model pays attention to only the aspect-related AMR nodes, resulting in the correct predictions in both examples. However, our model also faces a difficulty in some cases. In the last example, it wrongly predicts the sentiment polarity for "*miscellaneous*" because it is really hard to capture aspect-related AMR nodes for a coarse-grained aspect class like "*miscellaneous*". ## 6.2 Attention Visualization To study the effectiveness of the two regularizers in guiding the AMR cross attention collocation, we illustrate the cross attention matrix produced by our full model and the model without two regularizers in Figure 3. The review sentence is "The food was good overall, but unremarkable given the price.". The polarity label of the aspect category "*food*" is positive and the polarity of the aspect category "*price*" is negative. The model without two regularizers has dense attention matrices that might be noise for prediction of the polarity. In contrast, the attention matrices of our full model are sparse. For example, as for the food category, the word "*food*" and "*excellent*" in the target sentence pay much attention or more attention than the model without the regularizers to the nodes "*food*" and "*good-02*" in the AMR graph. Similarly, as for the price category, "*price*" in the target sentence pays a great deal of attention to the node "*price-01*" in the AMR graph, while "*awful*" pays less attention to "*remarkable-02*" than the model without the regularizers. Those cases indicate that our attention mechanism works well even when a review sentence contains multiple aspects. ## 7 Conclusions In this paper we proposed a model which integrated the semantic information from the Abstract Meaning Representation (AMR) and the text generation method for the task of Aspect Category Sentiment Analysis (ACSA). Moreover, to more precisely cap- | Sentence | Aspect Category | BART Generation | Our Model | Label | |------------------------------------------------------------------------------------------|---------------------------|-------------------|-------------|-----------| | I never had an orange donut before so I | {food} | (P) | (O) | (O) | | gave it a shot. The atmosphere was wonderful, however | {ambiance, service, food} | (P, P, N) | (P, N, N) | (P, N, N) | | the service and food were not. There are several specials that change | {food, staff} | (O, P) | (P, O) | (P, O) | | daily, which the servers recite from memory. The place was busy and had a bohemian feel. | {place, miscellaneous} | (P, P) | (N, P) | (N, O) | ![8_image_0.png](8_image_0.png) ture the semantic correlations between the target words and the AMR nodes, we proposed two regularizers: the identical and entropy regularizers, over the AMR cross attention modules. The exper- Table 4: Case studies of our model compared with state-of-the-art method. imental results on three datasets showed that our model outperformed all baselines. ## 8 Limitations Currently, our model only exploits the direct relations between nodes in the AMR graph. 
In other words, only one-hop neighborhoods can be considered. However, there are a few cases where an opinion word and a related aspect word can be in a k-hop neighborhood. In the future, we will design a model that can capture long distance relations in the AMR graph. Another limitation is that the errors of the pre-trained AMR parsers and AMR alignment models are propagated to the model as a whole. What is required is to improve the performance of those modules. ## References Thomas Bachlechner, Bodhisattwa Prasad Majumder, Henry Mao, Gary Cottrell, and Julian McAuley. 2021. Rezero is all you need: fast convergence at large depth. In *Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence*, volume 161 of *Proceedings of Machine Learning Research*, pages 1352–1361. PMLR. Xuefeng Bai, Yulong Chen, and Yue Zhang. 2022. Graph pre-training for AMR parsing and generation. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 6001–6015, Dublin, Ireland. Association for Computational Linguistics. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In *Proceedings of the 7th Linguistic* Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics. Enkhbold Bataa and Joshua Wu. 2019. An investigation of transfer learning-based sentiment analysis in Japanese. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 4652–4657, Florence, Italy. Association for Computational Linguistics. Austin Blodgett and Nathan Schneider. 2021. Probabilistic, structure-aware algorithms for improved variety, accuracy, and coverage of AMR alignments. In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3310–3321, Online. Association for Computational Linguistics. Jiajun Cheng, Shenglin Zhao, Jiani Zhang, Irwin King, Xin Zhang, and Hui Wang. 2017. Aspect-level sentiment classification with heat (hierarchical attention) network. In *Proceedings of the 2017 ACM on Conference on Information and Knowledge Management*, CIKM '17, page 97–106. Association for Computing Machinery. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Shibhansh Dohare and Harish Karnick. 2017. Text summarization using abstract meaning representation. *CoRR*, abs/1706.01678. Mozhdeh Gheini, Xiang Ren, and Jonathan May. 2021. Cross-attention is all you need: Adapting pretrained Transformers for machine translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1754–1765, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Hardy Hardy and Andreas Vlachos. 2018. Guided neural language generation for abstractive summarization using Abstract Meaning Representation. 
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 768–773, Brussels, Belgium. Association for Computational Linguistics. Mengting Hu, Shiwan Zhao, Li Zhang, Keke Cai, Zhong Su, Renhong Cheng, and Xiaowei Shen. 2019. CAN: Constrained attention networks for multi-aspect sentiment analysis. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP), pages 4601–4610, Hong Kong, China. Association for Computational Linguistics. Marcio Inácio and Thiago Pardo. 2021. Semantic-based opinion summarization. In *Proceedings of the International Conference on Recent Advances in Natural* Language Processing (RANLP 2021), pages 619–628, Held Online. INCOMA Ltd. Qingnan Jiang, Lei Chen, Ruifeng Xu, Xiang Ao, and Min Yang. 2019. A challenge dataset and effective models for aspect-based sentiment analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6280– 6285, Hong Kong, China. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *3rd International Conference on Learning Representations,* ICLR 2015, San Diego, CA, USA, May 7–9, 2015, Conference Track Proceedings. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Yuncong Li, Zhe Yang, Cunxiang Yin, Xu Pan, Lunan Cui, Qiang Huang, and Ting Wei. 2020a. A joint model for aspect-category sentiment analysis with shared sentiment prediction layer. In *Proceedings of* the 19th Chinese National Conference on Computational Linguistics, pages 1112–1121, Haikou, China. Chinese Information Processing Society of China. Yuncong Li, Cunxiang Yin, Sheng-hua Zhong, and Xu Pan. 2020b. Multi-instance multi-label learning networks for aspect-category sentiment analysis. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3550–3560, Online. Association for Computational Linguistics. Yunlong Liang, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, and Jie Zhou. 2019. A novel aspect-guided deep transition model for aspect based sentiment analysis. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language* Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 5569–5580, Hong Kong, China. Association for Computational Linguistics. Bin Liu, Tao Lin, and Ming Li. 2023. Enhancing aspectcategory sentiment analysis via syntactic data augmentation and knowledge enhancement. *KnowledgeBased Systems*, 264:110339. Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, and Noah A. Smith. 2015. Toward abstractive summarization using semantic representations. In *Proceedings of the 2015 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1077–1086, Denver, Colorado. Association for Computational Linguistics. 
Jian Liu, Zhiyang Teng, Leyang Cui, Hanmeng Liu, and Yue Zhang. 2021. Solving aspect category sentiment analysis as a text generation task. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4406–4416, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Long H. B. Nguyen, Viet Pham, and Dien Dinh. 2021. Improving neural machine translation with amr semantic graphs. *Mathematical Problems in Engineering*. Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In *Proceedings of the 8th* International Workshop on Semantic Evaluation (SemEval 2014), pages 27–35, Dublin, Ireland. Association for Computational Linguistics. Sebastian Ruder, Parsa Ghaffari, and John G. Breslin. 2016. A hierarchical model of reviews for aspectbased sentiment analysis. In *Proceedings of the 2016* Conference on Empirical Methods in Natural Language Processing, pages 999–1005, Austin, Texas. Association for Computational Linguistics. Martin Schmitt, Simon Steinheber, Konrad Schreiber, and Benjamin Roth. 2018. Joint aspect and polarity classification for aspect-based sentiment analysis with end-to-end neural networks. In *Proceedings of the 2018 Conference on Empirical Methods* in Natural Language Processing, pages 1109–1114, Brussels, Belgium. Association for Computational Linguistics. Yongxue Shan, Chao Che, Xiaopeng Wei, Xiaodong Wang, Yongjun Zhu, and Bo Jin. 2022. Bi-graph attention network for aspect category sentiment classification. *Knowledge-Based Systems*, 258:109972. C. E. Shannon. 1948. A mathematical theory of communication. *Bell System Technical Journal*, 27(4):623– 656. Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, and Jinsong Su. 2019. Semantic neural machine translation using AMR. *Transactions of the Association for Computational Linguistics*, 7:19–31. Chi Sun, Luyao Huang, and Xipeng Qiu. 2019. Utilizing BERT for aspect-based sentiment analysis via constructing auxiliary sentence. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 380–385, Minneapolis, Minnesota. Association for Computational Linguistics. Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2018. Learning to attend via word-aspect associative fusion for aspect-based sentiment analysis. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In *6th International* Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for aspectlevel sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 606–615, Austin, Texas. Association for Computational Linguistics. Yequan Wang, Aixin Sun, Minlie Huang, and Xiaoyan Zhu. 2019. Aspect-level sentiment analysis using ascapsules. 
In *The World Wide Web Conference*, WWW '19, page 2033–2044. Association for Computing Machinery. Bowen Xing, Lejian Liao, Dandan Song, Jingang Wang, Fuzheng Zhang, Zhongyuan Wang, and Heyan Huang. 2019. Earlier attention? aspect-aware lstm for aspect-based sentiment analysis. In *Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19*, pages 5313–5319. International Joint Conferences on Artificial Intelligence Organization. Dongqin Xu, Junhui Li, Muhua Zhu, Min Zhang, and Guodong Zhou. 2021. XLPT-AMR: Cross-lingual pre-training via multi-task learning for zero-shot AMR parsing and text generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 896–907, Online. Association for Computational Linguistics. Wei Xue and Tao Li. 2018. Aspect based sentiment analysis with gated convolutional networks. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 2514–2523, Melbourne, Australia. Association for Computational Linguistics. Peisong Zhu, Zhuang Chen, Haojie Zheng, and Tieyun Qian. 2019. Aspect aware learning for aspect category sentiment analysis. *ACM Trans. Knowl. Discov.* Data, 13(6). ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 8 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 ✓ B1. Did you cite the creators of artifacts you used? Section 4 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 5 and 6 (The performance of our proposed model is reported.) ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. The statistics of the datasets we used is written in Subsection 5.1. ## C ✓ **Did You Run Computational Experiments?** Section 5 ✓ C1. 
Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Subsection 5.3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Subsection 5.3 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 and 6 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Subsection 5.3 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
lin-ng-2023-mind
Mind the Biases: Quantifying Cognitive Biases in Language Model Prompting
https://aclanthology.org/2023.findings-acl.324
We advocate the importance of exposing the uncertainty in the results of language model prompting, which can display bias modes resembling cognitive biases, and propose to help users grasp the level of uncertainty via simple quantifying metrics. Cognitive biases in the human decision making process can lead to flawed responses when we face uncertainty. Not surprisingly, we have seen biases in language models resembling cognitive biases as a result of training on biased textual data, raising dangers in downstream tasks that are centered around people's lives if users trust the results too much. In this work, we reveal two bias modes leveraging cognitive biases when we prompt BERT, accompanied by two bias metrics. On a drug-drug interaction extraction task, our bias measurements reveal an error pattern similar to the availability bias when the labels for training prompts are imbalanced, and show that a toning-down transformation of the drug-drug description in a prompt can elicit a bias similar to the framing effect, warning users not to place too much trust in answers obtained by prompting language models.
## Mind The Biases: Quantifying Cognitive Biases In Language Model Prompting Ruixi Lin and **Hwee Tou Ng** Department of Computer Science National University of Singapore {ruixi,nght}@comp.nus.edu.sg ## Abstract We advocate the importance of exposing uncertainty on results of language model prompting which display bias modes resembling cognitive biases, and propose to help users grasp the level of uncertainty via simple quantifying metrics. Cognitive biases in the human decision making process can lead to flawed responses when we face uncertainty. Not surprisingly, we have seen biases in language models resembling cognitive biases as a result of training on biased text, raising dangers in downstream tasks that are centered around people's lives if users trust their results too much. In this work, we reveal two bias modes leveraging cognitive biases when we prompt BERT, accompanied by two bias metrics. On a drug-drug interaction extraction task, our bias measurements reveal an error pattern similar to the availability bias when the labels for training prompts are imbalanced, and show that a toning-down transformation of the drug-drug description in a prompt can elicit a bias similar to the framing effect, warning users to distrust when prompting language models for answers.1 ## 1 Introduction Cognitive biases describe the flawed human response patterns for decision making under uncertainty (Tversky and Kahneman, 1974, 1981; Jacowitz and Kahneman, 1995; Kahneman and Frederick, 2002; Meyer, 2004). For example, when people are biased by the availability heuristic, they make probability judgments based on the ease with which information comes to mind (Tversky and Kahneman, 1973). Knowing cognitive biases can help predict what types of error will be made, which is also helpful for interpreting behaviors of generative systems such as language models, as they may err in a similar pattern as humans do, especially when the data used to build the systems carry 1The source code of this paper is available at https: //github.com/nusnlp/CBPrompt. man-made biases (Schwartz et al., 2022; Jones and Steinhardt, 2022). We are inspired by leveraging cognitive biases - systematic error patterns which deviate from rational decisions - to study error patterns of language models. We highlight the importance of exposing uncertainty to users of language models (Pinhanez, 2021), and leverage cognitive biases to quantify the level of imprecision in results when performing language model prompting via simple, perceptual metrics. Some would argue that the biases in machines are a result of unmatched data distributions in training and test sets. However, merely matching training and test distributions does not solve the problem of biased predictions for long-tailed input distributions. For example, on the drug-drug interaction (DDI) dataset (Segura-Bedmar et al., 2013), the training and test distributions are identically skewed, and there are 100 times more *Negative* (non-interacting) drug pairs than the interacting drug pairs in both sets. Though performances on the development set and test set are not too bad for positive class inputs with a prompt-based BERT model (Devlin et al., 2019), the model still most frequently mistakes positive pairs for negative pairs, as shown by the confusion matrix in the left part of Figure 1. This label bias towards *Negative* mimics the availability bias. 
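This error pattern can be surfaced directly by tabulating a confusion matrix over the model's predictions. The sketch below is purely illustrative, with placeholder gold/predicted labels; the class names follow the DDIExtraction 2013 task convention and are assumptions for this example.

```python
from collections import Counter

# Illustrative sketch: tabulate where each gold class ends up, to surface the
# false-negative pattern discussed above. The gold/pred lists are placeholders.
LABELS = ["Negative", "Advise", "Effect", "Mechanism", "Int"]
gold = ["Effect", "Advise", "Negative", "Mechanism", "Effect", "Negative"]
pred = ["Negative", "Advise", "Negative", "Negative", "Effect", "Negative"]

confusion = Counter(zip(gold, pred))
for g in LABELS:
    row = {p: confusion[(g, p)] for p in LABELS if confusion[(g, p)]}
    if row:
        print(f"{g:>9} -> {row}")
```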
The availability bias is one of the most common cognitive biases in real life, especially in doctors' diagnoses which increase with years of training (Mamede et al., 2010; Saposnik et al., 2016). Moreover, an equal number of samples in each class during training does not guarantee that a "majority" class does not exist, especially when the input distribution of the negative class has a higher variance (i.e., highly diversified samples in the negative class) and the samples within the positive class share more common characteristics. Considering the input variance, even though sample sizes are the same, the negative class still can be viewed as the majority class. More on this can ![1_image_0.png](1_image_0.png) ## Be Found In Appendix A. In addition to the availability bias, the framing effect is another common cognitive bias. The framing effect is prevalent in medical diagnoses (Loke and Tan, 1992), where doctors intentionally frame diagnoses positively ("90% chance to survive") or negatively ("10% chance to die") to make patients perceive the results differently. It was recently found that a failure mode of a code generation language model Codex (Chen et al., 2021) resembles the framing effect - how an input prompt is framed changes the model output predictably (Jones and Steinhardt, 2022). In our study, prompting BERT with paraphrases generated by toning-down the original inputs improves prediction results, suggesting a bias brought by the tone of the input sentences. It is important to see that the biases found above are not the expected behavior of BERT as a promptbased classification model, and our goal of this paper is to analyze these failure modes from the lens of cognitive biases and quantify them via simple metrics. We are devoted to warning practitioners about the risks of biased language model predictions, especially on biomedical tasks. On a case study of the DDI extraction task, we measure output label distribution with content-free prompts and how model output changes when applying a toning-down transformation to prompt texts. Our key findings are: - We have identified an error pattern similar to the availability bias when the labels for training prompts are imbalanced, and our measurements quantitatively show that the bias is highest towards the majority label. - We have motivated a toning-down transformation of the drug-drug description in a prompt and found that this framing can elicit a bias similar to the framing effect. ## 2 Related Work 2.1 Cognitive Biases In Language Models Recent work on studying behaviors of pretrained language models (PLMs) has revealed that some failure modes bear resemblance to cognitive biases. Wallace et al. (2019) study triggering prompts that fool the GPT-2 language model to generate contents that mimic hallucinations arising from cognitive biases. Zhao et al. (2021) find the majority label bias to be one of the pitfalls for GPT-3 (Brown et al., 2020), resembling the availability bias. Liu et al. (2022a) and Lu et al. (2022) show that specific order of training examples can lead to different model performance for GPT-3, analogous to the anchoring bias where estimates may be influenced by what information is provided first or last. Jones and Steinhardt (2022) capture failures in GPT-3 and Codex and find that error patterns of large language models (LLMs) resemble cognitive biases in humans. Agrawal et al. 
(2022) also find a bias in GPT-3 which is similar to the framing effect, where using separate prompts rather than a chained prompt leads to wrong answers for medication extraction. In a nutshell, most of these works focus on studying issues of LLMs and have discerned their error patterns' resemblance to human cognitive biases. We follow this line of research, and argue that relatively small PLMs, such as BERT, also display biases resembling human cognitive biases, and we propose metrics to quantify two of these biases. ## 2.2 Prompt-Based Language Models As a booming research area, prompt-based methods show their success through few-shot learning performance for language models (Zhao et al., 2021; Jones and Steinhardt, 2022; Lu et al., 2022). However, prompts may not be understood by models the way humans do (Khashabi et al., 2022) and they affect biases in models (Webson and Pavlick, 2022; Utama et al., 2021; Prabhumoye et al., 2021). From a taxonomy viewpoint, prompt-based methods include: *Prompt design*, where the job is designing human-readable prompts to demonstrate to a frozen language model for downstream tasks (Brown et al., 2020); *Prompt tuning*, where tunable soft prompts are used for a frozen language model (Lester et al., 2021; Qin and Eisner, 2021; Sanh et al., 2022; Liu et al., 2022b); and Prompt-based fine-tuning, which utilizes fixed human-readable prompts to fine-tune a model (Scao and Rush, 2021; Gao et al., 2021; Schick and Schütze, 2021a,b; Tam et al., 2021), such as pattern-exploiting training (Schick and Schütze, 2021b; Tam et al., 2021). While the first two types are popular for large language models such as GPTs, prompt-based finetuning is more common when prompting BERT and other relatively small language models. In this work, we focus on prompt-based fine-tuning methods for BERT. Studies on interpretability focus on providing measures for the incompleteness that produces unquantified biases (Doshi-Velez and Kim, 2017). Here we aim to fill in the gap for quantifying the biases of prompt-based language models. In addition, adversarial input is a popular technique to interpret how a model is fooled, by tweaking image pixels (Akhtar and Mian, 2018; Li et al., 2019) or textual triggers (Wallace et al., 2019). However, in this work, we seek to study the effect of altered texts by leveraging cognitive bias patterns. ## 3 Proposed Metrics We propose two metrics for quantifying the bias modes by the availability bias and the framing effect respectively in prompt-based BERT, helping users perceive how much bias comes with prompt- ## Ing Results. 3.1 The Availability Bias Metric The error by the availability bias can be viewed as a shortcut of how a model "thinks" an answer is easier to recall and occurs more readily than it actually occurs at test time, as long as it has seen many prompted instances of the same answer during training. On the DDI dataset, the majority label of prompts during training is *Negative* and the inference results show many false negatives. This resembles a situation when a human sees many negative examples, then the human inferences are more likely to be negative. The Availability Bias Score. 
To quantify the availability bias for the DDI task, we are inspired by the work of (Zhao et al., 2021), where a language model's bias towards certain answers is estimated by feeding into the model a dummy test input that is content-free, i.e., with a dummy prompt, and measuring the deviation of the content-free prediction score from the uniform prediction score. Following this idea, we propose an availability bias metric via querying a model with multiple dummy test prompt inputs and computing the deviation of the prediction scores from the uniform prediction score as the bias measurement. The intuition is that, when a dummy-content test prompt is given, the best that an unbiased model can do is to make a uniform random guess. If availability biases are present in the results, the number of predictions in each class will not be uniform. Henceforth, we can measure the deviation of the imbalanced predictions from the uniform prediction score to quantify the availability bias. We input dummy prompts to a language model, and measure the frequency of prediction of each class, and then compute the difference between class frequency and the uniform prediction score. For example, the DDI task features 5 classes, including 4 DDI types and *Negative*. Hence, the difference from 1/5 = 20% is the availability bias score of each class. In particular, we evaluate against a prompt-based fine-tuned BERT model and first obtain predictions conditioned on dummy prompt inputs. Let N denote the number of dummy test prompts, xdummy denote a dummy prompt input. $${\hat{y}}=\arg\operatorname*{max}_{y}p(y|x_{\mathrm{dummy}})$$ $\downarrow$ . y p(y|xdummy) (1) where p(y|xdummy) is the softmax score obtained from the classification layer. Then we measure the frequency of each class prediction, i.e., the number of dummy predictions in each class (denoted by count(Ci)) divided by total number of dummy test prompts N. $$c o u n t(C_{i})=\sum_{j=1}^{N}\mathbb{1}\{{\hat{y}}_{j}=C_{i}\}\qquad\quad(2)$$ where 1{·} evaluates to 1 when the condition in the curly braces is met and 0 otherwise. Let M denote the number of classes. We propose the absolute deviation of the frequency from 1/M as the availability bias score for each class Ci, denoted by Availability(Ci), and computed as follows: $$A v a i l a b i l i t y(C i)=\left|{\frac{c o u n t(C_{i})}{N}}-{\frac{1}{M}}\right|\quad(3)$$ For fairness in the dummy prompt design, we extract from each class an equal number of test instances and replace any UMLS keyword in the text with a dummy word, N/A, to form dummy prompts. The choice of dummy word follows the content-free prompt design in (Zhao et al., 2021). The reason to construct dummy prompts by extracting templates from each class is to mitigate the effect that a class-specific content-free input may correlate with surface class patterns. Moreover, our metric is robust to the number of dummy test prompts used, and we discuss it in Appendix B. ## 3.2 The Framing Effect Metric The framing effect describes a biased perception about the same thing when it is framed differently, e.g., toning down an expression. We observe similar biases in BERT prompting when we transform the same input text describing a drug-drug interaction into a toned-down expression. When we use paraphrases as input for prompt-based fine-tuning and testing, the test predictions change and test F1 score increases. 
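Before describing how the framing effect is measured, the availability bias score of Section 3.1 can be made concrete with a short sketch. The code below is a minimal illustration of Eqs. (1)-(3) only; the `predict_label` argument is a hypothetical callable standing in for the prompt-based model's argmax prediction, and none of this is the authors' released implementation.

```python
from collections import Counter

def availability_bias_scores(dummy_prompts, predict_label, classes):
    """Per-class availability bias score of Eqs. (1)-(3).

    dummy_prompts : list of content-free prompts (keywords replaced by "N/A").
    predict_label : hypothetical callable mapping one prompt string to the
                    argmax class label of the prompt-based model (Eq. 1).
    classes       : the M class labels, e.g. the 5 DDI labels.
    """
    n, m = len(dummy_prompts), len(classes)
    # Eq. (2): how often each class is predicted for the dummy inputs.
    counts = Counter(predict_label(p) for p in dummy_prompts)
    # Eq. (3): absolute deviation of each class frequency from 1/M.
    return {c: abs(counts.get(c, 0) / n - 1.0 / m) for c in classes}
```

An unbiased model would score close to 0 for every class, while a model that always answers the majority label would reach the upper limit of (M-1)/M, i.e., 0.8 in the five-class DDI setting.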
## Measuring Framing Effect Via Paraphrasing To measure the framing effect, we paraphrase the original drug-drug interaction descriptions to sound softer. We leverage the GPT-3 (Brown et al., 2020) model to build a paraphrase dataset, which contains 500 training instances, 50 development instances, and 300 test instances. To gauge the quality of paraphrase generation, we first compute BERTScore (Zhang et al., 2020) of the 850 generated sentences and their source reference sentences. BERTScore is a cosine similarity metric based on contextual embeddings to evaluate how much candidate and reference sentences match. The average BERTScore of all pairs is 97%, suggesting that the generated sentences are similar to the original sentences. However, BERTScore does not take into account the specific characteristics of a candidate, such as how toned-down a paraphrase is compared to the original sentence. Therefore, we extend BERTScore and propose a Framing Effect Paraphrase (FEP) score to measure the framing effect-based *P, R, F*1 scores for paraphrases and their source sentences. We focus on the framing effect of toning down a description and introduce a dictionary of toned-down words. The FEP score will award a paraphrase if any word in the paraphrase occurs in the dictionary, and penalize the source sentence if it already contains toneddown words. The reason to award a paraphrase is to encourage the use of toned-down words, and the source sentence is penalized because the best a paraphrase can do is to retain a tone-down word (since it is already in the source sentence), so the paraphrase will not receive a score for that word match. The dictionary of toned-down words, denoted as A, is a list of toned-down words/rules, such as hedging words and uncertainty adjectives or adverbs, such as "may", "can", and "reportedly", and words indicating conditions, such as "if" and "when". Given a source sentence x and a paraphrase xˆ, to compute precision, the FEP score not only computes a maximal matching similarity (by greedy matching) of each token xˆj in xˆ to a token in x, but also computes a reward score of each token in xˆ by a scoring function ϕA, and precision is the larger of the two. Similarly, to compute recall, the FEP score computes both a matching similarity of each token xiin x to a token in xˆ and a penalty score of each token in x by 1 − ϕA(xi), and recall is the smaller of the two. We then measure F1 score by combining the precision and recall. The FEP precision, recall, and F1 are denoted as PFEP, RFEP, FFEP respectively and are defined as follows: $$P_{\mathrm{FEP}}=\frac{1}{|\hat{x}|}\sum_{\hat{x}_{j}\in\hat{x}}\max(\max_{x_{i}\in x}(\mathbf{x}_{i}^{\top}\hat{\mathbf{x}}_{j}),\phi_{\mathcal{A}}(\hat{x}_{j}))\tag{4}$$ where $$\phi_{\mathcal{A}}({\hat{x}}_{j})={\begin{cases}1&{\mathrm{if}}\;{\hat{x}}_{j}\in{\mathcal{A}}\\ 0&{\mathrm{if}}\;{\hat{x}}_{j}\notin{\mathcal{A}}\end{cases}}$$ $$({\mathfrak{S}})$$ $$R_{\mathrm{FEP}}=$$ RFEP =1 $$\frac{1}{|x|}\sum_{x_{i}\in x}\min(\max(\mathbf{x}_{i}^{\top}\hat{\mathbf{x}}_{j}),1-\phi_{\mathcal{A}}(x_{i})),\tag{6}$$ $$F_{\mathrm{FEP}}=2{\frac{P_{\mathrm{FEP}}\cdot R_{\mathrm{FEP}}}{P_{\mathrm{FEP}}+R_{\mathrm{FEP}}}}$$ i.e., $F_{\mathrm{FEP}}=2{\frac{P_{\mathrm{FEP}}\cdot R_{\mathrm{FEP}}}{P_{\mathrm{FEP}}+R_{\mathrm{FEP}}}}$. $$\left(7\right)$$ i) $$\quad$$ (6) $$\quad$$... The original sentence x and the paraphrase xˆ are used as the input sentence of a prompt for fine-tuning BERT and testing, respectively. 
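The FEP computation itself is compact. The NumPy sketch below illustrates Eqs. (4)-(7) under two simplifying assumptions: `embed` is a hypothetical function returning L2-normalized contextual token embeddings (in the style of BERTScore), and `TONED_DOWN` is only a small illustrative subset of the dictionary A, whose full contents are not listed in the paper.

```python
import numpy as np

# Illustrative subset of the toned-down dictionary A (hedges, uncertainty
# words, and condition markers); the paper's actual word list is larger.
TONED_DOWN = {"may", "might", "can", "could", "reportedly", "possibly", "if", "when"}

def fep_score(src_tokens, para_tokens, embed):
    """FEP precision, recall, and F1 of Eqs. (4)-(7)."""
    X = embed(src_tokens)        # (|x|, d) embeddings of the source sentence x
    X_hat = embed(para_tokens)   # (|x_hat|, d) embeddings of the paraphrase
    sim = X @ X_hat.T            # pairwise similarities x_i^T x_hat_j

    phi_para = np.array([1.0 if t.lower() in TONED_DOWN else 0.0 for t in para_tokens])
    phi_src = np.array([1.0 if t.lower() in TONED_DOWN else 0.0 for t in src_tokens])

    # Eq. (4): greedy matching, plus a reward for toned-down paraphrase tokens.
    precision = float(np.maximum(sim.max(axis=0), phi_para).mean())
    # Eq. (6): greedy matching, with a penalty for toned-down source tokens.
    recall = float(np.minimum(sim.max(axis=1), 1.0 - phi_src).mean())
    # Eq. (7): harmonic mean of the two.
    f1 = 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```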
The prompt pattern will be introduced in Section 4.1. We then calculate conditional probabilities in a given FFEP score range to measure the fine-grained performance changes caused by the toning down effect. For FFEP in [*a, b*), we compute the conditional probability of test pairs that are correctly predicted using the paraphrase input, given that the predictions of their original sentence are wrong. Specifically, we propose to measure the conditional probability, denoted as ∆ in a given FFEP score range, as follows: $$\Delta=\frac{\sum_{k\in\mathcal{T}}1\{f(x^{k})\neq y^{k},f(\hat{x}^{k})=y^{k}\}}{\sum_{k\in\mathcal{T}}1\{f(x^{k})\neq y^{k}\}},\quad\quad(8)$$ $$\text{given}\quad F_{\text{FEP}}(x^{k},\hat{x}^{k})\text{in}[a,b)$$ where T denotes the indices of test instances with FFEP scores in the given range, f denotes the prompt-based language model, f(x k) and f(ˆx k) denote the model prediction for the k-th test input x kand xˆ krespectively, and y denotes the correct label. ## 4 Experiments 4.1 Dataset And Model We focus on the relation extraction task of drugdrug interactions, and use the DDIExtraction dataset (Segura-Bedmar et al., 2013) for our experiments. The DDI dataset was constructed with MedLine abstracts and DrugBank documents on drug-drug interactions. The DDI dataset uses 4 positive DDI types to annotate the semantic relation for the interaction of a drug pair, including *Mechanism* (DDI-mechanism), *Effect* (DDIeffect), *Advice* (DDI-advise), and Int (DDI-int), and a false class, which we refer to as the *Negative* class. *Mechanism* denotes the relation about a pharmacokinetic mechanism, *Effect* is used to annotate an effect or a pharmacodynamics mechanism, *Advice* is the relation describing an advice or recommendation regarding a drug interaction, and Int is the type for any other positive interaction types (Zhang et al., 2018). The classes are imbalanced with 85.2% *Negative*, 6.2% *Effect*, 4.9% Mechanism, 3.1% *Advice*, and 0.6% Int. Among all positive DDI types, *Mechanism* and *Advice* are better recognized, while *Effect* and Int are harder to be identified. For data preprocessing, we follow (Yasunaga et al., 2022) to replace the names of drugs of a pair to be classified with "@DRUG$", and split the dataset into 25,296 training, 2,496 development, and 5,716 test instances. The language model we study in this work is BERT-base2, which uses a transformer (Vaswani et al., 2017) neural network pretrained on a 3.3 billion word corpus of general-domain English texts. For prompting BERT, we use the prompt-based fine-tuning method ADAPET3(Tam et al., 2021). The ADAPET method fine-tunes BERT via clozestyle prompt inputs with a [MASK] token (or tokens). The output is a softmax score produced by BERT over the [MASK] token vocabulary, which then corresponds to a class label. During training, the model is fine-tuned to minimize the sum of the decoupled label loss and the label-conditioned MLM loss. We stick to a single prompt pattern, "([MASK]) [TEXT]", where [MASK] is the label phrase to be predicted and [TEXT] is the input drug pair description. The verbalizers are {"0": "false", "DDI-effect": "effect", "DDI-mechanism": "mechanism", "DDI-advise": "advice", "DDI-int": "interaction"}. We use a simple prompt pattern in this work. Since we obtain similar findings with more complex prompt patterns, we do not include them in this paper. 
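To make the pattern concrete, the sketch below assembles the "([MASK]) [TEXT]" cloze prompt and scores the single-token verbalizer words at the mask position with a Hugging Face masked language model. It is only an inference-time illustration with an off-the-shelf bert-base-uncased checkpoint and assumes each verbalizer word is a single token in the vocabulary; it is not the ADAPET training procedure, and in practice the prompt-tuned checkpoint would be loaded instead.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Verbalizer words mapped back to the DDI labels they stand for.
WORD_TO_LABEL = {"false": "0", "effect": "DDI-effect", "mechanism": "DDI-mechanism",
                 "advice": "DDI-advise", "interaction": "DDI-int"}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def classify_ddi(text):
    """Fill the '([MASK]) [TEXT]' pattern and pick the best verbalizer word."""
    prompt = f"({tokenizer.mask_token}) {text}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    # Restrict the prediction to the verbalizer words (assumed single tokens).
    word_ids = {w: tokenizer.convert_tokens_to_ids(w) for w in WORD_TO_LABEL}
    best_word = max(word_ids, key=lambda w: logits[word_ids[w]].item())
    return WORD_TO_LABEL[best_word]

print(classify_ddi("@DRUG$ may increase the serum concentration of @DRUG$."))
```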
## 4.2 Measuring Availability Bias In our experiments, we construct a total of 100 dummy test prompts, with 20 templates randomly extracted from each class. For the dummy test prompt design, we search for UMLS keyword contents in a sentence and replace them with dummy phrases N/A, and apply the prompt pattern: ([MASK]) [TEXT]. The [TEXT] part contains a 2https://huggingface.co/ bert-base-uncased 3https://github.com/rrmenon10/ADAPET | Availability bias score (%) | | | | | | |-------------------------------|------------|------------|------------|-------------|------------| | Training size | 10-shot | 100-shot | 1,000-shot | 10,000-shot | 25,296 | | Negative | 26.3 (2.1) | 77.7 (2.1) | 39.7 (3.9) | 47.0 (3.6) | 52.0 (2.8) | | Mechanism | 20.0 (0.0) | 20.0 (0.0) | 13.7 (0.5) | 17.3 (1.2) | 16.7 (1.3) | | Advice | 20.0 (0.0) | 18.3 (1.7) | 8.3 (2.6) | 7.0 (2.4) | 8.3 (2.6) | | Effect | 33.7 (2.1) | 20.0 (0.0) | 16.3 (2.1) | 12.7 (2.5) | 11.7 (2.4) | | Int | 20.0 (0.0) | 19.3 (0.5) | 1.3 (0.5) | 10.0 (0.8) | 15.3 (1.2) | | Test data | # pairs | F1 | |-------------------------------------------|-----------|------| | Paraphrase, including invalid paraphrases | 300 | 44.6 | | Original sentences of the above | 300 | 10.2 | | Paraphrase, excluding invalid paraphrases | 208 | 55.7 | | Original sentences of the above | 208 | 9.0 | test sentence with multiple N/As and the [MASK] part will be the predicted label during testing. For UMLS keyword extraction, we exploit MetaMap4 and its Python wrapper5. An example dummy test prompt is shown below. ([MASK]) @DRUG$ competes with a N/A of N/A for N/A N/A N/A notably ## N/A N/A N/A N/A N/A @Drug$ N/A N/A N/A And N/A N/A The language model we measure against is BERT-base, fine-tuned via the ADAPET method with prompt inputs of the full DDI training set and few-shot training sets including 10, 100, 1,000, 10,000-shot training settings. Note that the original test F1 scores of the positive DDI types on the 10, 100, 1,000, 10,000-shot, and full training set are 5.04%, 12.36%, 56.16%, 74.64%, and 80.36% respectively. We repeat the experiments three times with different random seeds, and report the mean and standard deviation. Table 1 shows the availability bias score (%) for each class, on different fine-tuned BERT models. The rightmost column in Table 1 represents the scores for the BERT model fine-tuned on the full training set. The upper limit for the availability bias score is (100-20)/100=80%, and the closer the bias score gets to the upper limit, the more biased the model makes predictions towards the associated class. As expected, the bias towards the *Negative* class is the largest, by 52%, suggesting that when supposedly making random guess for dummy inputs, the model's behavior is vastly biased towards predicting drug pairs as no relation. In addition, results in column 2 to column 5 in Table 1 present availability bias scores for fewshot training cases. It is interesting to see that the 10-shot trained model exhibits lower bias score towards the majority class. However, its accuracy on the original full test set is only 5.04%. For the remaining cases, the conclusion that the model outputs are biased towards the majority class also holds in few-shot training settings. Though one may argue that labels in prompts do not matter much for classification as in traditional supervised learning (Min et al., 2022), we find that it is not true from our availability bias scores obtained. 
The label in a prompt still plays an important part in prompt-based training, leading to availability bias-like predictions. The practical implication of knowing this bias pattern is that when users see model predictions, they can be informed that a prediction given by a model is biased towards the predicted label by the quantified amount. ## 4.3 Measuring Framing Effect We first build the paraphrase dataset, where we randomly select 500 training instances, 50 development instances, and 300 test instances from the full DDI training, development, and | FFEP | # Ori. | # Pp. correct | ∆ | |--------------|----------|-----------------|------| | wrong | | | | | [0.99, 1.00) | 78 | 77 | 98.7 | | [0.97, 0.99) | 23 | 12 | 52.2 | | [0.95, 0.97) | 35 | 27 | 77.1 | | [0.00, 1.00) | 141 | 119 | 84.4 | test set respectively for paraphrasing. The paraphrases are generated by prompting GPT-3 with a demonstration and the actual query, where a priming example (in blue) is appended to the test sentence to be paraphrased (denoted as [INPUT]). In our experiments, we design 8 priming examples and randomly pick one of them as demonstration. An example GPT-3 query is given below. Paraphrase the following drug interaction description. === Although @DRUG$ exerts a slight intrinsic anticonvulsant effect, its abrupt suppression of the protective effect of a @DRUG$ agonist can give rise to convulsions in epileptic patients. Description: @DRUG$ exerts a slight intrinsic anticonvulsant effect, and its abrupt suppression of the protective effect of a @DRUG$ agonist is reportedly to give rise to convulsions in epileptic patients. === [INPUT] Rephrase the above description to sound soft. Write the description in a warm tone. Description: We illustrate several GPT-3 generated paraphrases of the test instances in Figure 2. For training and testing, we use all the generated paraphrases, although some paraphrases contain hallucinations (e.g., an untruthful trailing sentence that may come from the priming example) or miss major content (e.g., missing the mention of a drug to be predicted). The language model we measure against is the BERT-base model, fine-tuned via the ADAPET method on the 500 training instances. An example of a prompt input to BERT is as follows: ([MASK]) If you are taking @DRUG$ or other potent CYP3A4 inhibitors such as other azole antifungals (eg, itraconazole, @DRUG$) or macrolide antibiotics (eg, erythromycin, clarithromycin) or cyclosporine or vinblastine, the recommended dose of DETROL LA is 2 mg daily. Table 2 shows the test F1 scores on both the original test sentences and their GPT-3 paraphrases. As shown by the last two rows, the 208 valid paraphrases obtain an F1 score of 55.7%, which is 46.7% higher than the 208 original sentences which obtain an F1 score of 9.0%. More importantly, for the 208 drug pairs with valid paraphrases, we show in Table 3 that the ∆ is 84.4%, and if we focus on highly toned-down paraphrases in FFEP range [0.99, 1.00), the conditional probability reaches 98.7%, showing that framing an original drug-drug interaction description into a toned-down paraphrase helps to improve relation extraction. These results suggest that toning down the input text in a prompt can elicit a bias in predictions qualitatively similar to the framing effect. Furthermore, we illustrate the original sentences and their framed paraphrases through some test pairs in Figure 2. 
In Example 1, the correct relation "effect" is identified given the paraphrase input, while no interaction is detected given the original sentence input. Compared to the original text which uses the word "produce" to describe side effects, the words "can cause " used in the paraphrase are more toned-down. In Example 2, the correct relation is no interaction, which is identified correctly using the paraphrase input, while the wrong prediction "effect" is made using the original sentence. In the original sentence, "requires" is used for the list of drugs, while "may require" is used in the paraphrase, toning down the expression. ## 5 Discussion Few-shot Training vs. the Availability Bias. We have seen from Table 1 that at 10-shot, the availability bias towards *Negative* is not as obvious and the scores are more similar among the five classes. This is in contrast to the other few-shot learning cases with more training instances, where the availability bias becomes more obvious for the negative ![7_image_0.png](7_image_0.png) class as the number of training instances increases. It does not suggest that training on more instances will worsen the availability bias, but more classbiased training prompts will amplify the availability bias. That is, since more negative class instances are drawn for a larger number of training instances, the majority class has been seen by the model more frequently, causing biased predictions due to this increased availability. Prompt-based learning is not immune to imbalanced class distribution even under few-shot settings, as it is sometimes hard to obtain real class-balanced few-shot instances (this is elaborated in Appendix A). ## 6 Conclusion In this work, we identify and quantify two bias modes in BERT's prompt-based predictions, leveraging the availability bias and the framing effect on biomedical drug-drug interaction extraction. The error mode of the availability bias suggests that the label for a prompt still matters for prompt-based learning, as shown by a large availability bias score towards the majority class, which is 52% on a scale of 0 to 80%. We also find that a toning-down transformation of the drug-drug description in a prompt can elicit a bias similar to the framing effect, since when we tone down the input description, 84.4% of drug pairs that are wrongly classified with the original text are now correctly predicted with their toned-down paraphrases. For highly toned-down paraphrases (as measured by FFEP above 0.99), this conditional probability reaches 98.7%. The magnitude of these biases suggests that language model users need to be aware of the imprecision of their prompting results. ## 7 Limitations The limitations are that our use of GPT-3 sometimes generates hallucinated texts, thus reducing the effectiveness in generating valid paraphrases. The dictionary of toned-down words could include more semantic rules or could be built automatically, which will be left as future work. ## References Monica Agrawal, Stefan Hegselmann, Hunter Lang, Yoon Kim, and David Sontag. 2022. Large language models are zero-shot clinical information extractors. arXiv preprint arXiv:2205.12689. Naveed Akhtar and Ajmal Mian. 2018. Threat of adversarial attacks on deep learning in computer vision: A survey. *arXiv preprint arXiv:1801.00553*. 
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. *arXiv* preprint arXiv:2107.03374. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186. Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608. Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816–3830. Karen E. Jacowitz and Daniel Kahneman. 1995. Measures of anchoring in estimation tasks. Personality and Social Psychology Bulletin, 21:1161–1166. Erik Jones and Jacob Steinhardt. 2022. Capturing failures of large language models via human cognitive biases. In *Advances in Neural Information Processing Systems*. Daniel Kahneman and Shane Frederick. 2002. Representativeness revisited: Attribute substitution in intuitive judgment. *Heuristics and biases: The psychology of intuitive judgment*, 49:49–81. Daniel Khashabi, Xinxi Lyu, Sewon Min, Lianhui Qin, Kyle Richardson, Sean Welleck, Hannaneh Hajishirzi, Tushar Khot, Ashish Sabharwal, Sameer Singh, and Yejin Choi. 2022. Prompt waywardness: The curious case of discretized interpretation of continuous prompts. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3631–3643. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 3045–3059. 
Bai Li, Changyou Chen, Wenlin Wang, and Lawrence Carin. 2019. Certified adversarial robustness with additive noise. In Advances in Neural Information Processing Systems, pages 9464–9474. Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022a. What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 100–114. Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2022b. P-tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 61–68. Wing Hong Loke and Kai Foong Tan. 1992. Effects of framing and missing information in expert and novice judgment. *Bulletin of the Psychonomic Society*, 30:187–190. Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming fewshot prompt order sensitivity. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics, pages 8086–8098. Sílvia Mamede, Tamara van Gog, Kees van den Berge, Remy M. J. P. Rikers, Jan L. C. M. van Saase, Coen van Guldener, and Henk G. Schmidt. 2010. Effect of availability bias and reflective reasoning on diagnostic accuracy among internal medicine residents. *Journal of the American Medical Association*, 304:1198– 1203. David E. Meyer. 2004. Semantic priming well established. *Science*, 345:523–523. Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? arXiv preprint arXiv:2202.12837. Claudio S. Pinhanez. 2021. Expose uncertainty, instill distrust, avoid explanations: Towards ethical guidelines for AI. *CoRR*, abs/2112.01281. Shrimai Prabhumoye, Rafal Kocielnik, Mohammad Shoeybi, Anima Anandkumar, and Bryan Catanzaro. 2021. Few-shot instruction prompts for pretrained language models to detect social biases. arXiv preprint arXiv:2112.07868. Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying LMs with mixtures of soft prompts. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5203–5212. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations. Gustavo Saposnik, Donald Redelmeier, Christian C Ruff, and Philippe N Tobler. 2016. Cognitive biases associated with medical decisions: A systematic review. BMC Medical Informatics and Decision Making, 16. Teven Le Scao and Alexander Rush. 2021. How many data points is a prompt worth? 
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2627–2636. Timo Schick and Hinrich Schütze. 2021a. Exploiting cloze-questions for few-shot text classification and natural language inference. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics, pages 255– 269. Timo Schick and Hinrich Schütze. 2021b. It's not just size that matters: Small language models are also fewshot learners. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339–2352. Reva Schwartz, Apostol Vassilev, Kristen K. Greene, Lori Perine, Andrew Burt, and Patrick Hall. 2022. Towards a standard for identifying and managing bias in artificial intelligence. *Special Publication* (NIST SP). Isabel Segura-Bedmar, Paloma Martínez, and María Herrero-Zazo. 2013. Semeval-2013 task 9 : Extraction of drug-drug interactions from biomedical texts (ddiextraction 2013). In *Second Joint Conference* on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 341–350. Derek Tam, Rakesh R. Menon, Mohit Bansal, Shashank Srivastava, and Colin Raffel. 2021. Improving and simplifying pattern exploiting training. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4980–4991. Amos Tversky and Daniel Kahneman. 1973. Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5:207–232. Amos Tversky and Daniel Kahneman. 1974. Judgment under uncertainty: Heuristics and biases. *Science*, 185:1124–1131. Amos Tversky and Daniel Kahneman. 1981. The framing of decisions and the psychology of choice. *Science*, 211:453–458. Prasetya Utama, Nafise Sadat Moosavi, Victor Sanh, and Iryna Gurevych. 2021. Avoiding inference heuristics in few-shot prompt-based finetunings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9063–9074. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing nlp. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 2153–2162. Albert Webson and Ellie Pavlick. 2022. Do promptbased models really understand the meaning of their prompts? In *Proceedings of the 2022 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2300–2344. Michihiro Yasunaga, Jure Leskovec, and Percy Liang. 2022. LinkBERT: Pretraining language models with document links. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8003–8016. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In *International Conference on Learning Representations*. Yijia Zhang, Wei Zheng, Hongfei Lin, Jian Wang, Zhihao Yang, and Michel Dumontier. 2018. 
Drug–drug interaction extraction via hierarchical rnns on sequence and shortest dependency paths. *Bioinformatics*, 34:828–835. Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In Proceedings of the 38th International Conference on Machine Learning. ## Appendix A The Majority Label In Training Since the availability bias arises from the majority label in training, where the majority label is typically defined by class size, one may argue that there should not be an availability bias with an equal number of class instances. However, we constructed a balanced training data set of equal class size for fine-tuning (by sampling 2,000 instances from each of the five classes, and all four positive classes include duplicate instances since their class sizes are less than 2,000). Interestingly, we still observe that the predictions for the positive classes are biased towards the negative class on the test set, as shown by the confusion matrix in Figure 3 . Except Int , wrong predictions most frequently fall into the negative class for all other positive classes Effect , Mechanism , Advice . This does not contradict our conclusion that the availability bias exists, and it further suggests that the majority label should not be solely defined by class size, but the class with the highest input variance. ![10_image_0.png](10_image_0.png) ![10_image_1.png](10_image_1.png) ## B Number Of Dummy Test Prompts For Availability Bias Measurement We increase the number of dummy test prompts N to show the stability of our availability bias metric, where N ranges from 100 to 1000 with a step size of 100. We repeat our experiments three times for each N and calculate the mean and standard deviation. When creating dummy test prompts, if the number of dummy templates that need to be drawn from a class exceeds the class size, we enable upsampling of duplicate templates from that Figure 4 shows that the availability bias class. measurement is stable for N ≥ 100, suggesting that our proposed metric can be used with as few as 100 dummy test prompts. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract, Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. B ✓ **Did you use or create scientific artifacts?** Section 4, we use pretrained BERT and build upon the ADAPET open source code for experiments Section 4.2, we use the MetaMap software for UMLS keywords extraction Section 4.3, we use GPT-3 for paraphrase generation ✓ B1. Did you cite the creators of artifacts you used? Section 3, Section 4 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. 
Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4.1 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4.2, 4.3 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
choi-lee-2023-codeprompt
CodePrompt: Task-Agnostic Prefix Tuning for Program and Language Generation
https://aclanthology.org/2023.findings-acl.325
In order to solve the inefficient parameter update and storage issues of fine-tuning in Natural Language Generation (NLG) tasks, prompt-tuning methods have emerged as lightweight alternatives. Furthermore, efforts to reduce the gap between pre-training and fine-tuning have shown successful results in low-resource settings. As large Pre-trained Language Models (PLMs) for Program and Language Generation (PLG) tasks are constantly being developed, prompt tuning methods are necessary for the tasks. However, due to the gap between pre-training and fine-tuning different from PLMs for natural language, a prompt tuning method that reflects the traits of PLM for program language is needed. In this paper, we propose a Task-Agnostic prompt tuning method for the PLG tasks, CodePrompt, that combines Input-Dependent Prompt Template (to bridge the gap between pre-training and fine-tuning of PLMs for program and language) and Corpus-Specific Prefix Tuning (to update the parameters of PLMs for program and language efficiently). Also, we propose a method to provide richer prefix word information for limited prefix lengths. We prove that our method is effective in three PLG tasks, not only in the full-data setting but also in the low-resource setting and cross-domain setting.
# Codeprompt: Task-Agnostic Prefix Tuning For Program And Language Generation YunSeok Choi, Jee-Hyong Lee College of Computing and Informatics Sungkyunkwan University Suwon, South Korea {ys.choi, john}@skku.edu ## Abstract In order to solve the inefficient parameter update and storage issues of fine-tuning in Natural Language Generation (NLG) tasks, prompttuning methods have emerged as lightweight alternatives. Furthermore, efforts to reduce the gap between pre-training and fine-tuning have shown successful results in low-resource settings. As large Pre-trained Language Models (PLMs) for Program and Language Generation (PLG) tasks are constantly being developed, prompt tuning methods are necessary for the tasks. However, due to the gap between pretraining and fine-tuning different from PLMs for natural language, a prompt tuning method that reflects the traits of PLM for program language is needed. In this paper, we propose a Task-Agnostic prompt tuning method for the PLG tasks, CodePrompt, that combines Input-Dependent Prompt Template (to bridge the gap between pre-training and fine-tuning of PLMs for program and language) and CorpusSpecific Prefix Tuning (to update the parameters of PLMs for program and language efficiently). Also, we propose a method to provide richer prefix word information for limited prefix lengths. We prove that our method is effective in three PLG tasks, not only in the full-data setting but also in the low-resource setting and cross-domain setting. ## 1 Introduction As the software engineering field continues to grow, the use of AI to increase the efficiency of developers through code intelligence is becoming increasingly important. In particular, Program and Language Generation (PLG) tasks, such as code summarization, code generation, and code translation, are essential for developers to maximize their productivity. Code summarization allows developers to quickly understand the structure and purpose of a code, code generation assists by automatically generating code given a natural language description, and code translation facilitates the translation of code from one programming language to another, such as from Java to C\#, and vice versa. The recent success of Pre-trained Language Models (PLMs) for the PLG tasks, such as CodeBERT (Feng et al., 2020), PLBART (Ahmad et al., 2021), and CodeT5 (Wang et al., 2021), has been attributed to their utilization of large-scale code and text corpora. The "*pre-training then finetuning*" approach has been widely used to derive program language representations by selfsupervised training on large-scale unlabeled data, which can then be transferred to multiple downstream tasks with limited data annotation. These approaches have proven to be successful on coderelated downstream tasks. However, fine-tuning large pre-trained models can be expensive and timeconsuming in terms of both updating and storing all parameters. Furthermore, there is a discrepancy between pre-training and fine-tuning in the viewpoint of the inputs and the training objectives (Brown et al., 2020; Wang et al., 2020). It makes the model difficult to fully utilize the knowledge of pre-trained models, resulting in suboptimal performance on downstream tasks (Lester et al., 2021; Gu et al., 2022; Han et al., 2022). In order to address the issues of fine-tuning, prompt tuning approaches have recently been proposed in Natural Language Generation (NLG) tasks. 
To reduce the gap between pre-training and fine-tuning, Schick and Schütze (2021) proposed a prompt tuning method that combined manually crafted templates with the input. However, finding the optimal manual prompt template for each natural language task is arduous and laborious. Furthermore, due to the updating of all parameters of the language model, it also requires updating full model parameters for each task, similar to fine-tuning. It can also easily lead to sub-optimal language model parameters in low-resource settings. Li and Liang (2021) proposed a lightweight method, prefix tuning, to freeze the language model "In the mid-1970s punk rock was born in a dank little New York nightclub called CBGB\'s. It all started when rockers like Television, the Ramones and Patti Smith launched a frontal assault on the monolith of corporate rock \'n roll. Now another artistic revolt, Remodernism, is about to widen its offensive from the birthplace of punk." On May 10, 2006, the Stedelijk Museum and the University of Amsterdam staged a talk on remodernism by Daniel Birnbaum, contributing editor of Artforum, and Alison Gingeras, Assistant Curator, Guggenheim Museum. The summary is: In August 2006, an online group called "The Remodernists of Deviantart" was founded by Clay Martin. The group is composed of artists who are active on the website deviantart.com. In 2006, artist Matt Bray said, (a) An example of Wikipedia def change_return_type(f): \# Converts the returned value of wrapped function to the \# type of the first arg or to the type specified by a kwarg key \# return_type's **value.** @wraps(f) def wrapper(*args, **kwargs): if kwargs.has_key('return_type'): return_type = kwargs['return_type'] kwargs.pop('return_type') return return_type(f(*args, **kwargs)) elif len(args) > 0: return_type = type(args[0]) return return_type(f(*args, **kwargs)) else: return f(*args, **kwargs) return wrapper (b) An example of CodeSearchNet (Husain et al., 2019) Figure 1: (a) PLMs for natural language are pre-trained ![1_image_0.png](1_image_0.png) ![1_image_1.png](1_image_1.png) ![1_image_2.png](1_image_2.png) ![1_image_3.png](1_image_3.png) ![1_image_4.png](1_image_4.png) using text from Wikipedia. (b) PLMs for program and language are pre-trained using code and comment from CodeSearchNet. The text of PLMs for program and language is agnostic to task. parameters and instead optimize a sequence of continuous task-specific vectors (prefix). This method has shown performance comparable to fine-tuning in the full data setting while updating much fewer training parameters of a large pre-trained model. Also, even in the low-resource setting, this method has been proven to be effective by prefix initialization with words specific to the task. However, those prompt tuning approaches for most NLG tasks are difficult to apply directly to the PLG tasks. The pre-training of PLMs for natural language involves the use of large amounts of text data consisting of a series of sentences. This dataset contains task-specific natural language instructions (templates). As shown in Figure 1a, task-specific natural language instructions, such as "Summarize _", "TL;DR _", and "The summary is _", appear in data for pre-training. These templates can bridge the gap between pre-training and fine-tuning. However, datasets of PLMs for program and language hardly contain such task-specific textual instructions (templates). PLMs for program and language are usually pre-trained with unimodal data (codeonly) or bimodal data (code-comment). 
Input is either code or its corresponding comment, but there are very few task-specific natural language instructions, shown in Figure 1b. Due to the lack of task-specific instructions in the pre-training stage, it is hard to manually select task-dependent prompt templates for the PLG tasks during the fine-tuning stage. Prefix tuning in the NLG tasks improved performance by initializing prefix embedding from task-specific words, especially in low-resource settings. However, for PLG tasks, we can hardly adopt such initialization approaches because there are very few task-specific words in data for PLG tasks, as mentioned. Also, unlike NLG tasks, PLG tasks are two cross-modal generation tasks. It is not appropriate to initialize encoder and decoder prefixes with the same words in the same language. Prefix embeddings of encoder and decoder need to be initialized with different words of their corresponding language. Therefore, we propose a task-agnostic prompt tuning method, CodePrompt, applicable to any PLG tasks. Our method consists of three components: input-dependent prompt template, corpusspecific prefix tuning, and multi-word prefix initialization. First, we propose the input-dependent prompt template by combining the template with input to bridge the gap between pre-training and fine-tuning in PLMs for program and language. Input-dependent prompt template contains unified backbone words and input-specific words, regardless of the task. Second, we propose the corpusspecific prefix tuning to reduce the number of parameters for update considering the traits of two cross-modal tasks. They can effectively transfer the task and corpus specific information in cross-modal tasks, especially in low-resource settings and zeroshot settings. Third, we propose the multi-word prefix initialization to provide richer information to prefix embeddings while maintaining the number of parameters within the limited prefix length. Our CodePrompt shows great performances on three PLG tasks in full data, low-resource, and crossdomain settings. ## 2 Related Work Pre-trained Model for Program and Language As the pre-trained models based on the Transformer architecture (Vaswani et al., 2017) have achieved great success in NLG tasks, the methods on extending natural language-based methods to code ![2_image_0.png](2_image_0.png) have recently been proposed in PLG tasks. Feng et al. (2020) proposed CodeBERT, a pretrained language model, based on BERT (Devlin et al., 2019). The model learns cross-modal representation both program language and natural language in the pre-training stage. Guo et al. (2020) proposed GraphCodeBERT to incorporate the code structure into CodeBERT. However, such models are vulnerable to the PLG task because they learn the PL-NL representation with the transformer encoder only. Ahmad et al. (2021) proposed PLBART to support both code understanding and generation tasks using encoder-decoder model BART (Lewis et al., 2020). Also, Wang et al. (2021) proposed CodeT5, a pre-trained sequence-to-sequence model based on T5 (Raffel et al., 2022), to facilitate generation tasks for source code. We utilize the framework of the CodeT5, the state-of-the-art pre-trained model, in the PLG tasks for an effective prompt tuning method of PLM for program and language. Prompt tuning for Generation In NLG tasks, the concept of prompt-tuning originated from incontext learning, which was first introduced in GPT3 (Brown et al., 2020). 
Schick and Schütze (2021) explored the use of fixed-prompt language model (LM) tuning for few-shot text summarization using manually created templates. Li and Liang (2021) investigated prefix tuning, fixed-LM prompt tuning, where learnable prefix tokens are prepended to the input while the parameters in pre-trained models are frozen. Lester et al. (2021) proposed soft prompt as a simplification of prefix tuning. Several prompt tuning methods have been proposed for NLG tasks, but they are not applicable to PLG tasks because they require additional data information for specific natural language tasks. Recently, Wang et al. (2022) evaluated the effect of prompt tuning in Program and Language Understanding and Generation tasks. However, they used the prompt tuning method with updating all parameters of the pre-trained language model, not freezing the parameters. Moreover, they did not consider the gap between pre-training and fine-tuning of PLMs for program and language. ## 3 Codeprompt In this section, we explain our prompt tuning method, CodePrompt, in detail. Our prompt tuning method aims to consider the traits of PLMs for program and language. Figure 2 shows the architecture of our method, which is on the basis of the CodeT5 framework, including input-dependent prompt template, corpus-specific prefix tuning, and multi-word prefix initialization. ## 3.1 Input-Dependent Prompt Template In NLG tasks, it is common to manually craft templates specific to tasks. However, in PLG tasks, it is hard to select task-specific templates because they are rarely seen in the pre-training stage. Instead of providing task-related templates, we will provide input-dependent templates. Input-dependent templates, that are agnostic to the task, can help in understanding the input by reducing the gap between pre-training and fine-tuning. PLMs for program and language are pre-trained using bi-modal data, pairs of code and its comment, but provided with unimodal data, either code or comment, in the finetuning stage. If we provide additional information that seems like comment or code, it can bridge the gap between inputs of pre-training and fine-tuning stages. It will help better transfer the knowledge gained in pre-training stage to fine-tuned models. There is a lot of information about code such as repository (owner), path, and library information, but we choose three easily extractable information from code: language, function name, and keywords. The backbone of the template is "Language _, Function Name _, Keywords _". The backbone words are task-agnostic, fixed and unified template, and "_" is dependent to the input. The information of language and function name can be easily obtained, and keywords can be extracted by various keyword extraction methods. To prove the efficiency of our template, we use a simple but effective keyword extraction method, TextRank (Mihalcea and Tarau, 2004). For example, if a code snippet is given in the code summarization task, the following prompt for the code will be added: "Language: Python, Function Name: simulate_request, Keywords: simulate request wsgi ...", as shown in Figure 2. The actual ground truth comment for the code is "simulate a request to a wsgi application". This prompt template can act not only as the comment for the code in natural language, but also as a hint for summarization. The pair of the template and the input will act like a bimodal pair seen in the pre-training stage. 
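As a rough illustration of how such a template could be assembled for a code input, consider the sketch below. The function-name regex and the frequency-based keyword ranking are simplifications standing in for the TextRank extractor used in the paper, the example snippet is invented, and the exact way the template is concatenated with the input follows the paper's Figure 2 rather than this sketch.

```python
import re
from collections import Counter

STOP = {"def", "return", "self", "if", "else", "for", "while", "in", "not", "and", "or"}

def input_dependent_template(code, language="Python", num_keywords=10):
    """Build 'Language: _, Function Name: _, Keywords: _' from a code snippet."""
    match = re.search(r"def\s+(\w+)|function\s+(\w+)|\w+\s+(\w+)\s*\(", code)
    func_name = next((g for g in (match.groups() if match else []) if g), "unknown")
    # Stand-in for TextRank: rank identifier tokens by frequency.
    tokens = [t.lower() for t in re.findall(r"[A-Za-z_]\w*", code) if t.lower() not in STOP]
    keywords = [w for w, _ in Counter(tokens).most_common(num_keywords)]
    return (f"Language: {language}, Function Name: {func_name}, "
            f"Keywords: {' '.join(keywords)}")

snippet = ("def simulate_request(app, path):\n"
           "    env = create_environ(path)\n"
           "    return app(env, start_response)")
prompt_input = input_dependent_template(snippet) + " " + snippet  # template combined with the code input
```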
In the code generation task where an NL comment is given, we also use the template without the backbone word "Function Name". In this case, the keywords from the comment will act like a simple version of the corresponding code, because most of keywords from the comment may also appear in the code. ## 3.2 Corpus-Specific Prefix Tuning In prefix-tuning, prefix embedding initialization is important. Initialization with task-specific prefix words has been proven to be effective, especially in low-resource settings, because it can easily transfer task-specific knowledge to the pre-trained model. However, as mentioned above, in the pretraining stage of PLMs for program and language, the task-specific words were rarely seen. Instead of task-specific words, we try to transfer task-related knowledge to the fine-tuned model by providing frequent words in input and output corpora. We initialize the encoder prefix with common words in the input corpus and the decoder prefix with common words in the output corpus. By providing common words of each input and output corpus, we can indirectly provide what the model is required to do for the given task. For the transfer of corpus-specific information to each encoder and decoder, we initialize using corpus-specific prefix words corresponding to each input and output corpus. Corpus-specific prefix embeddings are initialized with input and output corpus's common words obtained from the train data. Our encoder combines the input embedding and input corpus prefix embedding with a bidirectional language model to learn the context. If the input corpus prefix embeddings of prefix words are composed of words representing the input corpus, the encoder can learn the global features of the corpus of the prefix embeddings and the individual feature of the input. Our decoder generates output words with a left-to-right language model through output corpus prefix embeddings. If the output corpus prefix words are the output corpus representative words, the output sequence is generated by considering frequently occurring words such as frequently occurring words "return", "get", "method", and "file". ## 3.3 Multi-Word Prefix Initialization Prefix tuning has shown effective performance in both full data and low resource settings, but only one word is used to initialize each prefix embedding. If we provide as many words as possible, it will help to transfer more knowledge of the pretrained model to the fine-tuned model. We propose a multi-word prefix initialization method. Each prefix embedding is initialized with multiple words within the limited prefix length. Let N be the prefix length and M be the corpus common words (N « M). First, we obtain the embedding of the corpus common words M from the embedding layer of a pre-trained language model. Then, we select the top-N of the corpus common words as the core words. For each N core word, K words with high cosine similarity among M words are extracted. We combine one core word and its similar words using a feed-forward neural network (FFN). K + 1 word embeddings are averaged through mean pooling in the hidden layer and then obtained the prefix embedding of all layers, as shown in Figure 2. Multi-word prefix initialization can provide a rich set of prefix words while maintaining the same number of parameters as the FFN used for prefix tuning for stable optimization. ## 3.4 Codeprompt-Based Codet5 Architecture As shown in Figure 2, we utilize our CodePrompt to apply to the framework of CodeT5. 
First, we extract common words from the input language and output language to obtain corpus-specific prefix words. Then, the prefix embeddings of our encoder and decoder are initialized through the multi-word prefix initialization method. When a code or comment is given as input, we generate its input-dependent prompt template and combine the template with the input. Our encoder and decoder are frozen and only the prepended prefix embedding is trained. ## 4 Experiment Setup 4.1 Downstream Tasks & Datasets We evaluate our CodePrompt method on three generation tasks in CodeXGLUE benchmark (Lu et al., 2021): code summarization, code generation and code translation. **Code Summarization** is the task of generating a natural language summary from code. The dataset consists of six programming languages, namely, Ruby, Javascript, Go, PHP, Java, and Python. **Code Generation** is the task of generating code from its natural language description. Code Translation is the task of generating a code of target language from the code of source language. Table 1 is detailed statistics of the datasets. ## 4.2 Evaluation Metrics BLEU (Papineni et al., 2002) computes the n-gram overlap between a generated sequence and a reference. **CodeBLEU** (Ren et al., 2020) is a metric for measuring the quality of the code. Unlike BLEU, CodeBLEU considers grammatical and logical correctness based on the abstract syntax tree and the data-flow structure. we refer to CodeBLEU as C.BLEU. **Exact Match (EM)** measures whether a generated sequence exactly matches the reference. **\#Param** is the number of parameters to be updated. For more details about the evaluation metrics, please refer to Appendix B. Task Language Train Valid Test Ruby 24K 1.4K 1.2K Javascript 58K 3.8K 3.2K Go 167K 7.3K 8.1K PHP 241K 12.9K 14K Java 164K 5.1K 10.9K Python 251K 13.9K 14.9K Generation NL to Java 100K 2K 2K | Summarization | |-----------------| Translation Java to C# 10.3K 0.5K 1K C# to Java 10.3K 0.5K 1K Table 1: Statistics of three PLG tasks in CodeXGLUE benchmark datasets (Lu et al., 2021). ## 4.3 Baseline Methods We compare our method with the state-of-the-art (SOTA) pre-trained models. As encoder-only models, we compare with RoBERTa (Liu et al., 2019), CodeBERT (Feng et al., 2020), GraphCodeBERT (Guo et al., 2020), and DOBF (Roziere et al., 2021). For decoder-only models, we compare with GPT2 (Radford et al., 2019) and code-version GPT models, CodeGPT-2 and CodeGPTadapted. As encoderdecoder models, we consider PLBART (Ahmad et al., 2021) and finetuned-CodeT5 (Wang et al., 2021). And we compare our method with CodeT5 which is trained by Prefix-tuning (Li and Liang, 2021). ## 4.4 Training Details We implement our prompt method based on the Hugging Face Transformer models1(Wolf et al., 2020). We use the AdamW optimizer (Loshchilov and Hutter, 2019) and a linear learning rate scheduler. We follow the implementation details of CodeT5 for all configuration settings. The default prefix length of each task is set to 200, 250, and 100 for code summarization, code generation, and code translation, respectively. We choose a simple but effective keyword extraction method, TextRank (Mihalcea and Tarau, 2004). The number of language common words M is 400, the number of similar words K is 3, and the number of keywords in input-dependent prompt template is 10. For detailed configurations for each task, refer to Appendix A. 
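Combining Sections 3.2-3.3 with the hyperparameters just listed (M = 400 corpus common words, K = 3 similar words, default prefix length 200), the corpus-specific, multi-word prefix initialization can be sketched roughly as follows. This is a simplified PyTorch illustration under our own naming; the actual implementation derives per-layer key/value prefixes for the frozen CodeT5 encoder and decoder, which is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiWordPrefixInit(nn.Module):
    """Rough sketch of corpus-specific, multi-word prefix initialization (Secs. 3.2-3.3).

    One such module would be created per side: the encoder prefix is built from
    common words of the input corpus, the decoder prefix from common words of the
    output corpus. Only the prefix parameters (and this FFN) are trained.
    """

    def __init__(self, embedding: nn.Embedding, prefix_len=200, num_common=400,
                 k_similar=3, hidden=512):
        super().__init__()
        self.embedding = embedding            # frozen PLM token embeddings
        self.prefix_len = prefix_len          # N: number of prefix positions
        self.num_common = num_common          # M: corpus common words considered
        self.k_similar = k_similar            # K: similar words pooled per core word
        d = embedding.embedding_dim
        # FFN re-parameterization used for stable prefix optimization.
        self.ffn = nn.Sequential(nn.Linear(d, hidden), nn.Tanh(), nn.Linear(hidden, d))

    def forward(self, common_token_ids: torch.LongTensor) -> torch.Tensor:
        # common_token_ids: ids of the corpus's most frequent words, sorted by frequency.
        common = self.embedding(common_token_ids[: self.num_common]).detach()   # (M, d)
        core = common[: self.prefix_len]                                         # (N, d) top-N core words
        # Cosine similarity between every core word and all M common words.
        sim = F.cosine_similarity(core.unsqueeze(1), common.unsqueeze(0), dim=-1)  # (N, M)
        idx = sim.topk(self.k_similar + 1, dim=-1).indices    # K neighbours + the core word itself
        pooled = common[idx].mean(dim=1)                      # mean-pool the K+1 embeddings -> (N, d)
        return self.ffn(pooled)                               # prefix embeddings to prepend and train
```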
| Methods | #Param | Ruby | Javascript | Go | PHP | Java | Python | Overall |
|---|---|---|---|---|---|---|---|---|
| *Fine-Tuning* | | | | | | | | |
| RoBERTa | 125M | 11.17 | 11.90 | 17.72 | 24.02 | 16.47 | 18.14 | 16.57 |
| CodeBERT | 172M | 12.16 | 14.90 | 18.07 | 25.16 | 17.65 | 19.06 | 17.83 |
| PLBART | 139M | 14.11 | 15.56 | 18.91 | 23.58 | 18.45 | 19.30 | 18.32 |
| CodeT5-base | 222M | 15.24 | 16.16 | 19.56 | 26.03 | 20.31 | 20.01 | 19.55 |
| +Input-Dependent (ours) | 222M | 15.44 | 16.21 | 19.66 | 26.26 | 20.39 | 20.27 | 19.71 |
| *Prompt Tuning* | | | | | | | | |
| Prefix-tuning | 20M | 14.91 | 15.36 | 19.15 | 25.03 | 19.96 | 19.93 | 19.06 |
| CodePrompt (ours) | 20M | 15.71 | 15.78 | 19.43 | 25.43 | 20.13 | 20.25 | 19.46 |

Table 2: Smoothed BLEU-4 scores on the code summarization task.

| Methods | #Param | EM | BLEU | C.BLEU |
|---|---|---|---|---|
| *Fine-Tuning* | | | | |
| GPT-2 | 124M | 17.35 | 25.37 | 29.69 |
| CodeGPT-2 | 124M | 18.25 | 28.69 | 32.71 |
| CodeGPT-adapted | 124M | 20.10 | 32.79 | 35.98 |
| PLBART | 139M | 18.75 | 36.69 | 38.52 |
| CodeT5-base | 222M | 22.30 | 40.73 | 43.20 |
| +Input-Dependent (ours) | 222M | 23.05 | 43.13 | 43.24 |
| *Prompt Tuning* | | | | |
| Prefix-tuning | 20M | 21.30 | 35.72 | 36.32 |
| CodePrompt (ours) | 20M | 21.85 | 37.51 | 38.19 |

Table 3: Results on the code generation task (NL to Java).

| Methods | #Param | Java to C# BLEU | Java to C# EM | C# to Java BLEU | C# to Java EM |
|---|---|---|---|---|---|
| *Fine-Tuning* | | | | | |
| RoBERTa (code) | 125M | 77.46 | 56.10 | 71.99 | 57.90 |
| CodeBERT | 172M | 79.92 | 59.00 | 72.14 | 58.80 |
| GraphCodeBERT | 172M | 80.58 | 59.40 | 72.64 | 58.80 |
| PLBART | 139M | 83.02 | 64.60 | 78.35 | 65.00 |
| CodeT5-base | 222M | 84.03 | 65.90 | 79.87 | 66.90 |
| +Input-Dependent (ours) | 222M | 85.23 | 66.60 | 81.60 | 67.20 |
| *Prompt Tuning* | | | | | |
| Prefix-tuning | 20M | 80.58 | 57.60 | 77.21 | 61.10 |
| CodePrompt (ours) | 20M | 81.82 | 59.80 | 79.27 | 63.50 |

Table 4: Results on the code translation task.

## 5 Experiment Results

## 5.1 Full-Data Setting

Code Summarization Table 2 shows the results of code summarization for six programming languages in the full-data setting. First, to demonstrate the effectiveness of our input-dependent prompt template, we fine-tune the SOTA pre-trained model, CodeT5-base, with only the input-dependent prompt template (the language model is not frozen). The model fine-tuned with only the input-dependent prompt template outperforms the other SOTA models. This shows that our input-dependent prompt template effectively acts as a hint for generating the summary and reduces the gap between pre-training and fine-tuning of PLMs for program and language. However, simply combining the template with the input is not parameter-efficient, because all parameters are updated just as in fine-tuning. Among prompt tuning methods, prefix-tuning (Li and Liang, 2021) performs slightly below the fine-tuning methods, but with a very small number of parameters to be updated. Our method, CodePrompt, achieves performance comparable to fine-tuning while updating far fewer parameters: the number of parameters to update is about 1/11. For Ruby and Python, the scores are higher than those of the fine-tuned CodeT5-base by 0.47 and 0.24, respectively.

Code Generation The results of the code generation task in the full-data setting are shown in Table 3. Among the fine-tuning methods, CodeT5 with our input-dependent prompt template performs best. The BLEU score increased by 2.4 compared to CodeT5-base.
Additionally, CodePrompt achieves much better EM, BLEU, and CodeBLEU scores than prefix-tuning. In particular, our method has almost the same EM score as CodeT5-base and even better performance than the other pre-trained models (except CodeT5) while updating very few parameters. This shows that CodePrompt is highly effective for PLMs for program and language with only a small number of updated parameters.

Code Translation Table 4 shows the results of code translation from Java to C# and from C# to Java in the full-data setting. As with the code summarization and code generation tasks, our input-dependent prompt template with fine-tuning shows the best performance, and CodePrompt achieves very strong results compared to the other baselines. Especially for C# to Java, CodePrompt reaches a BLEU score close to that of the fine-tuned CodeT5 and outperforms PLBART. Compared to PLBART (139M) and CodeT5 (222M) in the full-data setting, CodePrompt shows almost comparable performance while updating only 20M parameters, proving its effectiveness in the full-data setting and in PLG tasks more generally.

## 5.2 Low-Resource Setting

We evaluate our prompt method on code summarization in low-resource settings. We randomly selected 8, 16, 32, 64, 128, and 256 training instances from the original data. Figure 3 shows the results of code summarization on six programming languages in the low-resource setting. Our method outperforms the prefix tuning method in all few-shot environments for all languages. For all programming languages, prefix-tuning performs poorly with very few instances (8 or 16 shots), whereas CodePrompt performs well even with 16 shots. In particular, for Go, prefix tuning fails to learn with up to 64 shots, whereas our method starts learning from 16 shots. This shows that by initializing the prefix embedding for each language from corpus-specific prefix words, the model can learn the global features of the language with little data. Furthermore, fine-tuning is highly sub-optimal with few-shot data and does not generalize well. For the few-shot results on generation and translation, please refer to Appendix C.

## 5.3 Cross-Domain Setting

PLMs for PLG tasks should generalize to unseen programming languages: they need to understand and process a new language for which no training data exists. We study the benefits of CodePrompt in cross-domain (zero-shot) settings. Table 5 presents the results of code summarization in cross-domain settings. Each of three programming languages (Go, Java, and Python) is used as the training source language, and the other three programming languages (Ruby, Javascript, and PHP) are regarded as unseen target languages. The results show that CodePrompt is much more effective in cross-domain settings than fine-tuning. Fine-tuning updates all parameters and over-specializes to the source language, whereas CodePrompt, based on prefix tuning, only updates prefix embeddings while keeping the language model frozen. For this reason, CodePrompt shows better performance in cross-domain settings. For the results on other languages, refer to Appendix D.

## 5.4 Ablation Study

We perform an ablation study on the code summarization task in the full-data setting. As shown in Table 6, we observe that removing the input-dependent prompt template significantly decreases performance.
The scores drop by 0.57, 0.15, and 0.30 for Ruby, Java, and Python, respectively. This result shows that our input-dependent prompt template is effective at bridging the gap between pre-training and fine-tuning of PLMs for program and language. Corpus-specific prefix tuning and multi-word prefix initialization help to improve performance and reduce the number of PLM parameters to update and store. For the results on other languages and the effect of the prefix module, refer to Appendix E.

| Source | Methods | Ruby | Javascript | PHP |
|---|---|---|---|---|
| Go | Fine-tuning | 11.94 | 12.43 | 18.61 |
| | CodePrompt | 13.05 | 13.01 | 19.27 |
| Java | Fine-tuning | 14.35 | 14.21 | 22.31 |
| | CodePrompt | 15.54 | 14.90 | 23.42 |
| Python | Fine-tuning | 15.13 | 14.47 | 21.56 |
| | CodePrompt | 15.90 | 15.59 | 23.43 |
| - | CodeT5-base | 15.24 | 16.16 | 26.03 |

Table 5: Smoothed BLEU-4 scores on the code summarization task in cross-domain settings (source language used for training; target languages unseen).

## 5.5 Effect Of Prefix Length

We also study the impact of different lengths of prefix prompts. We illustrate the performance under different prefix prompt lengths for the three tasks. As shown in Figure 4, prefix prompts that are too short or too long can degrade model performance. The best prefix length varies slightly for each task and programming language. In our work, the prefix lengths are set to 200, 250, and 100 for the code summarization, code generation, and code translation tasks, respectively.

| Methods | Ruby | Java | Python |
|---|---|---|---|
| CodePrompt | 15.71 | 20.13 | 20.25 |
| w/o Input. | 15.14 | 19.98 | 19.95 |
| w/o Corpus. | 15.46 | 20.04 | 20.06 |
| w/o Multi-Word. | 15.50 | 20.06 | 20.06 |

Table 6: Ablation study on the code summarization task in the full-data setting.

| Methods | #Param | Ruby | Java | Python |
|---|---|---|---|---|
| *Fine-Tuning* | | | | |
| CodeT5-small | 60M | 14.87 | 19.92 | 20.04 |
| CodeT5-base | 222M | 15.24 | 20.31 | 20.01 |
| CodeT5-large | 737M | 15.58 | 20.74 | 20.57 |
| *Prompt Tuning* | | | | |
| Prefix-tuning | 52M | 15.06 | 20.32 | 20.09 |
| CodePrompt | 52M | 15.79 | 20.61 | 20.35 |

Table 7: Results of CodeT5 models of different sizes on the code summarization task.

## 5.6 Expand To Large Pre-Trained Model

We applied CodePrompt to CodeT5-large to take advantage of its parameter efficiency. Table 7 shows the results of applying CodePrompt to the CodeT5-large model. Our method performs better than the CodeT5-base model and is comparable to fine-tuning CodeT5-large. In particular, when using CodePrompt on the large model, the number of updated parameters is much smaller than when fine-tuning CodeT5-base, yet the performance is much better. CodePrompt is thus very effective even for models with a large number of parameters. For more results on other languages, please refer to Appendix F.

## 6 Conclusion

In this work, we proposed CodePrompt, a task-agnostic prompt tuning method for Program and Language Generation tasks. CodePrompt combines an input-dependent prompt template, which bridges the gap between pre-training and fine-tuning of PLMs for program and language, with corpus-specific prefix tuning, which efficiently updates the parameters of PLMs. Additionally, we proposed a multi-word prefix initialization method to provide richer prefix word information within limited prefix lengths. We demonstrated that our method is effective on three PLG tasks in full-data and low-resource settings, as well as in cross-domain settings.

## Limitations

In this section, we discuss some limitations and potential risks of our work. (1) CodePrompt focuses on Program and Language Generation tasks, so it is difficult to directly apply our method to Program and Language Understanding tasks.
(2) We designed an input-dependent prompt template with fixed backbone words (Language, Function Name, Keywords) for a simple and efficient template. A more effective template can be crafted. (3) We applied only CodeT5, the most state-of-the-art model, as the basis of the framework of our CodePrompt. ## Ethics Statement This paper proposes a task-agnostic prompt tuning method for the PLG tasks to bridge the gap between pre-training and fine-tuning of PLMs for program and language and to efficiently update the parameters of PLMs for program and language, which is beneficial to energy efficient Program and Language applications. The research conducted in this paper will not cause any ethical issues or have any negative social effects. The data used is publicly accessible and is commonly used by researchers as a benchmark for program and language generation tasks. The proposed method does not introduce any ethical or social bias, or worsen any existing bias in the data. ## Acknowledgements This work was supported by Institute of Information & communications Technology Planning & Evaluation(IITP) grant funded by the Korea government(MSIT) (No.2019-0-00421, AI Graduate School Support Program(Sungkyunkwan University)), (No.2022-0-01045, Self-directed Multi-modal Intelligence for solving unknown, open domain problems), and (No.2020-000990,Platform Development and Proof of High Trust & Low Latency Processing for Heterogeneous·Atypical·Large Scaled Data in 5G-IoT Environment) ## References Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Unified pre-training for program understanding and generation. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2655–2668, Online. Association for Computational Linguistics. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems 33:* Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. CodeBERT: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1536–1547, Online. Association for Computational Linguistics. Yuxian Gu, Xu Han, Zhiyuan Liu, and Minlie Huang. 2022. PPT: Pre-trained prompt tuning for few-shot learning. 
In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 8410–8423, Dublin, Ireland. Association for Computational Linguistics. Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, et al. 2020. Graphcodebert: Pre-training code representations with data flow. ArXiv preprint, abs/2009.08366. Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2022. Ptr: Prompt tuning with rules for text classification. *AI Open*, 3:182–192. Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Codesearchnet challenge: Evaluating the state of semantic code search. *ArXiv preprint*, abs/1909.09436. Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven CH Hoi. 2022. Coderl: Mastering code generation through pretrained models and deep reinforcement learning. *ArXiv preprint*, abs/2207.01780. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582– 4597, Online. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *ArXiv preprint*, abs/1907.11692. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *7th International* Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, et al. 2021. Codexglue: A machine learning benchmark dataset for code understanding and generation. ArXiv preprint, abs/2102.04664. Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In *Proceedings of the 2004 Conference on Empirical Methods in Natural Language* Processing, pages 404–411, Barcelona, Spain. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. 
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2022. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(1). Shuo Ren, Daya Guo, Shuai Lu, Long Zhou, Shujie Liu, Duyu Tang, Neel Sundaresan, Ming Zhou, Ambrosio Blanco, and Shuai Ma. 2020. Codebleu: a method for automatic evaluation of code synthesis. *ArXiv* preprint, abs/2009.10297. Baptiste Roziere, Marie-Anne Lachaux, Marc Szafraniec, and Guillaume Lample. 2021. Dobf: A deobfuscation pre-training objective for programming languages. *ArXiv preprint*, abs/2102.07492. Timo Schick and Hinrich Schütze. 2021. Few-shot text generation with natural language instructions. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 390– 402, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Chaozheng Wang, Yuanhang Yang, Cuiyun Gao, Yun Peng, Hongyu Zhang, and Michael R Lyu. 2022. No more fine-tuning? an experimental evaluation of prompt tuning in code intelligence. In *Proceedings of* the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pages 382–394. Chengyi Wang, Yu Wu, Shujie Liu, Zhenglu Yang, and Ming Zhou. 2020. Bridging the gap between pretraining and fine-tuning for end-to-end speech translation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9161–9168. Yue Wang, Weishi Wang, Shafiq Joty, and Steven C.H. Hoi. 2021. CodeT5: Identifier-aware unified pretrained encoder-decoder models for code understanding and generation. In *Proceedings of the 2021* Conference on Empirical Methods in Natural Language Processing, pages 8696–8708, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. ## A Implementation Details We set the environment for all experiments as follows: one NVIDIA 3090 GPU with 24GB graphic memory, Ubuntu 20.04, Python 3.8, and CUDA 11.7 version. In the full data settings, the average training time for CodePrompt takes about 2, 4, 15, 18, 14, and 20 hours on ruby, javascript, go, php, java, and python, respectively. The average training time for code generation and code translation takes about 14 and 5 hours, respectively. The description of the hyperparameter for the experiment is shown in the tables below. ![11_image_0.png](11_image_0.png) ## B Evaluation Metrics BLEU(Papineni et al., 2002) is a Bilingual Evaluation Understudy to measure the quality of generated code summaries. 
The formula for computing BLEU is as follows: $${\mathrm{BLEU}}={\mathrm{BP}}\cdot\exp\sum_{n=1}^{N}\omega_{n}\log p_{n}$$ where pn is the geometric average of the modified n-gram precisions, ωn is uniform weights 1/N and BP is the brevity penalty. CodeBLEU (Ren et al., 2020) is an automatic evaluation of code synthesis considering information from the n-gram, syntactic, and semantic match. The formula for computing CodeBLEU is as follows: CodeBLEU $=\alpha\cdot\text{BLEU}+\beta\cdot\text{BLEU}_\text{weight}$ (1) $\qquad\qquad\qquad+\gamma\cdot\text{Match}_\text{ast}+\delta\cdot\text{Match}_\text{df}$ where BLEU is calculated by standard BLEU, BLEUweight is the weighted n-gram match, Matchast is the syntactic AST match, Matchdf is the semantic dataflow match. The weighted ngram match and the syntactic AST match are used to measure grammatical correctness, and the semantic data-flow match is used to calculate logic correctness. The values of *α, β, γ, δ* are all set as 0.25. Exact Match (EM) evaluates whether a generated sequence exactly matches the reference. If the characters of the sequence generated by the model exactly match the characters of the reference, EM = 1, otherwise EM = 0. ![11_image_1.png](11_image_1.png) ## C Low Resource Settings | Source | Methods | Target | | | |--------------------------------------------------|-------------------------------------|-------------------------|-------------------------|-------------| | Ruby | JS | Go | PHP | Java Python | | Ruby | Fine-tuning | - | 14.98 15.87 22.59 17.05 | 17.78 | | CodePrompt | - | 14.78 15.13 22.53 18.44 | 18.28 | | | JS | Fine-tuning 14.60 | - | 15.15 23.17 18.03 | 18.04 | | CodePrompt 15.68 | - | 15.48 23.32 18.92 | 18.83 | | | Go | Fine-tuning 11.94 12.43 | - | 18.61 15.39 | 13.41 | | CodePrompt 13.05 13.01 | - | 19.27 17.08 | 14.61 | | | PHP | Fine-tuning 15.09 15.46 14.75 | - | 17.23 | 18.35 | | CodePrompt 15.91 15.73 15.24 | - | 19.1 | 19.2 | | | Java | Fine-tuning 14.35 14.21 15.48 22.31 | - | 17.51 | | | CodePrompt 15.54 14.90 15.72 23.42 | - | 18.54 | | | | Python Fine-tuning 15.13 14.47 15.36 21.56 16.85 | - | | | | | CodePrompt 15.90 15.59 15.13 23.98 18.65 | - | | | | | CodeT5-base | 15.24 16.16 19.56 26.03 20.31 | 20.01 | | | ## D Cross Domain Settings Table 12: Comparison with task-specific template on code summarization task in the full data setting. **Task.** refers to task specific prompt template for summarization task proposed by Wang et al. (2022). Table 13: Comparison with task-specific template on code translation task in the full data setting. **Task.** refers to task-specific prompt template for translation task proposed by Wang et al. (2022). | Methods | Java to C# | C# to Java | | | |------------------|--------------|--------------|-------|-------| | BLEU | EM | BLEU | EM | | | CodeT5-base | 84.03 | 65.90 | 79.87 | 66.90 | | w/ Task. | 83.99 | 65.40 | 79.76 | 66.10 | | w/ Input. (ours) | 85.23 | 66.60 | 81.60 | 67.20 | Table 9: Smoothed BLEU-4 scores on code summarization task in cross-domain setting. ## E More Ablation Study We study ablation study on code summarization task in the full data setting. Table 10: Ablation study on code summarization task in the full data setting. | Methods | Ruby | JS | Go | PHP | Java | Python | |-----------------------------------------------|-------------------------------|-------|------|-------|--------|----------| | CodePrompt | 15.71 15.78 19.43 25.43 20.13 | 20.25 | | | | | | w/o Input. | 15.14 15.44 19.19 25.04 19.98 | 19.95 | | | | | | w/o Corpus. 
| 15.46 15.41 19.30 25.06 20.04 | 20.06 | | | | | | w/o Multi-Word. 15.50 15.45 19.39 25.10 20.06 | 20.06 | | | | | | We evaluated the effects of the prefix module in the encoder and decoder on the code summarization task in the full data setting. When the encoder or decoder prefix module was removed, the performance of the model decreased significantly. Additionally, we observed that removing the source prefix module caused a more critical performance degradation than removing the target prefix module. | Methods | Ruby | JS | Go | PHP | Java | Python | |------------------------------------------------|-------------------------------|-------|------|-------|--------|----------| | CodeT5-base | 15.24 16.16 19.56 26.03 20.31 | 20.01 | | | | | | w/ Task. | 14.70 15.85 19.35 25.79 19.89 | 19.77 | | | | | | w/ Input. (ours) 15.44 16.21 19.66 26.26 20.39 | 20.27 | | | | | | | Methods | Ruby | JS | Go | PHP | Java | Python | |--------------------------------------------------|-------------------------------|-------|------|-------|--------|----------| | CodePrompt | 15.71 15.78 19.43 25.43 20.13 | 20.25 | | | | | | w/o source. prefix 15.32 15.04 18.75 24.25 19.39 | 19.18 | | | | | | | w/o target. prefix | 15.34 15.40 19.01 24.61 19.79 | 20.01 | | | | | Table 11: Effects of prefix module on code summarization task in the full data setting. Table 12 and 13 present the performance comparison between task-specific templates and our approach for the summarization and translation tasks, respectively, in the full data setting. In the code summarization task, we re-implemented the method proposed in Wang et al. (2022) using the publicly available dataset used in our work to ensure a fair comparison. Additionally, for the code translation task, we presented the results as reported in the paper by (Wang et al., 2022). Our model demonstrated significantly more effective performance. ## F Expand To Codet5-Large | Methods | # Param Ruby | JS | Go | PHP | Java | Python | |-----------------------------|----------------|-------------------------------|-------|-------|--------|----------| | Fine-Tuning CodeT5-small | 60M | 14.87 15.32 19.25 25.46 19.92 | 20.04 | | | | | CodeT5-base | 222M | 15.24 16.16 19.56 26.03 20.31 | 20.01 | | | | | CodeT5-large | 737M | 15.58 16.17 19.69 26.49 20.74 | 20.57 | | | | | Prompt Tuning Prefix-tuning | 52M | 15.06 15.43 19.12 25.45 20.32 | 20.09 | | | | | CodePrompt | 52M | 15.79 15.53 19.36 25.74 20.61 | 20.35 | | | | Table 14: Results of CodeT5-large on code summarization task. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations ✓ A2. Did you discuss any potential risks of your work? Limitations ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and 1. Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 5. Experiment Results ✓ B1. Did you cite the creators of artifacts you used? 4. Experiment Setup ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 4. Experiment Setup and Appendix A B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? 
For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4. Experiment Setup ## C ✓ **Did You Run Computational Experiments?** 5. Experiment Results And Appendix A ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 5. Experiment Results and Appendix A The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✗ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? We used the same hyperparameter as the previous study. ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? We adopted the median value among the 3 models. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4. Experiment Setup and Appendix A, B D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
deshpande-etal-2023-honey
Honey, I Shrunk the Language: Language Model Behavior at Reduced Scale
https://aclanthology.org/2023.findings-acl.326
In recent years, language models have drastically grown in size, and the abilities of these models have been shown to improve with scale. The majority of recent scaling laws studies focused on high-compute high-parameter count settings, leaving the question of when these abilities begin to emerge largely unanswered. In this paper, we investigate whether the effects of pre-training can be observed when the problem size is reduced, modeling a smaller, reduced-vocabulary language. We show the benefits of pre-training with masked language modeling (MLM) objective in models as small as 1.25M parameters, and establish a strong correlation between pre-training perplexity and downstream performance (GLUE benchmark). We examine downscaling effects, extending scaling laws to models as small as ~1M parameters. At this scale, we observe a break of the power law for compute-optimal models and show that the MLM loss does not scale smoothly with compute-cost (FLOPs) below $2.2 \times 10^{15}$ FLOPs. We also find that adding layers does not always benefit downstream performance. Our filtered pre-training data, reduced English vocabulary, and code are available at https://github.com/text-machine-lab/mini_bert
# Honey, I Shrunk The Language: Language Model Behavior At Reduced Scale

Vijeta Deshpande1, Dan Pechi2, Shree Thatte1, Vladislav Lialin1, Anna Rumshisky1,3
1University of Massachusetts Lowell, Computer Science Department
2New York University, Center for Data Science
3Amazon Alexa AI
{vijeta_deshpande,shree_thatte}@student.uml.edu, [email protected]
{vlialin,arum}@cs.uml.edu

## Abstract

In recent years, language models have drastically grown in size, and the abilities of these models have been shown to improve with scale. The majority of recent scaling laws studies focused on high-compute, high-parameter-count settings, leaving the question of when these abilities begin to emerge largely unanswered. In this paper, we investigate whether the effects of pre-training can be observed when the problem size is reduced, modeling a smaller, reduced-vocabulary language. We show the benefits of pre-training with the masked language modeling (MLM) objective in models as small as 1.25M parameters, and establish a strong correlation between pre-training perplexity and downstream performance (GLUE benchmark). We examine downscaling effects, extending scaling laws to models as small as ~1M parameters. At this scale, we observe a break of the power law for compute-optimal models and show that the MLM loss does not scale smoothly with compute cost (FLOPs) below $2.2 \times 10^{15}$ FLOPs. We also find that adding layers does not always benefit downstream performance.¹

## 1 Introduction

In the past few years, large language models (LLMs) have grown ever larger (Brown et al., 2020; Shoeybi et al., 2019; Chowdhery et al., 2022; Fedus et al., 2022), and the emergent abilities of these models improve with scale. While several studies have looked at the relationship between model size, the amount of training, and performance for LLMs (Kaplan et al., 2020; Hoffmann et al., 2022), the main focus has been on scaling laws for high-compute settings. Very few studies have considered the effects of pre-training at a smaller scale (Turc et al., 2019; Huebner et al., 2021). Thus, the question of when exactly model abilities begin to emerge remains largely unanswered.

¹Our filtered pre-training data, reduced English vocabulary, and code are available at https://github.com/text-machine-lab/mini_bert.

In this study, we were interested in understanding whether the emergent phenomena can be observed at a drastically reduced scale, and what the relationship is between upstream and downstream performance at this scale. We also wanted to examine model shapes, configurations, and other factors that might affect whether we see the benefits of pre-training when downscaling a model. Smaller models have been shown to do poorly when trained even on large volumes of data (Turc et al., 2019), which makes studying downscaling non-trivial. However, during language acquisition, humans are exposed to a reduced-size language before gradually expanding their vocabulary, yet they become fluent even when their vocabulary is limited. Taking our cue from humans, we explore the hypothesis that reducing language size might allow us to observe the effects of pre-training in small models.
There has been one previous attempt to reduce language size (Huebner et al., 2021), but it was quite limited: one reduced-size Transformer encoder was trained with a non-standard version of masked language modeling (MLM) loss on a relatively small corpus of child-directed speech and evaluated for its ability to pass linguistic tests from a custom grammar test suite. We use a vocabulary of 21,000 words derived from AO-CHILDES (Huebner and Willits, 2021), a corpus of child-directed speech, to create a filtered corpus containing a subset of standard pre-training corpora: C4 (Raffel et al., 2020), Wikipedia, Book Corpus (Zhu et al., 2015), and others. We pretrain over 70 Transformer encoder models in the 1-100M parameter range, varying model shape, and configuration, including the number of layers, hidden size, the number of attention heads, and the feed-forward layer dimension (intermediate size). We fine-tune and evaluate a series of checkpoints at different FLOPs count on the subset of GLUE filtered with the same vocabulary. We present evidence that for a realistically downscaled language, the benefits of pre-training are observable even in smaller models with as few as 1.25M parameters. Our results also indicate that models with fewer layers achieve better performance on GLUE tasks. In contrast to Tay et al. (2022), we find a strong correlation between upstream and downstream performance (here, model perplexity and GLUE score). However, pre-training compute optimality does not appear to be crucial for downstream results. We also show that for compute-optimal models at this scale, parameter count does not reliably predict MLM loss, suggesting a limitation to scaling laws. We observe a departure from the FLOPs-Perplexity law, characterized by a sudden shift in the exponent value in the low-compute region of FLOPs ≤ 2.2 × 1015 (cf. Figure 1). This represents a divergence from previous observations regarding scaling laws (Kaplan et al., 2020; Hoffmann et al., 2022). ## 2 Related Work Scaling Laws Kaplan et al. (2020) has demonstrated and popularized power law dependency between language model parameter count and perplexity. This further motivated the existing trend for increasing model size (Brown et al., 2020; Chowdhery et al., 2022; Fedus et al., 2022). Investigation of smaller models has mostly focused on distillation (Sanh et al., 2019; Turc et al., 2019) and achieving the best performance given the parameter count. In contrast, we are interested in understanding at what scale pre-training works and the emergence of language model abilities to help downstream tasks. Changing pre-training data size via training token volume (Pérez-Mayos et al., 2021; Zhang et al., 2020), vocabulary size (Gowda and May, 2020), or the number of epochs (Voloshina et al., 2022) has also been explored for effects on language acquisition. These studies have generally demonstrated low-level linguistic tasks involving syntax require relatively little data volume compared to more complex tasks like WinoGrande (Sakaguchi et al., 2019). The relationship between model size, input data, and downstream performance remains the subject of much debate as to the nature of scaling laws. Hoffmann et al. (2022) concluded data size and model size should be scaled equally to achieve compute-optimal training for LLM's. Further credence to reducing LLM compute is lent by Sorscher et al. (2022), who found input data pruning can improve models to scale exponentially, and Chan et al. 
(2022), who showed LLM in-context learning derives from the Zipfian distribution of pre-training data. Most releavant to this work is Huebner et al. (2021) who found a small language model trained on child-directed speech can achieve comparable performance to larger LMs on a set of probing tasks. In contrast to them, we train multiple models, explore scaling in the low-compute region and evaluate on a filtered version of GLUE instead of a set of linguistic tests. ## 3 Methodology This section discusses the language simplification process, pre-training data, development of the data tokenizer, language model configuration, and pretraining objective in detail. ## 3.1 Simplifying Language To create a corpus of reduced English, we filter large text corpora based on a word vocabulary from AO-CHILDES (Huebner and Willits, 2021). The AO-CHILDES corpus contains English transcripts | Corpus name | Sentences (mil.) | Tokens (mil.) | |----------------------|--------------------|-----------------| | C43 | 3 | 427 | | C4 | 27 | 428 | | Book Corpus | 12 | 190 | | Wikipedia | 4.8 | 76 | | Simplified Wikipedia | 0.19 | 3 | | Children's Book Test | 0.08 | 1 | | Total | 47.07 | 1,125 | of child-directed speech. With the transcripts, we generate a vocabulary by removing special characters and tokenizing words by spaces. We also remove gibberish words present in the transcripts e.g. "bababa". With this process, we construct a set of 21 thousand unique words. ## 3.2 Pre-Training Data We filter data from five text corpora: Wikipedia, Simple English Wikipedia2, Book Corpus (Zhu et al., 2015), Children's Book Test (CBT) (Hill et al., 2015), and Common Crawl (C4) (Raffel et al., 2020), to obtain pre-training data. We filter C4 two ways: span level (110 words span size, 30 words stride) and sentence level. We select a text span (or sentence) to include in the pre-training data if and only if there are no words out of the vocabulary of interest, ignoring any numeric characters. For sentence-level filtration, we process text data on all five corpora. With sentence-level filtration, we collect approximately 44 million sentences which we concatenate to construct six million spans. The combination of both span- and sentence-level data filtration provided us with over nine million pretraining sequences of an average length of 127 BPE tokens. Finally, we split the filtered data into three sets: train, development, and test, of sizes nine million, 100 thousand, and 100 thousand, respectively. We provide the amount of filtered data from each text corpus in Table 1. For the rest of the paper, we use the word "vocabulary" to refer to the number of unique tokens instead of unique whitespace-separated words, unless otherwise mentioned. ## 3.3 Tokenizer Since we are working with a reduced language, commonly used subword vocabulary sizes for En-2https://simple.wikipedia.org 3Span-level filtering. glish might be suboptimal. We want to achieve a reasonable balance between over-splitting (splitting the words into units smaller than the smallest meaningful morphological units, e.g., splitting into characters) and under-splitting (e.g., retaining full words instead of splitting them into meaningful subwords). We conducted a series of experiments in which we tuned the vocabulary size to find the right balance. While varying the vocabulary size, we track two metrics for the tokenized text: *word-split ratio* and another metric we define, the *exact sub-token* matching score (ESMS). 
Word-split ratio is the number of tokens into which a word is split, where words are separated by whitespace. For example, if the word "cooking" is converted to "cook" and "ing", then the word-split ratio value is two. We measure and report the word-split ratio value for 5,000 examples sampled from the set of collected pre-training data without replacement. To measure ESMS, we compare the tokenizer performance with morpheme-based subword tokens. For example, in case of the word "cooking", we check whether the tokenizer is splitting the word into two tokens, 'cook' and 'ing'. For this purpose, we used a manually-curated list of 127 words with their corresponding morpheme-based sub-tokens, (see Table 5 in the Appendix for some examples). ESMS is computed as an exact match to the reference tokenization. For one example, it is equal to 1 if the word is tokenized exactly as in the reference and 0 in any other case. We experiment with three types of tokenizers, Byte-Pair Encoding (BPE) (Radford et al., 2019), WordPiece (Devlin et al., 2018), and SentencePiece (Raffel et al., 2020). Similar to the study conducted by FitzGerald et al. (2022), we select vocabulary size for each type of tokenizer by minimizing the absolute difference of word-split ratio compared to the reference tokenizer. We consider separate reference tokenizers for each tokenizer type. For BPE, WordPiece, and SentencePiece, we select pretrained tokenizers published by (Liu et al., 2019), (Devlin et al., 2018), and (Raffel et al., 2020), respectively, as our reference tokenizers. After selecting the vocabulary size for each tokenizer type, we select the tokenizer with the highest value of ESMS as our final choice. With the above-mentioned selection process, we find that the BPE tokenizer with a vocabulary size of 19,000 and ESMS of 0.2604, is the best-suited tokenizer for our study. We provide the results of our tokenizer selection experiments in Appendix B. ## 3.4 Model Architecture And Configuration The models we pre-train in our experiments closely follow the configuration setting of RoBERTa (Liu et al., 2019). We scale down and vary the model's hidden size, intermediate size (FFN hidden dimension size), number of hidden layers, and number of attention heads such that the total number of trainable parameters does not exceed 20 million. To separately control model hidden size and embedding size, we also add a linear layer, followed by a normalization layer (Ba et al., 2016) between the embedding block and the first Transformer layer. ## 3.5 Pre-Training Objective In our study, we pre-train models on a Masked Language Modeling (MLM) task (Devlin et al., 2018). We chose MLM instead of regular (causal) language modeling, because of its effectiveness for natural language understanding tasks at a smaller scale as demonstrated by BERT. We conducted an exploratory set of experiments to observe the effect of various MLM objective settings on validation perplexity. We found that using a random word replacement strategy and same-word replacement strategy doesn't improve the model at a small scale. Hence, to enable considerable learning in the limited parameter setting, we do not use random replacement and same-word replacement of the token selected for masking. In other words, we always replace the token selected for masking with the mask token <mask> before inputting it into the model. Otherwise, we adopt the same strategy as BERT pre-training by masking 15% of tokens. 
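As a concrete illustration of the masking scheme just described, the sketch below selects 15% of the tokens and always replaces them with the mask token, dropping the random-replacement and same-word-replacement branches of the original BERT recipe. The function and argument names are illustrative, not taken from the authors' code.

```python
import torch

def mask_tokens(input_ids, mask_token_id, special_token_ids, mlm_prob=0.15):
    """Build MLM inputs/labels: every sampled position is replaced by <mask>."""
    labels = input_ids.clone()
    # Never mask special tokens such as <s>, </s>, or <pad>.
    special = torch.tensor([[tok in special_token_ids for tok in row]
                            for row in input_ids.tolist()])
    probs = torch.full(input_ids.shape, mlm_prob)
    probs.masked_fill_(special, 0.0)
    masked = torch.bernoulli(probs).bool()
    labels[~masked] = -100                  # loss is computed on masked positions only
    inputs = input_ids.clone()
    inputs[masked] = mask_token_id          # always <mask>: no 10% random / 10% keep branches
    return inputs, labels
```

Compared with the standard BERT recipe, the only change is that the 80/10/10 replacement split collapses to 100% mask-token replacement.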
## 4 Experimental Setup In our experiments, we explore the relationship between training data size, model size (number of parameters), model shape, cost of training (FLOPs), and performance during pre-training and downstream. In the following subsections, we will discuss our strategy for exploring various model shapes followed by a discussion on hyperparameter settings in detail. ## 4.1 Exploration Of Model Configuration To investigate the impact of reduced model size, we start by scaling down the base configuration of RoBERTa (Liu et al., 2019) from its initial hidden size of 768 to 256, and the number of hidden layers and attention heads from 12 to 8. For intermediate layer size, we follow the common practice of setting it to a value four times that of the hidden size. We refer to this configuration as the anchor configuration. We pre-train a model with the anchor configuration and explore three values for embedding size, hidden size, intermediate size, number of hidden layers, and number of attention heads, varying one hyperparameter at a time. With such unidirectional exploration, we pre-train 16 models. We refer to this set of 16 models as **set-1**. To explore more model configurations, we randomly sample 16 configurations that are not included in set-1. For random sampling, we only explore values that are powers of two and are upper-bounded by 256, 256, 1024, 8, and 8, for the embedding size, hidden size, intermediate size, number of attention heads, and number of hidden layers, respectively. We refer to this set of 16 models as **set-2**. Furthermore, we pre-train 30 more models by performing unidirectional explorations of hidden size and the number of hidden layer values by anchoring other hyperparameter values. We refer to this set of 30 models as **set-3**. ## 4.2 Pre-Training For every model configuration, we keep the input sequence length fixed at 128 tokens. All models are initialized with a fixed random seed value of zero. Once initialized, we train the model for one epoch with a batch size of 256 for 35,000 weight updates. We use an inverse square root learning rate scheduler with 5% warmup. We conducted a few trials guided to decide the peak learning rate value for our experiments. We started with values higher than 6e-4, based on findings published by Liu et al. (2019) and Kaplan et al. (2020), and kept on reducing the learning rate until we observed a stable training loss curve. We observed 1 × 10−1to be suitable for models with more than 18 million parameters and 5×10−1 otherwise. For optimization, we use the AdamW optimizer (Loshchilov and Hutter, 2017) with β1, β2, and ϵ values set to 0.9, 0.95, and 10−8, respectively. After preliminary experiments we set the weight decay parameter to 0.01. Besides the learning rate, we keep all optimizer-related hyperparameters constant across all model configurations. For all dropout layers in the model, we adopt the same value of 10% as that of the RoBERTa model (Liu et al., 2019). ## 4.3 Fine-Tuning We evaluate pre-trained models on GLUE (Wang et al., 2018). Because our pre-trained data consists of a limited vocabulary, we fine-tune and test on GLUE task datasets with the same vocabulary filtering, in addition to unfiltered variants. For all tasks, we fine-tune our pre-trained models for 5 epochs and report the performance of the best performance value on the validation set averaged over three seed values. For all fine-tuning experiments, we keep the batch size fixed at 32. 
Over the five training epochs, we vary the learning rate value with a linear scheduler with a warmup of 5%. We set the peak-learning rate within the range from 2e-5 to 2e-4 value, according to the task. In addition to these pre-trained models, we fine-tune and evaluate GLUE for randomly-initialized versions of pre-trained models as well. ## 4.4 Evaluation Metrics Pre-training For pre-training results, we measure and report the cross-entropy loss and perplexity on the test split of the data. We use the crossentropy loss and perplexity calculated on the development set for curve fitting. In both cases, we calculate the cross-entropy loss only for the masked tokens, and the perplexity value is calculated by exponentiating the cross-entropy loss value. We also calculate the FLOPs (compute cost) as defined by Hoffmann et al. (2022). We first calculate FLOPs per training sequence based on the model parameters (including the embedding parameters) and multiply it by the amount of training data seen by the model to get a total number of FLOPs. We provide a detailed formula of FLOPs calculation in Appendix A. Cost-effectiveness analysis We use the Incremental Cost-Effectiveness Ratio (ICER) (Bambha and Kim, 2004) to conduct cost-effectiveness analysis of different model configuration hyperparameters. We treat the FLOPs and model perplexity values as proxies for the expenditure and the outcome of expenditure, respectively. Therefore, We calculate the difference (the ∆ values) by comparing a model configuration with the next cheaper option (e.g., we compare the model with a hidden size of 2 8to the model with a hidden size of 2 7). For the specific case of increasing hidden size from 2 7to 2 8, ICER represents performance gain (reduction in perplexity) per additional FLOPs spent on increasing the hidden size value from 2 7 to 2 8. We calculate ICER values for four hyperparameters namely, embedding size, hidden size, intermediate size, and the number of hidden layers. Fine-tuning We use standard metrics for the GLUE benchmark: accuracy, Matthew's correlation score, and combined correlations score depending on the task. For our conducted experiments, we report the average value of all performance metrics across tasks. ## 5 Results And Discussion 5.1 Curve Fitting To assess the empirical relationship between model performance and data size, model size, and FLOP values, we fit a power curve of the form y = C · x e, separately, to model size, data size, and FLOP values. We only consider the compute-optimal instances for curve fitting. To find compute optimal instances, we first divide the FLOPs values into over 30 bins and fetch the checkpoint corresponding to the minimum value MLM loss for each FLOPs bin. We use an implementation of the Levenberg-Marquardt (Moré, 1978) algorithm provided under the *SciP y* library for curve fitting. We observe that the optimal values of the exponents for data size and model size are, −0.2459 and −0.2805. Note that these values are expected to be different from those in Kaplan et al. (2020), since we work with a different loss and a reducedvocabulary language. The small difference between both exponent values suggests that the MLM loss reduces with a similar pace for data and model scaling. Hence, in our downscaled problem, we find data and model scaling equally important for compute-optimality. Although, we find R24 values for both curves i.e., loss vs. data size and loss vs. model size, are low (Figures 2 and 3). 
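The power-curve fitting described above can be reproduced with SciPy's `curve_fit`, which uses the Levenberg-Marquardt algorithm for unbounded problems. The snippet below is a schematic: the arrays are made-up placeholders standing in for the per-FLOPs-bin compute-optimal (model size, MLM loss) pairs, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, c, e):
    # y = C * x^e, the functional form fitted to compute-optimal checkpoints
    return c * np.power(x, e)

# Placeholder data: per-FLOPs-bin minima of dev MLM loss vs. model size (illustrative only).
model_sizes = np.array([1.25e6, 2.5e6, 5e6, 1e7, 2e7])
best_losses = np.array([4.10, 3.80, 3.55, 3.35, 3.20])

# method="lm" selects the Levenberg-Marquardt algorithm (SciPy's default when no bounds are set).
(c_opt, e_opt), _ = curve_fit(power_law, model_sizes, best_losses,
                              p0=(100.0, -0.2), method="lm", maxfev=10000)
pred = power_law(model_sizes, c_opt, e_opt)
r2 = 1.0 - np.sum((best_losses - pred) ** 2) / np.sum((best_losses - best_losses.mean()) ** 2)
print(f"exponent e = {e_opt:.4f}, R^2 = {r2:.4f}")
```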
Moreover, we observe that for FLOPs values greater than 2 × 10^15, a power curve nearly perfectly predicts the compute-optimal MLM loss value for a given compute budget. In this region, we find the exponent for the FLOPs values to be −0.1412, with an R^2 value of 0.9888.

In our experiments, we find that a few model configurations remain effective over a long range of FLOP values. We highlight a couple of examples of such configurations in Figures 2 and 3. The occurrence of such a configuration causes a discontinuous transition of the MLM loss with respect to data size, model size, and FLOP values. We observe this effect to be more pronounced in the lower-FLOPs region. Consequently, a power curve does not fit in the region of lower FLOP values (≤ 2 × 10^15). With an exponent value of −0.0929, the best R^2 value we observe is 0.68.

In order to illustrate the effects of increasing the parameter count in the downscaled setting, we extended our pre-training experiments to include models of larger size in Figures 2 and 3. We observed that increasing the parameter count up to 100 million does not allow the model to beat the perplexity achieved by a smaller 16 million parameter model (see Appendix D).

## 5.2 Incremental Cost-Effectiveness Analysis

For the cost-effectiveness analysis, we focus on the pre-trained model configurations in **set-1**. We calculate the ICER values separately for four hyperparameters: embedding size, hidden size, intermediate size, and the number of hidden layers. We arrange the models in increasing order of FLOPs and calculate ICERs by comparing each model to the next cheapest option. The ICER values presented in Table 2 represent the performance gain (reduction in perplexity) per additional expenditure of a billion FLOPs, scaling only one hyperparameter at a time.

| Model config. (E, H, I, L, A) | ICER |
|---------------------------------|--------|
| (256, 32, 1024, 8, 8) | - |
| (256, 64, 1024, 8, 8) | 3.0075 |
| (256, 128, 1024, 8, 8) | 0.8316 |
| (256, 256, 1024, 8, 8) | 0.2411 |
| (256, 256, 1024, 1, 8) | - |
| (256, 256, 1024, 2, 8) | 2.6271 |
| (256, 256, 1024, 4, 8) | 0.6810 |
| (256, 256, 1024, 8, 8) | 0.2089 |
| (32, 256, 1024, 8, 8) | - |
| (64, 256, 1024, 8, 8) | 0.6277 |
| (128, 256, 1024, 8, 8) | 0.2105 |
| (256, 256, 1024, 8, 8) | 0.1669 |
| (256, 256, 128, 8, 8) | - |
| (256, 256, 256, 8, 8) | 0.4002 |
| (256, 256, 512, 8, 8) | 0.4127 |
| (256, 256, 1024, 8, 8) | 0.1970 |

We observe the highest ICER value of 3.0075 for scaling the hidden size from 32 to 64 (see Table 2). For further scaling of the hidden size, from 64 to 128 and from 128 to 256, the ICER values drop by at least 3× for each increment. Despite these rapidly decreasing values, the ICERs for hidden size were always the highest, making it the most cost-effective choice. Comparably high ICERs were observed for scaling the model by increasing the number of hidden layers. For increasing the number of hidden layers from one to two, we record an ICER of 2.6271. This value reduces to 0.6810 when scaling the model from two layers to four layers, and to 0.2089 when scaling from four layers to eight layers.
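The ICER bookkeeping itself is simple; a sketch is given below. The FLOPs/perplexity pairs are taken from the hidden-size rows of Table 3, but the normalization of ∆FLOPs (per billion FLOPs here) is our assumption, so the printed values are illustrative rather than a reproduction of Table 2.

```python
# Sketch of the incremental cost-effectiveness computation of Eq. (1):
# each configuration is compared with the next cheaper option along one hyperparameter.
def icer_chain(configs):
    """configs: list of (total_flops, perplexity), sorted by increasing FLOPs."""
    icers = [None]  # the cheapest configuration has nothing to compare against
    for (cost_prev, ppl_prev), (cost_next, ppl_next) in zip(configs, configs[1:]):
        delta_outcome = ppl_prev - ppl_next          # reduction in perplexity
        delta_cost = (cost_next - cost_prev) / 1e9   # additional FLOPs, in billions (assumed unit)
        icers.append(delta_outcome / delta_cost)
    return icers

# Hidden-size sweep 32 -> 64 -> 128 -> 256, with FLOPs and perplexity from Table 3.
hidden_sweep = [(42e15, 10.42), (50e15, 7.56), (69e15, 5.88), (110e15, 4.80)]
for (flops, ppl), icer in zip(hidden_sweep, icer_chain(hidden_sweep)):
    print(f"FLOPs={flops:.2e}  perplexity={ppl:5.2f}  ICER={icer}")
```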
We find the ICER values for embedding size and intermediate size to be significantly lower than those for hidden size and the number of hidden layers. The differences between ICERs were higher for the lower values of each hyperparameter, although, when all hyperparameter values reached their corresponding highest values, the differences in ICERs diminished. A comparison between the ICER values for embedding size and intermediate size shows that increasing the embedding size from 32 to 64 brings 0.2275 more improvement in perplexity per million FLOPs than increasing the intermediate size from 128 to 256. However, for all further increments in the hyperparameter values, increasing the intermediate size results in an ICER at least 0.03 higher than that for the embedding size.

## 5.3 Downstream Evaluation

We report fine-tuning performance on the vocabulary-filtered GLUE benchmark in Table 3 (cf. Section 3.1). For reference, we also report performance on unfiltered GLUE. We find that GLUE performance peaks for models with 2 and 4 hidden layers, with average scores of 59.39 and 60.67, respectively. Interestingly, we find that the average GLUE score decreases to 56.29 for the model with 8 hidden layers. Such a reduction is not observed when increasing the hidden size or the embedding size. As expected, models consistently demonstrate better performance on vocabulary-filtered GLUE. We also see that model performance is strongest for models with 2 and 4 hidden layers when assessed on vocabulary-filtered GLUE.

To assess whether pre-training effects are beneficial in the downscaled setting, we compare the average GLUE score of each pre-trained model with the score of the same model fine-tuned without pre-training. Table 3 shows that for each model shape, the pre-trained model outperforms its respective randomly initialized counterpart. Our results show that in a reduced-vocabulary setting, the advantages of pre-training are observable even for smaller models, starting with a 1.25M parameter count.

| Model config. (E, H, I, L, A) | Model size (mil. parameters) | FLOPs (×10^15) | Perplexity | GLUE score (unfiltered) | GLUE score (filtered) | GLUE score (filtered, w/o PT) |
|---|---|---|---|---|---|---|
| (256, 256, 1024, 8, 8) | 16.24 | 110 | 4.80 | 51.73 | 56.29 | 40.24 |
| (32, 256, 1024, 8, 8) | 11.89 | 80 | 5.60 | 46.99 | 48.39 | 49.80 |
| (64, 256, 1024, 8, 8) | 12.51 | 84 | 5.31 | 45.12 | 51.09 | 51.99 |
| (128, 256, 1024, 8, 8) | 13.75 | 92 | 5.11 | 48.07 | 52.73 | 52.09 |
| (256, 32, 1024, 8, 8) | 6.10 | 42 | 10.42 | 47.98 | 50.23 | 45.93 |
| (256, 64, 1024, 8, 8) | 7.34 | 50 | 7.56 | 49.34 | 53.78 | 50.67 |
| (256, 128, 1024, 8, 8) | 10.04 | 69 | 5.88 | 51.16 | 55.63 | 50.60 |
| (256, 256, 128, 8, 8) | 7.63 | 85 | 5.61 | 57.65 | 57.18 | 41.62 |
| (256, 256, 256, 8, 8) | 8.15 | 88 | 5.45 | 54.91 | 56.48 | 41.28 |
| (256, 256, 512, 8, 8) | 9.20 | 96 | 5.12 | 51.60 | 57.36 | 40.55 |
| (256, 256, 1024, 1, 8) | 10.71 | 73 | 7.60 | 50.87 | 55.99 | 50.53 |
| (256, 256, 1024, 2, 8) | 11.50 | 79 | 6.07 | 54.31 | 59.39 | 51.17 |
| (256, 256, 1024, 4, 8) | 13.08 | 89 | 5.28 | 53.85 | 60.67 | 47.15 |
| (256, 256, 1024, 8, 1) | 16.24 | 110 | 4.74 | 50.47 | 53.68 | 40.30 |
| (256, 256, 1024, 8, 2) | 16.24 | 110 | 4.67 | 50.39 | 54.98 | 40.10 |
| (256, 256, 1024, 8, 4) | 16.24 | 110 | 4.75 | 49.55 | 54.21 | 39.66 |
| (32, 32, 128, 2, 2) | 1.27 | 8.57 | 20.07 | 46.25 | 49.03 | 48.68 |
| (32, 32, 128, 1, 1) | 1.25 | 8.60 | 23.40 | 44.97 | 48.22 | 47.98 |
| (32, 32, 64, 1, 1) | 1.25 | 8.71 | 23.42 | 44.91 | 47.20 | 48.69 |

We further fine-tune a set of 27 models, comprising a mix of compute-optimal and non-compute-optimal checkpoints, to better understand the relation between upstream and downstream performance. In Figure 4, we plot each model's GLUE score against its size and number of FLOPs, with color indicating the test perplexity of each model. We observe that, for a given parameter count, compute-optimal models do not necessarily outperform the undertrained models on the GLUE benchmark.

Lastly, considering all fine-tuning results together, we conduct a test to measure the correlation between perplexity (upstream performance) and average GLUE score (downstream performance). We find that the correlation between the average GLUE score on the unfiltered GLUE datasets and model perplexity is inconclusive, with a Spearman coefficient of -0.17 and a p-value of 0.28. On the other hand, we find that the average GLUE score calculated on the filtered GLUE datasets correlates highly with model perplexity, with a Spearman coefficient of -0.67 and a p-value ≤ 0.01.

## 5.4 Comparison With Unconstrained Text

To highlight the ability of smaller models to benefit from pre-training when the language size is reduced (rather than unconstrained), we also pre-trained 10 models on unconstrained language (i.e., without any vocabulary reduction). We provide the details of the data collection, tokenizer training, and the experimental setup in Appendix C.

Table 4 shows the relative performance figures for the models trained on unconstrained language. We fix the model configuration and report the change in performance relative to the constrained (limited-vocabulary) case. The training data size and hyperparameter settings for pre-training and fine-tuning are kept the same. Note that for the unconstrained case, there is a considerable increase in the model size due to the increase in the Byte-BPE vocabulary (from 19,000 to 29,000). The increased model size also increases the compute cost by at least 32.87%. Despite the increased model size and compute cost, no corresponding improvement in pre-training performance is observed, as shown in Table 4.
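Returning briefly to the correlation test reported at the end of Section 5.3, a minimal sketch is given below; the ten (perplexity, filtered GLUE) pairs are copied from Table 3 purely as example input, whereas the reported coefficients are computed over all fine-tuned checkpoints.

```python
from scipy.stats import spearmanr

# Spearman correlation between upstream perplexity and downstream (filtered) GLUE score.
perplexity = [4.80, 5.60, 5.31, 5.11, 10.42, 7.56, 5.88, 5.61, 5.45, 5.12]
filtered_glue = [56.29, 48.39, 51.09, 52.73, 50.23, 53.78, 55.63, 57.18, 56.48, 57.36]

rho, p_value = spearmanr(perplexity, filtered_glue)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")  # negative rho: lower perplexity, higher GLUE
```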
In fact, although the perplexity on reduced-vocabulary data decreases with model size, none of the model configurations studied reaches the MLM perplexity of the corresponding reduced-scale model when evaluated on the test split of the data.

| Model config. (E, H, I, L, A) | ∆ Model size (mil. parameters) | % ∆ FLOPs | % ∆ Perplexity | % ∆ GLUE score (filtered) | % ∆ GLUE score (unfiltered) |
|---|---|---|---|---|---|
| (256, 256, 1024, 8, 8) | 5.13 | 32.87 | 23.96 | -18.28 | -11.79 |
| (256, 32, 1024, 8, 8) | 2.89 | 47.59 | 17.86 | -3.58 | 2.15 |
| (256, 64, 1024, 8, 8) | 3.21 | 44.01 | 16.16 | -3.32 | 1.60 |
| (256, 128, 1024, 8, 8) | 3.85 | 38.99 | 20.85 | -4.97 | -0.08 |
| (256, 256, 1024, 1, 8) | 5.13 | 49.44 | 18.90 | -15.39 | -1.90 |
| (256, 256, 1024, 2, 8) | 5.13 | 46.12 | 17.24 | -18.99 | -7.80 |
| (256, 256, 1024, 4, 8) | 5.13 | 40.66 | 16.93 | -19.93 | -7.61 |
| (32, 32, 128, 2, 2) | 0.65 | 51.23 | 18.90 | -8.69 | -1.00 |
| (32, 32, 128, 1, 1) | 0.65 | 51.89 | 25.79 | -9.06 | 1.19 |
| (32, 32, 64, 1, 1) | 0.65 | 52.07 | 14.78 | -8.94 | -2.05 |

Since perplexity values are directly impacted by the increase in the Byte-BPE vocabulary size, we also evaluate the unconstrained-data models on the GLUE benchmarks for a fairer comparison. Similar to the pre-training results, we observe a consistent degradation of performance on the filtered versions of the GLUE benchmark. For all model configurations considered, the average GLUE score on the filtered datasets decreases by up to ≈ 20% due to pre-training on unconstrained data. On the other hand, for the unfiltered versions of the GLUE datasets, we do not expect models trained on limited-vocabulary data to do well. However, we find that the average GLUE score on unfiltered datasets improves in only three of the 10 model configurations we considered. These results further confirm that limiting the vocabulary benefits models with ≤ 22 million parameters.

One possible explanation for this is the relative contribution of embedding parameters to model size. In an unconstrained setting, vocabulary embedding parameters account for most of the model, with no parameters left for transformer blocks.⁶ There is prior work showing that pre-training loss improves only minimally for models with zero transformer blocks (Kaplan et al., 2020). Thus, constraining the vocabulary allows one to increase transformer block capacity while otherwise maintaining a small parameter count.

⁶ For example, for a model with 10M parameters, if the embedding size is 200 with a vocabulary of 50,000, no transformer block can be added. But if the vocabulary size is 20,000, 6M parameters can be used in transformer blocks.

## 6 Conclusions & Future Work

In this study, we investigated whether reducing language size allows the benefits of pre-training to be observed in a downscaled setting for models with ≤ 20M parameters. We evaluated a range of model configurations and found that the advantages of pre-training are observable even for models with as few as 1.25M parameters, with a strong correlation between upstream and downstream performance. However, we also observed that compute-optimal training does not appear to be crucial for downstream results and that parameter count does not reliably predict upstream performance. Furthermore, we observed a break of the FLOPs–perplexity power law in the 2.2 × 10^15 FLOPs region, which shows the limited applicability of scaling laws.
Overall, our experiments provide insight into the behavior of small language models in a downscaled language setting. The next logical step as a follow-on to this work would be to check whether generative models demonstrate any emergent abilities in a downscaled setting.

## 7 Limitations

While we do explore a range of models in the 1-20M parameter space, our work does not constitute a complete study of downscaling. In this work, we aimed to explore the more fundamental components of model shape, model size, and input data. However, our findings may not generalize to other models with alternative applications of downscaling methods. Considering it to be out of scope for this study's assessment of pre-training effects, we did not compare our results to knowledge distillation methods of similar model shape and size. Furthermore, our exploration of model shape and size was limited to a model's hidden size, number of hidden layers, embedding size, intermediate size, and number of attention heads, as these are the most commonly tuned hyperparameters.

Our usage of vocabulary filtration as a means of downscaling input data size may not be the most effective means of limiting input data. While shown to be effective, alternative approaches to input data manipulation, such as curriculum learning and data pruning, merit study beyond the scope of this paper.

## Ethics Statement

Our exploration of smaller language models presents a number of implications for accessibility, environmental impact, and cost. By exploring models in the space of 1-20M parameters, our findings can inform language modeling work for those without access to large, GPU-enabled environments. This is important as it can encourage further research work in this space by those who are otherwise unable to work with SoTA LLMs. We acknowledge that our resources enabled the breadth of study in this paper; most of this study was conducted using a single GPU. This consideration underscores our commitment to improving accessibility for under-resourced technologists throughout the world. Furthermore, in working with downscaled LLMs, we hope to encourage methods that reduce the overall carbon footprint and bolster sustainable practices in NLP. These considerations are especially important given the particular burden placed on those with limited access to electricity and technology. Running and experimenting with such models may prove quite costly in terms of person-hours and compute resources. As such, we hope our work at a smaller scale can help lessen these burdens and positively impact the lives of technologists and others. Any model from this study can be trained in less than a day on a single consumer-grade GPU.

## References

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. *arXiv preprint arXiv:1607.06450*.

Kiran Bambha and W Ray Kim. 2004. Cost-effectiveness analysis and incremental cost-effectiveness ratios: uses and pitfalls. *European Journal of Gastroenterology & Hepatology*, 16(6):519–526.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Stephanie CY Chan, Adam Santoro, Andrew Kyle Lampinen, Jane X Wang, Aaditya K Singh, Pierre Harvey Richemond, James McClelland, and Felix Hill. 2022. Data distributional properties drive emergent in-context learning in transformers. In *Advances in Neural Information Processing Systems*. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. *arxiv:2204.02311*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. William Fedus, Barret Zoph, and Noam Shazeer. 2022. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. Journal of Machine Learning Research, 23(120):1–39. Jack FitzGerald, Shankar Ananthakrishnan, Konstantine Arkoudas, Davide Bernardi, Abhishek Bhagia, Claudio Delli Bovi, Jin Cao, Rakesh Chada, Amit Chauhan, Luoxin Chen, et al. 2022. Alexa teacher model: Pretraining and distilling multibillion-parameter encoders for natural language understanding systems. Thamme Gowda and Jonathan May. 2020. Finding the optimal vocabulary size for neural machine translation. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 3955–3964, Online. Association for Computational Linguistics. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children's books with explicit memory representations. *arXiv preprint arXiv:1511.02301*. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. 2022. Training compute-optimal large language models. Philip A Huebner, Elior Sulem, Fisher Cynthia, and Dan Roth. 2021. Babyberta: Learning more grammar with small-scale child-directed language. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 624–646. Philip A Huebner and Jon A Willits. 2021. Using lexical context to discover the noun category: Younger children have it easier. In Psychology of learning and motivation, volume 75, pages 279–331. Elsevier. 
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. *arXiv preprint* arXiv:1711.05101. Jorge J Moré. 1978. The levenberg-marquardt algorithm: implementation and theory. In *Numerical* analysis, pages 105–116. Springer. Laura Pérez-Mayos, Miguel Ballesteros, and Leo Wanner. 2021. How much pretraining data do language models need to learn syntax? Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Winogrande: An adversarial winograd schema challenge at scale. *Commun.* ACM, 64:99–106. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-lm: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053. Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, and Ari S. Morcos. 2022. Beyond neural scaling laws: beating power law scaling via data pruning. In *Advances in Neural Information Processing* Systems. Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Metzler. 2022. Scale efficiently: Insights from pretraining and finetuning transformers. In International Conference on Learning Representations. Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: On the importance of pre-training compact models. Ekaterina Voloshina, , Oleg Serikov, Tatiana Shavrina, and and. 2022. Is neural language acquisition similar to natural? a chronological probing study. In *Computational Linguistics and Intellectual Technologies*. RSUH. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Yian Zhang, Alex Warstadt, Haau-Sing Li, and Samuel R. Bowman. 2020. When do you need billions of words of pretraining data? Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In *The IEEE International Conference on Computer Vision (ICCV)*. ## A Calculation Of Compute Cost (Flops) We adopt the same approach for calculating the compute cost (FLOPs) as presented by Hoffmann et al. (2022). 
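A compact sketch of this accounting is given below; it follows the per-component formulas spelled out in the remainder of this appendix, with the key size assumed to be K = H/A. Note that, as printed, the feed-forward term carries no sequence-length factor; whether it should (as in Hoffmann et al.'s original accounting) is left as an assumption here.

```python
# Forward/backward FLOPs per training sequence, following the formulas below.
# Notation: S sequence length, V vocabulary size, E embedding size, H hidden size,
# I intermediate size, A attention heads, K key size, L layers.
def flops_per_sequence(S, V, E, H, I, A, K, L):
    c_emb = 2 * S * (V * E + E * H)          # embedding block
    c_att = 2 * 3 * S * H * (K * A)          # key, query, and value projections
    c_att += 2 * S * S * (K * A)             # key-query dot product
    c_att += 3 * S * S * A                   # softmax
    c_att += 2 * S * S * (K * A)             # query reduction
    c_att += 2 * S * H * (K * A)             # final linear layer
    c_int = 2 * (H * I + I * H)              # feed-forward layer, as printed below
    c_lmh = 2 * S * H * V                    # language-model head
    c_forward = c_emb + c_lmh + L * (c_att + c_int)
    return 3 * c_forward                     # backward pass costs twice the forward pass

def total_flops(n_updates, batch_size, **cfg):
    return n_updates * batch_size * flops_per_sequence(**cfg)

# Anchor configuration from Section 4.1 (K = H / A is an assumption).
print(total_flops(35_000, 256, S=128, V=19_000, E=256, H=256, I=1024, A=8, K=32, L=8))
```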
For notational convenience, we denote the sequence length, vocabulary size, embedding size, hidden size, intermediate size (hidden dimension of the feed-forward network in the transformer block), number of attention heads, key size (for the attention block), and number of layers by S, V, E, H, I, A, K, and L, respectively. The FLOPs for the forward pass are calculated as follows.

- Single embedding block:
$$C_{emb}=2\times S\times(VE+EH)$$
- Single attention block:
  - Cost of the key, query, and value projections:
  $$C_{att}=2\times3\times SH\times(KA)$$
  - Cost of the dot product operation of key and query:
  $$C_{att}\;{+}{=}\;2\times SS\times(KA)$$
  - Cost of the softmax operation:
  $$C_{att}\;{+}{=}\;3\times SS\times A$$
  - Cost of the query reduction:
  $$C_{att}\;{+}{=}\;2\times SS\times(KA)$$
  - Cost of the final linear layer:
  $$C_{att}\;{+}{=}\;2\times SH\times(KA)$$
- Single feed-forward layer:
$$C_{int}=2\times\left(HI+IH\right)$$
- Single language model head:
$$C_{lmh}=2\times SHV$$
- Single forward pass:
$$C_{forward}=C_{emb}+C_{lmh}+L\times(C_{att}+C_{int})$$
- Single backward pass:
$$C_{backward}=2\times C_{forward}$$
- Cost per training sequence:
$$C_{seq}=C_{forward}+C_{backward}$$

Therefore, we calculate the total compute cost as the number of parameter updates × batch size × $C_{seq}$.

## B Tokenizer Selection

## B.1 List Of Reference Words For ESMS

In Table 5 we provide the list of words we used for calculating the Exact Sub-token Matching Score (ESMS). We also provide the morpheme-based tokens per word and the maximum value of exact matches per word.

## B.2 Comparison Of Different Vocabulary Sizes And Tokenizer Types

We provide the results of our experiments to determine the best-suited tokenizer in this section. Table 6 provides the vocabulary size, corresponding word-split ratio, and ESMS value for the three types of tokenizers we evaluated. The bolded row is the final tokenizer we used in our pre-training experiments.

## C Comparison With Unconstrained Language

Our main experiments were conducted on language that is constrained by a predefined vocabulary. To study the effect of the applied vocabulary constraint in comparison with free text, we conduct a set of experiments on unconstrained language, i.e., without any vocabulary-based filtering. In the following subsections, we provide details of the data collection, tokenizer training, and pre-training process adopted.

## C.1 Pre-Training Data

Our objective in curating constrained language was solely to impose a vocabulary constraint. However, our filtering method (Section 3.2) resulted in constrained language comprised of non-consecutive text sequences. To address differences beyond vocabulary, we conducted unconstrained language collection using the following approach.
We divided all instances in a specific corpus into spans of 110 words and randomly sampled spans. The number of randomly sampled spans was determined to maintain the same data distribution across different corpora, as indicated in Table 1. This method aimed to minimize the impact of data features other than vocabulary. We gathered an equivalent number of training sequences (approximately nine million) as in the constrained pre-training data. Finally, we ensured a fair comparison of pre-training performance by using the same evaluation and test split for both pre-training datasets.

| Reference word | Morpheme sub-tokens | Maximum exact matches per word |
|---|---|---|
| Cooking | cook, ing | 2 |
| Dangerous | danger, ous | 2 |
| Pretext | pre, text | 2 |
| Fitness | fit, ness | 2 |
| Antisocial | anti, social | 2 |
| Podium | pod, ium | 2 |
| Universe | uni, verse | 2 |
| European | europ, ean | 2 |
| Decode | de, code | 2 |
| Subvert | sub, vert | 2 |
| Proactive | pro, active | 2 |
| Concentric | con, centr, ic | 3 |
| Octopus | octo, pus | 2 |

| Tokenizer name | Vocabulary size | Word-split ratio | ESMS |
|---|---|---|---|
| BPE (Radford et al., 2019) | 18,000 | 1.34 | 0.2868 |
| BPE (Radford et al., 2019) | 19,000 | 1.32 | 0.2604 |
| BPE (Radford et al., 2019) | 20,000 | 1.31 | 0.2490 |
| BPE (Radford et al., 2019) | Pre-trained | 1.32 | 0.1547 |
| WordPiece (Devlin et al., 2018) | 16,000 | 1.17 | 0.0339 |
| WordPiece (Devlin et al., 2018) | 17,000 | 1.17 | 0.0264 |
| WordPiece (Devlin et al., 2018) | 18,000 | 1.16 | 0.0188 |
| WordPiece (Devlin et al., 2018) | Pre-trained | 1.17 | 0.0339 |
| SentencePiece (Raffel et al., 2020) | 9,000 | 1.32 | 0.0301 |
| SentencePiece (Raffel et al., 2020) | 10,000 | 1.29 | 0.0226 |
| SentencePiece (Raffel et al., 2020) | 11,000 | 1.26 | 0.0188 |
| SentencePiece (Raffel et al., 2020) | Pre-trained | 1.29 | 0.0339 |

Table 6: Values of word-split ratio and ESMS for various tokenizer and vocabulary size settings.

## C.2 Tokenizer

After data curation, we conduct experiments with various tokenizers to finalize the tokenizer type and the size of the token vocabulary for the language model. These experiments were conducted in the same manner described in Section 3.3. The final tokenizer we select is the Byte-BPE tokenizer (Radford et al., 2019) with a vocabulary of 29,000 tokens (1.6× the vocabulary size for the constrained language). The word-split ratio and the ESMS (exact sub-token matching score) for the final tokenizer were 1.53 and 0.2339, respectively.

## C.3 Experimental Setup

After finalizing the token vocabulary, we measure the pre-training as well as the downstream performance of the models trained on unconstrained language. We focus on the model configurations explored in **set-1** (refer to Section 4.1). Furthermore, guided by our results in the ICER analysis (refer to Section 5.2), we only consider the model configurations that either perturb the hidden size or the number of layers in the model. With such a selection, we pre-train seven language models. In addition, we pre-train the smallest model configuration that highlighted the benefits of pre-training in our main experiment (refer to Table 3). Overall, we pre-train 10 models on the collected unconstrained language. We keep all the hyperparameter values the same as in our main experiments (refer to Sections 4.2 and 4.3).
For the comparison of the pre-training performance, we measure the MLM loss and perplexity values calculated on the test split of the limited-vocabulary pre-training data. For the comparison of the downstream performance, we fine-tune the final checkpoint of all pre-trained models on the GLUE tasks and record the average GLUE score, separately for the filtered and unfiltered versions of the GLUE datasets.

## D Training Larger Models

We continued our pre-training experiments with the constrained-language (limited-vocabulary) data to include larger models, i.e., models with more than 20 million parameters. We first set the anchor configuration to have embedding size, hidden size, intermediate size, number of layers, and number of attention heads equal to 512, 512, 2048, 8, and 8, respectively. After defining the anchor configuration, we follow the same approach of varying each configuration feature to explore and pre-train various models. However, for the training of larger models, we only focused on the hidden size and the number of layers.

We present the pre-training results of the larger models in Table 7. In the larger model configurations, we observe that the additional model parameters do not reduce the model perplexity below 4.67. However, within the set of larger models, we observe the expected reduction in perplexity values with an increase in model size.

For our main experiments, we calculated the training data size for the expected largest model size, i.e., 20 million parameters, based on the findings provided by Hoffmann et al. (2022). The data size value was between 500 and 600 million tokens. Note that this data size is the size required to train the 20 million parameter model 'compute-optimally'. Hence, we collected more data to observe the effect after the compute-optimal point. Finally, we collected approximately double the quantity of data (≥ 1100 million tokens). The findings provided by Hoffmann et al. (2022) are based on decoder-only models but, at the time of the experimentation, this was the best available guide for us to make a decision. Hence, we speculate that the size of our filtered pre-training data is not sufficient for the considerably larger models that we consider in this set of experiments. Therefore, we do not include the model configurations considered in this set of experiments in our main results and figures.

| Model config. (E, H, I, L, A) | Model size (mil. parameters) | FLOPs (×10^15) | Perplexity |
|---|---|---|---|
| (256, 256, 1024, 8, 2) | 16.24 | 110 | 4.67 |
| (512, 512, 2048, 8, 8) | 45.30 | 302 | 5.57 |
| (512, 256, 2048, 8, 8) | 25.41 | 174 | 6.47 |
| (512, 768, 2048, 8, 8) | 69.52 | 452 | 5.22 |
| (512, 1024, 2048, 8, 8) | 98.06 | 624 | 4.94 |
| (512, 512, 2048, 1, 8) | 23.23 | 158 | 13.27 |
| (512, 512, 2048, 2, 8) | 26.38 | 179 | 6.67 |
| (512, 512, 2048, 4, 8) | 32.69 | 220 | 5.75 |

## ACL 2023 Responsible NLP Checklist

## A For Every Submission:

✓ A1. Did you describe the limitations of your work?
6

✗ A2. Did you discuss any potential risks of your work?
Our study works with understanding models at a very small scale (<10M params). These models, unlike LLMs, do not present harm in terms of environmental impact or misuse as they are not as capable.

✓ A3. Do the abstract and introduction summarize the paper's main claims?
1

✓ A4. Have you used AI writing assistants when working on this paper?
We used ChatGPT and GPT-3 to help us rephrase our text to make it more naturally looking and fluent. After rephrasing we could carefully read and edit the output of the system if nesessary. In some cases we would completely discard the generated text in favor of manual text editing. Additionally, we used Grammarly to check the typos, grammar, and phrasing. These methods were used throughout the paper. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Results ✓ B1. Did you cite the creators of artifacts you used? I don't understand the question. If this refers to pre-trained models and datasets, then the methods and results section and a big chunk of the rest of the paper. ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We will provide the licence when the dataset is published. Also, legally, the license must include the names of the authors, which contradicts anonymity. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? We use open-access datasets that are not restricted. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? It will be provided upon dataset publication ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ## C ✓ **Did You Run Computational Experiments?** Results ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Results ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Experimental setup ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Results ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? It will be available in the published code, we can't publish code now according to the anonymity policy ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. 
Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
liu-etal-2023-communication
Communication Efficient Federated Learning for Multilingual Neural Machine Translation with Adapter
https://aclanthology.org/2023.findings-acl.327
Federated Multilingual Neural Machine Translation (Fed-MNMT) has emerged as a promising paradigm for institutions with limited language resources. This approach allows multiple institutions to act as clients and train a unified model through model synchronization, rather than collecting sensitive data for centralized training. This significantly reduces the cost of corpus collection and preserves data privacy. However, as pre-trained language models (PLMs) continue to increase in size, the communication cost for transmitting parameters during synchronization has become a training speed bottleneck. In this paper, we propose a communication-efficient Fed-MNMT framework that addresses this issue by keeping PLMs frozen and only transferring lightweight adapter modules between clients. Since different language pairs exhibit substantial discrepancies in data distributions, adapter parameters of clients may conflict with each other. To tackle this, we explore various clustering strategies to group parameters for integration and mitigate the negative effects of conflicting parameters. Experimental results demonstrate that our framework reduces communication cost by over 98{\%} while achieving similar or even better performance compared to competitive baselines. Further analysis reveals that clustering strategies effectively solve the problem of linguistic discrepancy and pruning adapter modules further improves communication efficiency.
# Communication Efficient Federated Learning For Multilingual Neural Machine Translation With Adapter

Yi Liu¹, Xiaohan Bi², Lei Li¹, Sishuo Chen², Wenkai Yang², and Xu Sun¹

¹National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University
²Center for Data Science, Peking University
[email protected] {bxh,wkyang}@stu.pku.edu.cn [email protected] {chensishuo,xusun}@pku.edu.cn

## Abstract

Federated Multilingual Neural Machine Translation (Fed-MNMT) has emerged as a promising paradigm for institutions with limited language resources. This approach allows multiple institutions to act as clients and train a unified model through model synchronization, rather than collecting sensitive data for centralized training. This significantly reduces the cost of corpus collection and preserves data privacy. However, as pre-trained language models (PLMs) continue to increase in size, the communication cost for transmitting parameters during synchronization has become a training speed bottleneck. In this paper, we propose a communication-efficient Fed-MNMT framework that addresses this issue by keeping PLMs frozen and only transferring lightweight adapter modules between clients. Since different language pairs exhibit substantial discrepancies in data distributions, adapter parameters of clients may conflict with each other. To tackle this, we explore various clustering strategies to group parameters for integration and mitigate the negative effects of conflicting parameters. Experimental results demonstrate that our framework reduces communication cost by over 98% while achieving similar or even better performance compared to competitive baselines. Further analysis reveals that clustering strategies effectively solve the problem of linguistic discrepancy and pruning adapter modules further improves communication efficiency.¹

¹ Our code is available at https://github.com/lancopku/FedMNMT

## 1 Introduction

Federated Learning (FL) (McMahan et al., 2017) provides a new training framework utilizing data from various clients without privacy leakage. In FL, the server receives models from clients trained with their local data, aggregates all parameters it has received to acquire a global model, and then sends the global model back to all clients to start the next training round. This characteristic enables FL to be widely applied in real-world scenarios (Ge et al., 2020; Roosta et al., 2021; Passban et al., 2022; Niu and Deng, 2022).

In recent years, federated multilingual neural machine translation (Fed-MNMT) has become a new training paradigm, making it feasible for most institutions to train MNMT models (Roosta et al., 2021; Passban et al., 2022). FL makes it possible to leverage corpora from other organizations without privacy problems, solving the problem that training an MNMT model requires collecting large-scale multilingual corpora, which is expensive, time-consuming, and often unaffordable for resource-constrained institutions. Therefore, Fed-MNMT is a secure and cost-effective alternative to conventional centralized training for the optimization of MNMT models.

However, the issue of communication cost is non-negligible when we introduce FL to neural machine translation. Unlike local centralized learning, federated learning requires frequent communication of model parameters between the server and clients. Therefore, the communication cost grows rapidly along with the increase in model size.
Nowadays, pre-trained language models are widely adopted as backbone models for MNMT, whose parameter counts are usually over 10^8, e.g., 611M for mBART-50 (Tang et al., 2020) and 1.2B for M2M100 (Fan et al., 2020). Considering the increasing number of clients in realistic scenarios, where new clients frequently appear, communication costs will severely hinder the efficient training of the entire Fed-MNMT system and thus make the application of FL to MNMT impractical.

To tackle this problem, we introduce the parameter-efficient tuning idea (Houlsby et al., 2019; Pfeiffer et al., 2021; Karimi Mahabadi et al., 2021) into Fed-MNMT. Specifically, we focus on adapters (Rebuffi et al., 2017; Houlsby et al., 2019), a popular technique for efficient tuning that only requires updating a lightweight adapter module. When training with adapters, a number of randomly initialized modules are inserted into the backbone models and fine-tuned on new data. Concretely, only the parameters of these modules are updated during training, so the number of parameters that need to be transferred between the server and clients is substantially reduced. As illustrated by the communication cost before and after introducing adapters in Figure 1, this approach significantly saves communication costs and enables practical applications of Fed-MNMT.

However, directly adding adapter modules to NMT models results in a performance decline, which was initially observed by Roosta et al. (2021) and is also confirmed by our experimental results. This phenomenon is attributed to the divergence of different language pairs. In Fed-MNMT, corpora in diverse languages from different clients are not independently and identically distributed (Non-I.I.D.), so directly aggregating parameters from clients leads to a decrease in the model's performance (Zhao et al., 2018).

Considering the adverse effect of conflicting parameters from diverse languages in Fed-MNMT, we introduce clustering strategies to alleviate this issue. The core idea is to cluster the clients according to the characteristics of their data and only conduct aggregation within each cluster, where clients share similar properties. Specifically, we cluster all clients with different language pairs based on language family, gradient similarity, and random assignment, respectively, and systematically compare the performance of the different clustering strategies on multilingual translation benchmarks. Figure 2 gives a general view of our training framework. Our experimental results show that clustering on adapters alleviates the data Non-I.I.D. problem and yields better performance in most cases. Overall, our work opens a new direction for future improvements on Fed-MNMT in the real world.

In conclusion, our primary contributions can be summarized as follows:

- Aware of the communication barrier in the training of Fed-MNMT models, we introduce a practical, efficient Fed-MNMT framework to enable real-world applications.
- By exploring adapters and clustering strategies for alleviating the undesirable effect of data discrepancy, we achieve comparable results while reducing communication cost by over 98% compared to vanilla Fed-MNMT.

## 2 Methodology

In this section, we first define the Fed-MNMT problem in § 2.1. Next, we elaborate on the adapter modules and the investigated clustering strategies in § 2.2 and § 2.3, respectively. Last, we provide a comparison of the communication costs of the original Fed-MNMT and our method in § 2.4.
## 2.1 Problem Formulation

For a Fed-MNMT problem, we suppose that the set of clients is $\{C_i\}_{i=1}^{N}$, where N > 1, and that client $C_i$ owns only one language pair $P_i$, whose source and target languages are $src_i$ and $tgt_i$, respectively, and a corresponding dataset $D_i=\{x_{ij}, y_{ij}\}_{j=1}^{n_i}$, where $n_i$ is the size of $D_i$. In each training round, the optimization target for $C_i$ is minimizing the cross-entropy loss between the ground truths $y_i$ and the model's predictions $\hat{y}_i$:

$${\mathcal{L}}_{i}=-\sum_{j=1}^{n_{i}}\sum_{k=1}^{l_{ij}}\log p\left({\hat{y}}_{ij}^{k}=y_{ij}^{k}\,|\,x_{ij}\right)\qquad(1)$$

After the t-th training round, all clients deliver their local parameters to the server. The server aggregates these parameters to obtain the initial parameters for the next round's training. A commonly adopted aggregation algorithm is FedAvg (McMahan et al., 2017), where the weighted average of clients' parameters is calculated according to the quantities of local data samples. Let Θ denote model parameters; the FedAvg algorithm can be formulated as:

$$\Theta^{t+1}=\sum_{i=1}^{N}\frac{n_{i}}{n}\Theta_{i}^{t},\qquad(2)$$

where $n=\sum_{i=1}^{N}n_{i}$. The aggregated parameters are then sent back to all clients to initialize their local models for the next round of training.

However, data sizes can vary sharply between low-resource and high-resource languages in Fed-MNMT, and FedAvg cannot deal with such data quantity skew well (Wang et al., 2020). Thus we change FedAvg, which calculates the weighted mean of different clients' parameters, to directly calculating the arithmetic mean of the parameters:

$$\Theta^{t+1}=\sum_{i=1}^{N}\frac{1}{N}\Theta_{i}^{t}\qquad(3)$$

We refer to this aggregation method as FedMean in our paper.

Considering the size of pre-trained multi-language models, the communication of model parameters between the server and clients is time-consuming. Inspired by recent progress in parameter-efficient tuning, we are interested in whether adapters can be used to improve efficiency in FL.

## 2.2 Adapter Modules

We introduce the bottleneck adapter (Houlsby et al., 2019) into pre-trained multilingual models. Following the settings of Houlsby et al. (2019) and Pfeiffer et al. (2020), we add adapter modules after the self-attention layer and the feed-forward network (FFN) layer of each encoder layer, and an additional adapter layer after the cross-attention layer of each decoder layer. During training, only the parameters of the adapters and the layer-norm modules are updated, so only a small proportion of parameters have to be communicated between the server and clients.

## 2.3 Client Clustering Strategies

Related research (Johnson et al., 2017; Firat et al., 2016) has shown that parameter sharing among different languages in MNMT boosts the model's performance, especially for low-resource languages. Motivated by the success of language clustering in MNMT (Tan et al., 2019), we introduce language-pair clustering into the Fed-MNMT problem, and we only allow inner-cluster parameter aggregation. Assuming that the multi-language model consists of an encoder and a decoder, we first conduct a clustering algorithm to obtain the set of encoder clusters $G_e=\{g_i\}_{i=1}^{m_e}$ and the set of decoder clusters $G_d=\{g_i\}_{i=1}^{m_d}$. Each cluster $g_i$ contains the IDs of the clients in this cluster. The detailed aggregation algorithm is shown in Algorithm 1. We explore the following three different clustering strategies.

Language families/groups. Chronopoulou et al.
(2022) have verified the strategy of sharing parameters within the same language family in the MNMT problem. We decide to use this strategy in the FL setting. We choose 8 languages belonging to 4 different language families from the TED2020 corpus, and 10 languages belonging to 4 different language groups, all part of the Indo-European language family, from the Europarl corpus. The clustering of the encoder depends on the language families/groups of the source languages, and the clustering of the decoder is decided by the target languages' families/groups. Languages from the same family or group will be clustered into the same group.

Algorithm 1: Inner-cluster Aggregation

Input: Encoder and decoder cluster sets $G_e$ and $G_d$; initial encoder and decoder parameters $\Theta_e^0$ and $\Theta_d^0$; clients set $\{C_i\}_{i=1}^N$; training rounds T.
Output: Encoder parameters $\{\Theta_{e,i}^T\}_{i=1}^N$; decoder parameters $\{\Theta_{d,i}^T\}_{i=1}^N$.

1. for i from 1 to N do
2.   Initialize $\Theta_{e,i}^0$ with $\Theta_e^0$;
3.   Initialize $\Theta_{d,i}^0$ with $\Theta_d^0$;
4. for t from 1 to T do
5.   for i from 1 to N do
       // local update of client i
6.     update $\Theta_{e,i}^{t-1}$ and $\Theta_{d,i}^{t-1}$ with local data;
     // inner-cluster aggregation of encoder parameters
7.   foreach g in $G_e$ do
8.     $\Theta_{e,g}^{t}=\sum_{id\in g}\frac{1}{|g|}\Theta_{e,id}^{t-1}$;
9.     foreach id in g do
10.      $\Theta_{e,id}^{t}=\Theta_{e,g}^{t}$;
     // inner-cluster aggregation of decoder parameters
11.  foreach g in $G_d$ do
12.    $\Theta_{d,g}^{t}=\sum_{id\in g}\frac{1}{|g|}\Theta_{d,id}^{t-1}$;
13.    foreach id in g do
14.      $\Theta_{d,id}^{t}=\Theta_{d,g}^{t}$;

Gradients. Unlike in centralized learning, clustering based on model parameters (Tan et al., 2019) is infeasible in Fed-MNMT due to privacy problems. Therefore, we use gradients as the clustering features instead. For each language pair, we use a pre-trained multi-language model to acquire an average gradient vector over all data samples; a clustering algorithm is then applied to the gradient vectors in order to separate clients into different groups. The number of parameters we use for gradient clustering is only about 131K for each client, which hardly introduces any extra communication cost.

Random clustering. We also test randomly separating all clients into different groups as a baseline for clustering strategies. In detail, we uniformly separate the clients and keep the numbers of clusters in the encoder and the decoder the same as those in the language families/groups strategy.

## 2.4 Communication Cost Comparison

Taking the mBART-50 model (Tang et al., 2020), a popular pre-trained multi-language model, as an example, the number of parameters is around 610.9M, which requires about 2.44GB of storage space in FP32 format. In comparison, after adding adapter modules, only about 8M parameters have to be transferred, which saves approximately 98.7% of the communication cost.

More concretely, we provide an approximation of the transmission time needed between the server and clients as follows. Assuming the maximum bandwidth of the server is 1000Mbps, the time to transfer the entire mBART model from a client to the server is around 2.44GB / 1000Mbps = 19.5 seconds. Assuming all clients share the bandwidth, the total transfer time grows linearly with the number of clients. The synchronization process for all clients to finish transferring their models to the server will therefore occupy a large proportion of the total training time. In our actual experiments with 12 clients, the theoretical total transfer time is about 19.5 × 12 = 234 seconds. However, for clients with low-resource languages, local training can be finished within 7 minutes, which means that transfer time occupies over half of the local training time.
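The back-of-the-envelope arithmetic above is easy to reproduce; a small sketch follows. The FP32 byte size per parameter and the 1000Mbps bandwidth mirror the numbers quoted in the text, and treating them as exact is an assumption for illustration.

```python
# Transfer-time estimate for full-model vs. adapter-only synchronization (Section 2.4).
def transfer_seconds(n_params, bandwidth_mbps=1000, bytes_per_param=4):
    bits = n_params * bytes_per_param * 8
    return bits / (bandwidth_mbps * 1e6)

full_model = 610.9e6     # mBART-50 parameters
adapter_only = 8e6       # adapter + layer-norm parameters actually transferred
n_clients = 12

for name, n in [("full model", full_model), ("adapter only", adapter_only)]:
    t = transfer_seconds(n)
    print(f"{name}: {t:.2f} s per client, {t * n_clients:.1f} s for {n_clients} clients")
```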
By contrast, the time to transfer the adapter's parameters is only about 0.26 seconds, which is negligible compared to the local training time, thus significantly improving training efficiency.

## 3 Experimental Setup

## 3.1 Datasets And Evaluation Metrics

We conduct experiments in two different settings: Multi-to-English and Multi-to-Multi (hereinafter referred to as "m2en" and "m2m", respectively). We use the **TED2020** corpus (Reimers and Gurevych, 2020) for the m2en setting and the **Europarl** corpus (Koehn, 2005) for the m2m setting. The TED2020 corpus is extracted from TED speeches and contains over 100 languages from around the world. The Europarl corpus is from the proceedings of the European Parliament and contains 21 languages of European countries. For each language pair,² we divide the original corpus into training, dev, and test sets in the proportion of 6:2:2. We further sample subsets from the divided datasets. To simulate the scenario of low-resource and high-resource languages, the training data size of each language pair varies according to the size of the corresponding original corpus. The specific language pairs we use and the corresponding data sizes are shown in Appendix A.

² Abbreviations for the languages we use: Chinese->zh, English->en, Thai->th, Arabic->ar, Hebrew->he, Finnish->fi, Estonian->et, Russian->ru, Slovene->sl, German->de, Dutch->nl, French->fr, Italian->it, Spanish->es, Polish->pl, Slovene->sl, Lithuanian->lt, Latvian->lv.

In the m2en setting, for the clustering strategies based on language families/groups and random shuffle, clustering algorithms are only applied to the encoder, and all clients share the decoder's parameters because their target languages are all English. For clustering based on gradients, however, we also cluster the decoder's parameters into different groups, and the number of groups stays the same as that in the encoder. In the m2m setting, clustering is conducted for both the encoder and the decoder. Meanwhile, the numbers of groups in the encoder and decoder are the same in all clustering strategies. We provide a further analysis of the clustering strategies in the m2m setting in § 4.2.

We choose the BLEU score as the evaluation metric, using the SacreBLEU (Post, 2018) package. Aside from the BLEU score on each language pair, we additionally report the macro-average and micro-average scores over all language pairs.

## 3.2 Baselines

We evaluate the following methods as baselines:

Centralized-model. The results of centralized training, where data from all clients are gathered together, using the original multi-language model without extra modules.

Centralized-adapter. The results of centralized training using the multi-language model with adapter modules.

Adapter-local. We train a model for each client using local data, without parameter aggregation with other language pairs.

Model-fed. We train the original multi-language model without adapter modules under the federated learning framework, where the parameters are shared among all clients using the aggregation algorithm in Eq. (3) without any clustering strategies.

Adapter-fed. In this method, adapter modules are attached to the backbone model, while the rest of the settings are the same as those in *model-fed*. This baseline corresponds to directly introducing adapters without any clustering strategies.

## 3.3 Training Setup

We choose the mBART-50 pre-trained model³ as our backbone model.
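To make the adapter setup of Section 2.2 concrete, below is a minimal PyTorch sketch of a bottleneck adapter with the hidden size of 64 used in our experiments; the ReLU activation, the class names, and the way it would be wired into mBART-50 are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-projection, non-linearity, up-projection, plus a residual connection."""
    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()  # activation choice is an assumption

    def forward(self, hidden_states):
        return hidden_states + self.up(self.act(self.down(hidden_states)))

def mark_trainable(model: nn.Module):
    """Freeze the backbone; train only adapter and layer-norm parameters."""
    for name, param in model.named_parameters():
        param.requires_grad = ("adapter" in name) or ("layer_norm" in name.lower())

# Usage sketch: wrap an encoder layer's output with an adapter.
d_model = 1024                      # mBART-50 hidden size
adapter = BottleneckAdapter(d_model)
x = torch.randn(2, 16, d_model)     # (batch, sequence, hidden)
print(adapter(x).shape)             # torch.Size([2, 16, 1024])
```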
To fairly compare the training and communication costs of different methods, we train each model for 5 rounds. We select the checkpoint with the lowest loss on the dev set and evaluate it on the test set. Parameters are aggregated every time all clients finish an epoch of local training. For every client, the batch size is 8 and the local model is updated every 16 steps. The local learning rate is 5 × 10−5 for the mBART model and 1 × 10−3 for the models with adapter modules. The hidden size of adapter modules is 64. For all experiments, we train the model with 3 random seeds and report the average scores. For the random clustering strategy, the clustering groups are different when using different random seeds.

## 4 Experimental Results

## 4.1 Primary Results And Findings

The experimental results in the m2en and m2m settings are shown in Table 1 and Table 2, respectively. In general, directly adding adapter modules leads to a performance drop (comparing *adapter-fed* and *model-fed*), and methods with clustering strategies all achieve better performance than the direct baseline *adapter-fed*, indicating the ability of our clustering strategies to alleviate data discrepancy. In both settings, the *adapter-families* method performs best in macro and micro average scores among the three clustering strategies, even surpassing *model-fed* in the m2m setting.

It is noteworthy that the clustering strategies achieve more significant performance improvements in the m2m setting than in the m2en setting. The problem of conflicting parameters is more nettlesome in the m2m setting because more languages (especially target languages) are involved. Thus, introducing clustering strategies brings more benefit to m2m translation tasks.

Meanwhile, we notice that our clustering strategies fail to beat *adapter-local* in the m2m setting. This can also be explained by the difference in the difficulty of the tasks. In the more complicated m2m translation task, more elaborate clustering strategies should be designed to take full advantage of other language pairs and avoid the influence of conflicting parameters. However, we bring the ability of multilingual translation to these clients through FL with an acceptable drop in performance compared to *adapter-local*.

| Method | Comm. Cost | zh-en | th-en | ar-en | he-en | fi-en | et-en | ru-en | sl-en | Macro Avg. | Micro Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| centralized-model | N/A | 24.72 | 28.97 | 38.29 | 43.59 | 32.81 | 32.70 | 30.14 | 47.92 | 34.89 | 32.33 |
| centralized-adapter | N/A | 24.97 | 21.47 | 39.02 | 45.05 | 33.62 | 32.62 | 30.65 | 50.45 | 34.73 | 32.03 |
| adapter-local | N/A | 25.16 | 21.68 | 39.46 | 44.32 | 33.12 | 32.64 | 30.58 | 50.93 | 34.73 | 32.15 |
| model-fed | 611M | 25.31 | 23.41 | 39.54 | 45.13 | 32.87 | 33.27 | 30.77 | 51.85 | 35.27 | 32.55 |
| adapter-fed | 8M | 25.12 | 16.64 | **39.62** | **44.93** | 33.14 | 32.66 | 30.41 | **53.62** | 34.52 | 31.71 |
| adapter-random | 8M | **25.37** | 21.48 | 39.61 | 44.82 | 33.26 | 33.21 | 30.75 | 52.64 | 35.14 | 32.38 |
| adapter-gradients | 8M | 25.26 | 21.26 | 39.39 | 44.64 | 33.62 | 33.14 | 30.72 | 51.16 | 34.90 | 32.21 |
| adapter-families | 8M | 25.28 | **21.58** | 39.45 | 44.70 | **33.87** | **33.23** | **30.92** | 52.64 | **35.21** | **32.40** |

Table 1: BLEU scores on the TED2020 corpus. Comm. Cost, which is short for communication cost, denotes the number of parameters communicated between the server and each client. *Adapter-random*, *adapter-gradients*, and *adapter-families* refer to the clustering strategies of random clustering, gradients, and language families/groups, respectively. The best result of each language pair is highlighted in **bold** (only methods with adapter modules trained in the FL setting are considered).

| Method | Comm. Cost | de-fr | nl-pl | en-lt | fr-nl | it-sl | es-lv | pl-en | sl-es | sl-lt | lt-de | lv-it | lv-pl | Macro Avg. | Micro Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| centralized-model | N/A | 30.43 | 19.15 | 28.86 | 23.20 | 19.96 | 27.17 | 40.35 | 33.20 | 21.24 | 21.84 | 23.63 | 20.57 | 25.80 | 26.10 |
| centralized-adapter | N/A | 30.59 | 19.07 | 29.65 | 22.92 | 18.68 | 27.73 | 41.52 | 33.20 | 21.45 | 22.16 | 23.40 | 20.85 | 25.93 | 26.19 |
| adapter-local | N/A | 30.88 | 19.19 | 30.31 | 23.50 | 20.01 | 28.12 | 41.84 | 33.39 | 21.13 | 22.27 | 23.62 | 21.05 | 26.28 | 26.56 |
| model-fed | 611M | 30.41 | 17.60 | 29.62 | 19.76 | 13.41 | 28.01 | 39.77 | 32.25 | 21.10 | 21.94 | 20.09 | 19.92 | 24.49 | 24.68 |
| adapter-fed | 8M | 29.75 | 17.47 | 30.08 | 16.92 | 11.85 | 28.01 | 38.18 | 31.06 | 20.18 | 21.23 | 18.02 | 19.97 | 23.56 | 23.53 |
| adapter-random | 8M | **30.90** | 18.57 | 30.04 | 22.39 | 16.53 | **28.25** | 36.97 | 33.04 | 21.13 | **22.28** | **22.72** | 20.37 | 25.27 | 25.67 |
| adapter-gradients | 8M | 30.14 | **19.69** | **30.19** | 19.53 | **19.14** | 28.06 | **41.73** | **33.31** | **21.51** | 21.60 | 19.95 | **21.29** | 25.51 | 25.35 |
| adapter-families | 8M | 30.60 | 19.31 | 30.12 | **22.65** | 16.69 | 27.99 | 41.33 | 33.21 | 21.31 | 22.04 | 22.68 | 21.21 | **25.76** | **26.04** |

Table 2: BLEU scores on the Europarl corpus. The meanings of symbols stay the same as those in Table 1.

## 4.2 Ablation Study

In the m2m setting, the adapter modules attached to the encoder and the decoder are clustered independently. To further explore the specific influence of clustering in these two modules on the model's performance, we apply the clustering strategy to only the encoder and only the decoder separately and show the results in Figure 3. We use language families/groups as the clustering strategy. All methods are trained in the m2m setting on the Europarl dataset with one random seed. Other settings are the same as those in our main experiments. To our surprise, clustering either the encoder or the decoder alone significantly improves performance compared to applying no clustering at all, and even surpasses *adapter-families*. We attribute this phenomenon to our naive clustering strategies for the encoder and the decoder. In *adapter-families*, the clustering of the encoder and the decoder is only related to the source and target languages, respectively. However, parameters in both the encoder and the decoder are influenced by source and target languages together during training. This inconsistency between the clustering strategy and the parameter updates results in the *adapter-families* method performing worse than *adapter-encoder* and *adapter-decoder*.

![5_image_0.png](5_image_0.png)

## 5 Further Analysis

## 5.1 Case Study

We select some representative cases of translations from the *adapter-families* and *adapter-fed* methods, which are shown in Table 3, to further study the influence of our clustering strategies. We separate the mistakes into three categories.

**Opposite semantics** In case 1 and case 2, *adapter-fed* misses negation adverbs in its predictions, resulting in totally opposite semantics.

**Inaccurate words/phrases** In case 3, *adapter-fed* translates the time adverbial as "a day or two", when it should be "four or five weeks". In case 4, *adapter-fed* uses the expression "save the world", which differs from the original expression "change the world" in semantics.

**Ambiguous semantics** In cases 5 and 6, *adapter-fed* loses specific semantic information in the ground truths. It fails to properly translate "children", "science", and "become a learner" and uses more ambiguous expressions instead.

In comparison, *adapter-families* makes more accurate predictions in the above cases, which suggests that appropriate clustering strategies help the model produce better translations with improvements in semantics.

| Method | Sentence |
|------------------|-----------------------------------------------------------|
| Ground truth | I think if somebody tells a lie, they're not just a liar. |
| adapter-families | I think if somebody tells a lie, they're not just a liar. |
| adapter-fed | I know that when people lie, they're just lying. |
| Ground truth | I never miss a single training session. |
| adapter-families | I didn't miss a single training session. |
| adapter-fed | I missed one training session. |
| Ground truth | And within four or five weeks, he can do it again. |
| adapter-families | And within four or five weeks, he can do it again. |
| adapter-fed | And within a day or two, he can do this. |
| Ground truth | So could art change the world? |
| adapter-families | Can art change the world? |
| adapter-fed | Is art about saving the world? |
| Ground truth | Over 100,000 children learn science this way. |
| adapter-families | Hundreds of thousands of children learn science this way. |
| adapter-fed | Hundreds of thousands of people learned how to do that. |
| Ground truth | And all at once I became a learner. |
| adapter-families | And all of a sudden, I became a learner. |
| adapter-fed | And all of a sudden, it's happening. |

## 5.2 **Both FedMean And Clustering Contribute**

In our experiments, discrepancies in data come from two aspects: data quantity skew and linguistic discrepancy (language difference). We adjust the aggregation algorithm from Eq. (2) to Eq. (3), which we call *FedMean* here, to tackle the problem of quantity skew. Moreover, we propose clustering strategies to prevent clients from receiving conflicting parameters from dissimilar languages. To explore how these two methods contribute to the improvement in performance, we further conduct experiments with the aggregation algorithm changed to FedAvg (see Eq. (2)) while keeping other training settings unchanged.

| Method | Aggregation | TED2020 Macro Avg. | TED2020 Micro Avg. | Europarl Macro Avg. | Europarl Micro Avg. |
|---|---|---|---|---|---|
| adapter-fed | FedAvg | 34.31 | 31.70 | 22.20 | 23.00 |
| adapter-fed | FedMean | 34.52 | 31.71 | 23.56 | 23.53 |
| adapter-random | FedAvg | 34.79 | 32.16 | 25.19 | 25.76 |
| adapter-random | FedMean | 35.14 | 32.38 | 25.27 | 25.67 |
| adapter-gradients | FedAvg | 34.46 | 31.96 | 25.64 | 25.84 |
| adapter-gradients | FedMean | 34.90 | 32.21 | 25.51 | 25.35 |
| adapter-families | FedAvg | 34.83 | 32.09 | 25.47 | 26.02 |
| adapter-families | FedMean | 35.21 | 32.40 | 25.76 | 26.04 |

As the results in Table 4 show, clustering strategies bring more significant improvements in performance on the Europarl corpus than on the TED2020 corpus. Since the experiments on the Europarl corpus involve more languages and are conducted in the more complicated m2m setting, the problem of linguistic discrepancy is more severe for Europarl. For the TED2020 corpus, changing the aggregation algorithm from FedAvg to FedMean leads to more significant improvements for methods with clustering strategies compared to *adapter-fed*. In contrast, for the Europarl corpus, *adapter-fed* substantially benefits from FedMean, while FedMean hardly brings any benefits to methods with clustering strategies, even causing a performance drop in some cases. Based on these observations, we contend that both the aggregation algorithm and the clustering strategies contribute to the performance gain by alleviating data discrepancies. The specific extent of improvement depends on the extent of data quantity skew and linguistic discrepancy.
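To make the difference between the two aggregation rules concrete, the sketch below shows a server-side aggregation step over adapter parameters in which each client only receives the aggregate of its own cluster. This is an illustrative reading of the paper: we assume Eq. (2) (FedAvg) weights clients by their local data size and Eq. (3) (FedMean) averages clients uniformly, and the function and variable names below are ours, not the authors'.

```python
from collections import defaultdict
import torch

def aggregate_adapters(client_states, client_sizes, clusters, mode="fedmean"):
    """Server-side aggregation of adapter parameters, restricted to each client's cluster.

    client_states: dict client_id -> adapter state_dict (param name -> tensor)
    client_sizes:  dict client_id -> number of local training samples
    clusters:      dict client_id -> cluster id (e.g., a language family/group)
    mode:          "fedavg"  -> data-size-weighted average (our reading of Eq. (2))
                   "fedmean" -> uniform average over clients (our reading of Eq. (3))
    """
    members = defaultdict(list)
    for cid, group in clusters.items():
        members[group].append(cid)

    cluster_state = {}
    for group, cids in members.items():
        if mode == "fedavg":
            w = torch.tensor([float(client_sizes[c]) for c in cids])
            w = w / w.sum()
        else:  # "fedmean": every client in the cluster counts equally, regardless of data size
            w = torch.full((len(cids),), 1.0 / len(cids))
        agg = {}
        for name in client_states[cids[0]]:
            stacked = torch.stack([client_states[c][name].float() for c in cids])
            agg[name] = (w.view(-1, *([1] * (stacked.dim() - 1))) * stacked).sum(dim=0)
        cluster_state[group] = agg

    # Each client only receives the aggregate of its own cluster, never of dissimilar languages.
    return {cid: cluster_state[group] for cid, group in clusters.items()}
```

For instance, under the language-family strategy on TED2020, zh-en and th-en would be averaged together (Sino-Tibetan) while fi-en and et-en form a separate group (Uralic), following the grouping listed in Appendix A.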
## 5.3 Further Cost Saving By Adapter Pruning On top of the adapter tuning approach, adapter pruning techniques (Rücklé et al., 2021; Pfeiffer et al., 2021; Karimi Mahabadi et al., 2021) further compress the number of parameters to be updated. To further reduce the communication cost, we conduct an exploratory attempt to prune parts of adapter modules in both the encoder and the decoder. We still choose the mBART-50 model, with 12 layers in the encoder and the decoder separately, to conduct the experiments. Specifically, we evenly separate all adapter modules we add in mBART into three parts: input-end adapters (adapter modules in the first 4 layers of the encoder or the decoder), middle-layer adapters (adapter modules in layers 5 to 8 of the encoder or the decoder), outputend adapters (adapter modules in the last 4 layers of the encoder or the decoder). In each strategy, only one part of the adapter modules is kept, so the communication cost is saved by two-thirds. We use adapter-families as the baseline and train all models with one random seed. The rest settings stay the ![7_image_0.png](7_image_0.png) Table 5: BLEU scores of different adapter pruning strategies. We acquire similar results with only 1/3 communication cost compared to keeping all adapter modules. same as those in previous main experiments. The results are shown in Table 5. It is encouraging that pruning adapters do not result in a sharp decrease in performance. We observe that keeping output-end adapters achieves the highest score among the three pruning strategies, which suggests that adapters in the top layers play more important roles. Overall, the results indicate that it is possible to further reduce communication costs and it is worthwhile to explore more elaborate pruning techniques in future work. ## 6 Related Work Federated Learning was first proposed by McMahan et al. (2017) as a decentralized training framework. Due to its decentralized and private nature, FL shows great potential in actual applications. Recently, there has been a surge in the NLP community to explore the application of federated learning in diverse NLP tasks, such as emojis prediciton (Gandhi et al., 2022), named entity recognition (Ge et al., 2020), and machine translation (Roosta et al., 2021; Passban et al., 2022; Weller et al., 2022), etc. Roosta et al. (2021) first applied FL to NMT tasks. However, training language models in the FL setting brings huge communication overheads. To solve this problem, researchers have proposed to only exchange some dedicated "Controller" (Roosta et al., 2021) layers between the server and clients. Moreover, Passban et al. (2022) introduced parameter pruning strategies to reduce communication bandwidth. Our methods with adapter modules have advantages in communication efficiency with fewer parameters to be transferred compared to Controller (see Appendix D), and other parameter pruning strategies can also be applied to our adapter modules to further reduce communication costs. Multilingual Neural Machine Translation (MNMT) trains a single model to handle translation between multiple language pairs (Johnson et al., 2017; Aharoni et al., 2019; Zhang et al., 2020). Moreover, MNMT significantly reduces training and inference costs by eliminating the need to train models for each language pair. Massively pre-trained multilingual models have been used for MNMT, such as mBART-50 (Tang et al., 2020) and M2M100 (Fan et al., 2021). 
In recent years, adapter has become a popular method in MNMT (Bapna and Firat, 2019; Cooper Stickland et al., 2021; Philip et al., 2020; Üstün et al., 2021; Chronopoulou et al., 2022) due to its high parameter efficiency and transferability between tasks. Different from previous works on this topic, inspired by recent progress in improving the efficiency of NLP methods (Strubell et al., 2019; Li et al., 2021a; Xu et al., 2021; Li et al., 2021b), we focus on communication efficiency in FL-MNMT and make the first effort to introduce adapter modules in order to reduce communication costs. We also apply different clustering strategies to resolve the issue of conflicting parameters stemming from data discrepancy. ## 7 Conclusion In this paper, we introduce adapter modules to PLMs for the Fed-MNMT problem to boost communication efficiency. We reduce the communication cost by over 98% and make the training process of Fed-MNMT practical. To deal with the problem of performance drop after introducing adapter modules, we propose different clustering strategies to separate clients into different groups to avoid the negative influence of data discrepancy. We surpass the direct baseline with a substantial gap, especially in the more complicated multi-tomulti translation setting. Furthermore, our analytic experiments indicate that both aggregation algorithms in server and clustering strategies affect the performance of FedMNMT. We also explore the possibility of further reducing communication costs by pruning adapter modules and find that adapters in top layers are more significant for translation performance. In future work, we will explore more welldesigned clustering strategies and attach other parameter-efficient techniques to adapter to further reduce the parameters to be transferred. ## Limitations First, in this work, we assume that clustering in the encoder and the decoder is only related to the source and target languages, respectively. Actually, both parameters in the encoder and the decoder are influenced by source and target languages simultaneously. Therefore, our assumption may lead to a performance drop. In future work, we plan to explore more complicated clustering strategies. Moreover, our *adapter-families* method depends on prior linguistic knowledge. Its actual effectiveness can be affected by the distribution of language families/groups in clients. Our methods mainly apply to comparably uniform language distribution. In addition, the effectiveness of our methods on other PLMs needs to be verified. However, it is easy to transfer our methods to other models so it will not be a challenging problem. ## Acknowledgements We thank all reviewers for their insightful comments and suggestions. This work is supported by Natural Science Foundation of China (NSFC) No.62176002. We sincerely thank Jingjing Xu for her valuable suggestions. Xu Sun is the corresponding author. ## References Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3874–3884, Minneapolis, Minnesota. Association for Computational Linguistics. Ankur Bapna and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1538– 1548, Hong Kong, China. Association for Computational Linguistics. Alexandra Chronopoulou, Dario Stojanovski, and Alexander Fraser. 2022. Language-family adapters for multilingual neural machine translation. ArXiv preprint, abs/2209.15236. Asa Cooper Stickland, Xian Li, and Marjan Ghazvininejad. 2021. Recipes for adapting pre-trained monolingual and multilingual models to machine translation. In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational* Linguistics: Main Volume, pages 3440–3453, Online. Association for Computational Linguistics. Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin. 2020. Beyond english-centric multilingual machine translation. Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021. Beyond english-centric multilingual machine translation. *J. Mach. Learn. Res.*, 22(107):1–48. Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. In *Proceedings* of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 866–875, San Diego, California. Association for Computational Linguistics. Deep Gandhi, Jash Mehta, Nirali Parekh, Karan Waghela, Lynette D'Mello, and Zeerak Talat. 2022. A federated approach to predicting emojis in hindi tweets. *ArXiv preprint*, abs/2211.06401. Suyu Ge, Fangzhao Wu, Chuhan Wu, Tao Qi, Yongfeng Huang, and Xing Xie. 2020. Fedner: Privacypreserving medical named entity recognition with federated learning. *arXiv e-prints*, pages arXiv– 2003. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of *Proceedings* of Machine Learning Research, pages 2790–2799. PMLR. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. *Transactions of the Association for Computational Linguistics*, 5:339–351. Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder. 2021. Compacter: Efficient low-rank hypercomplex adapter layers. Advances in Neural Information Processing Systems, 34:1022–1035. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of Machine Translation Summit X: Papers, pages 79–86, Phuket, Thailand. Lei Li, Yankai Lin, Deli Chen, Shuhuai Ren, Peng Li, Jie Zhou, and Xu Sun. 2021a. Cascadebert: Accelerating inference of pre-trained language models via calibrated complete models cascade. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 475–486. 
Lei Li, Yankai Lin, Shuhuai Ren, Peng Li, Jie Zhou, and Xu Sun. 2021b. Dynamic knowledge distillation for pre-trained language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 379–389. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April 2017, Fort Lauderdale, FL, USA, volume 54 of *Proceedings* of Machine Learning Research, pages 1273–1282. PMLR. Yifan Niu and Weihong Deng. 2022. Federated learning for face recognition with gradient correction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 1999–2007. Peyman Passban, Tanya Roosta, Rahul Gupta, Ankit Chadha, and Clement Chung. 2022. Training mixeddomain translation models via federated learning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2576–2586, Seattle, United States. Association for Computational Linguistics. Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2021. AdapterFusion: Non-destructive task composition for transfer learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 487–503, Online. Association for Computational Linguistics. Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulic, Sebastian Ruder, Kyunghyun ´ Cho, and Iryna Gurevych. 2020. AdapterHub: A framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 46–54, Online. Association for Computational Linguistics. Jerin Philip, Alexandre Berard, Matthias Gallé, and Laurent Besacier. 2020. Monolingual adapters for zero-shot neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4465–4470, Online. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2017. Learning multiple visual domains with residual adapters. In *Advances in Neural Information Processing Systems 30: Annual Conference* on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 506– 516. Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4512–4525, Online. Association for Computational Linguistics. Tanya Roosta, Peyman Passban, and Ankit Chadha. 2021. Communication-efficient federated learning for neural machine translation. *ArXiv preprint*, abs/2112.06135. Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2021. AdapterDrop: On the efficiency of adapters in transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7930–7946, Online and Punta Cana, Dominican Republic. 
Association for Computational Linguistics. Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in nlp. In *Proceedings of the 57th Annual Meeting of the Association for Computational* Linguistics, pages 3645–3650. Xu Tan, Jiale Chen, Di He, Yingce Xia, Tao Qin, and Tie-Yan Liu. 2019. Multilingual neural machine translation with language clustering. In *Proceedings of the 2019 Conference on Empirical Methods* in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 963–973, Hong Kong, China. Association for Computational Linguistics. Y. Tang, C. Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning. *ArXiv*, abs/2008.00401. Ahmet Üstün, Alexandre Berard, Laurent Besacier, and Matthias Gallé. 2021. Multilingual unsupervised neural machine translation with denoising adapters. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 6650–6662, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H. Vincent Poor. 2020. Tackling the objective inconsistency problem in heterogeneous federated optimization. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Orion Weller, Marc Marone, Vladimir Braverman, Dawn Lawrie, and Benjamin Van Durme. 2022. Pretrained models for multilingual federated learning. In *Proceedings of the 2022 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1413–1421, Seattle, United States. Association for Computational Linguistics. Jingjing Xu, Wangchunshu Zhou, Zhiyi Fu, Hao Zhou, and Lei Li. 2021. A survey on green deep learning. arXiv preprint arXiv:2111.05193. Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. 2020. Improving massively multilingual neural machine translation and zero-shot translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1628– 1639, Online. Association for Computational Linguistics. Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. 2018. Federated learning with non-iid data. *ArXiv preprint*, abs/1806.00582. ## A Training Data Sizes The specific training data size of each language pair is shown in Table 6. ## B Complete Results Of Ablation Study We show the complete results on all language pairs of the ablation study in Table 7. We find that only applying clustering to the decoder acquires the highest scores on 7 out of the total 12 language pairs. To our surprise, *adapterfamilies* method fails to reach the best performance in average scores, which has been explained in § 4.2 in the main text. ## C Uniform Data Distribution We also conduct experiments on TED2020 with each client owning training data of equal size. The results are shown in Table 8. We observe that the performance gaps between different methods are similar to those in Table 1. Notably, *Adapterfamilies* beats *adapter-random* by a slight margin. Both clustering strategies acquire obvious performance improvement compared to the baseline adapter-fed. These empirical results verify that our methods apply to various data distributions. 
| Corpus | Source Language Family/Group | Language Pair | Dataset Size | |---------------|--------------------------------|-----------------|----------------| | Sino-Tibetan | zh->en | 9984 | | | th->en | 4992 | | | | Afro-asiatic | ar->en | 9984 | | | he->en | 1920 | | | | TED2020 | Uralic | fi->en | 1920 | | et->en | 1920 | | | | Indo-European | ru->en | 9984 | | | sl->en | 1920 | | | | de->fr | 11648 | | | | nl->pl | 3584 | | | | Germanic | en->lt | 3712 | | | fr->nl | 12160 | | | | it->sl | 3456 | | | | Romance | es->lv | 3584 | | | Europarl | pl->en | 3712 | | | sl->es | 3584 | | | | Slavic | sl->lt | 3584 | | | lt->de | 3328 | | | | lv->it | 3584 | | | | Baltic | lv->pl | 3712 | | Table 6: Detailed data sizes of language pairs from Europarl corpus. ## D Comparison To Controllers Controllers (Roosta et al., 2021) only exchange 8 layers in a 32-layer Transformer (4 from encoder and 4 from decoder) between the server and clients, which means that they reduce the communication cost by approximately 66% (the number of layers in the original model without Controllers is 24). Compared with Controllers, we introduce adapter modules in Fed-MNMT without the need to define additional layers. Besides, our methods transmit a much smaller amount of parameters in client-toserver exchanges than using Controllers. Therefore, our proposal is superior to Controllers in terms of communication efficiency. | Method | de-fr | nl-pl | en-lt | fr-nl | it-sl | es-lv | pl-en | sl-es | sl-lt | lt-de | lv-it | lv-pl | Macro Avg. | Micro Avg. | |----------------------|---------|---------|---------|-------------------------------|-------------|-------------|---------|---------|---------|---------|-------------|---------|--------------|--------------| | adapter-fed | 29.66 | 17.35 | 30.22 | 16.89 | 11.95 | 27.91 | 37.31 | 31.04 | 20.13 | 21.18 | 17.95 | 19.97 | 23.46 | 23.44 | | + encoder clustering | 30.50 | 19.24 | 29.85 | 22.94 18.56 28.01 41.93 33.25 | 20.84 22.09 | 23.34 20.82 | 25.95 | 26.19 | | | | | | | | + decoder clustering | 31.19 | 19.41 | 30.53 | 23.06 18.26 28.20 41.71 33.56 | 21.26 21.96 | 23.03 | 21.06 | 26.10 | 26.42 | | | | | | | adapter-families | 30.18 | 18.85 | 29.86 | 22.42 | 16.10 | 27.66 | 41.13 | 32.96 | 20.78 | 21.80 | 22.78 21.27 | 25.48 | 25.75 | | | Method | zh-en | th-en | ar-en | he-en | fi-en | et-en | ru-en | sl-en | Avg. | |------------------|---------|---------|---------|---------|---------|---------|---------|---------|--------| | model-fed | 25.58 | 22.90 | 39.63 | 46.12 | 34.78 | 34.41 | 30.42 | 51.32 | 35.64 | | adapter-fed | 24.83 | 16.79 | 40.02 | 45.99 | 33.55 | 33.67 | 29.61 | 52.20 | 34.58 | | adapter-random | 25.30 | 22.33 | 39.58 | 45.59 | 34.58 | 34.31 | 30.12 | 51.52 | 35.41 | | adapter-families | 25.04 | 22.08 | 39.50 | 45.74 | 34.91 | 34.65 | 30.16 | 51.38 | 35.43 | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? section Limitations A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? abstract and section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. 
Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** Section 3, 4 And 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section 3 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? section 4 and 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? section 3.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
park-etal-2023-cross
Cross-task Knowledge Transfer for Extremely Weakly Supervised Text Classification
https://aclanthology.org/2023.findings-acl.328
Text classification with extremely weak supervision (EWS) imposes stricter supervision constraints compared to regular weakly supervised classification. Absolutely no labeled training samples or hand-crafted rules specific to the evaluation data are allowed. Such restrictions limit state-of-the-art EWS classification methods to indirect weak labeling techniques that assign unnatural label uncertainty estimates. We present PLAT, a framework that creates weak labels by leveraging recent developments in zero-shot text classification. PLAT employs models trained for sub-tasks other than classification to label documents. Most importantly, PLAT refrains from assigning overly confident weak labels and improves soft-label training performance for downstream classifiers. Classifiers trained with PLAT significantly outperform those trained on weak labels generated by the previous state-of-the-art in extremely weakly supervised text classification.
# Cross-Task Knowledge Transfer For Extremely Weakly Supervised Text Classification Seongmin Park Kyungho Kim Jihwa Lee ActionPower, Seoul, Republic of Korea {seongmin.park, kyungho.kim, jihwa.lee}@actionpower.kr ## Abstract Text classification with extremely weak supervision (EWS) imposes stricter supervision constraints compared to regular weakly supervised classification. Absolutely no labeled training samples or hand-crafted rules specific to the evaluation data are allowed. Such restrictions limit state-of-the-art EWS classification methods to indirect weak labeling techniques that assign unnatural label uncertainty estimates. We present PLAT, a framework that creates weak labels by leveraging recent developments in zero-shot text classification. PLAT employs models trained for sub-tasks other than classification to label documents. Most importantly, PLAT refrains from assigning overly confident weak labels and improves soft-label training performance for downstream classifiers. Classifiers trained with PLAT significantly outperform those trained on weak labels generated by the previous state-of-the-art in extremely weakly supervised text classification. ## 1 Introduction We undertake the low-resource task of categorizing an unlabeled set of documents using just candidate category labels. The task is a stricter subtask of weakly supervised text classification - weak labels cannot be obtained even through utilizing a small training set or hand-crafted rules based on domain knowledge. Such task formulation mimics a realistic and practical scenario where one has to classify a set of documents into a label from a pre-defined label set, using only class names. Following Wang et al. (2021) we call this task *classification with* extremely weak supervision (EWS). Due to such additional constraints on sources of supervision, models under EWS cannot trivially adapt recent state-of-the-art approaches under regular weak supervision. Best-performing methods for classification under EWS usually involve mining a set of category-indicative keywords from pre-trained language models (Meng et al., 2018; Mekala and Shang, 2020; Türker et al., 2020; Meng et al., 2020; Zeng et al., 2022). At evaluation time, each document is compared to the keyword set of each label. The weak label for a document is the label with the keyword set most similar to words that constitute the document. This divorcement of feature extraction and label assignment introduces additional noise during weak labeling, causing unnatural assignment of label confidence and oversensitivity to training size (Wang et al., 2021). We overcome such limitations in EWS by leveraging pre-trained language models to create weak labels in an end-to-end fashion. Most importantly, we eliminate the keyword-collection step in currently popular EWS approaches. We employ language models trained on non-classification tasks (textual entailment, next sentence prediction, and multiple-choice question-answering) as weak labelers for classification. Our research bridges weaklysupervised noisy-label training with recent developments in prompt-based low-shot text classification (Yin et al., 2019; Keskar et al., 2019). Our framework realizes both the robustness of noisylabel training and the label efficiency of prompting. We use publicly available, off-the-shelf models for each source task in our experiments. 
Our contributions are as follows: - We analyze the limitations of popular existing methods in EWS, especially in their unnatural assignment of pseudo-label confidence. - We present PLAT1, a framework that utilizes models trained in subtasks other than classification to create weak labels for classification. Downstream classifiers trained with our weak labels significantly outperform the previous state-of-the-art in difficult EWS datasets. - We analyze how cross-task weak labels act as better pseudo-labels, with roots in existing 1Pseudo-Labeling Across Tasks ## 2 Background 2.1 Weakly Supervised Text Classification Broadly, two lines of research exist in weaklysupervised text classification: obtaining better weak labels (Hancock et al., 2018; Chatterjee et al., 2020a; Rao et al., 2021; Zhang et al., 2021a, 2022a), and streamlining the training of downstream classifiers with the obtained noisy labels (Onoe and Durrett, 2019; Ren et al., 2020; Mekala et al., 2022; Yu et al., 2022; Kuang et al., 2022). PLAT focuses on improving the former by creating weak labels via knowledge transfer from models trained on tasks other than classification. Weak labels were traditionally assigned using manually written rules (Cachay et al., 2021; Zhang et al., 2022a). Since rule-based labeling necessitates domain knowledge and hand-crafted rules specific to each dataset, much research efforts focused on automatic rule generation. However, even automatically generated rules cannot be used in situations that require EWS because the process either necessitates a small labeled dataset of the same classification task (Varma and Ré, 2018; Banerjee et al., 2019; Sukumaran et al., 2022), or human feedback is required in the iterative learning process (Zhang et al., 2022b). Under EWS, we require a method that fully automates the weaklabeling process, without any classification datasets or dataset-specific domain knowledge. PLAT employs cross-task knowledge transfer to achieve completely automated weak-labeling without any labeled classification data. ## 2.2 Cross-Task Knowledge Transfer In cross-task knowledge transfer, a model trained for a specific source task solves a different target task. Cross-task knowledge transfer is useful when labeled training data is scarce for the target task but is abundant or unnecessary for the source task (Egonmwan et al., 2019; Lin et al., 2021). Such preconditions make cross-task knowledge transfer naturally suitable for pseudo-labeling in weakly supervised training. In Egonmwan et al. (2019), for instance, question-answering models are used to create weak summary labels. Because data efficacy of weak labelers is a prerequisite under weak supervision, recently popular zero-shot classification methods based on prompting (Brown et al., 2020; Liu et al., 2021; Sanh et al., 2022) are appealing approaches to automatic label creation. In the EWS setup, only cross-task zero-shot labelers can be used, because the task prohibits any labeled data for classification. Although cross-task knowledge transfer with a classification target task is extensively researched (Hancock et al., 2018; Wang et al., 2019; Khodorchenko, 2019; Rao et al., 2021; Chatterjee et al., 2020a), we are the first to explore prompt-based, cross-task distillation for weak-labeling in text classification. Zhang et al. (2021a) also uses language model prompting for weak label generation, but its weaklabeler is a text classification model and thus is not a cross-task setup. In concurrent work, Smith et al. 
(2022) also prompts language models for zero-shot weak label generation. The research leverages multi-task models trained with multiple source tasks, either with extremely large scale (GPT-3) or already on text classification source tasks (T0++). In contrast, our work focuses on cross-task knowledge distillation capabilities of data-efficient, single-task models, each trained for a different non-classification source task. Smith et al. (2022) focuses on zero-shot capabilities that emerge from extreme-scale text generation models, while our work explores methods to handle various model output types (open-ended, binary, and multiple-choice) for weak labeling. We further provide qualitative analysis on the confidence assigned to each weak label. ## 2.3 Common Approaches To Text Classification Under Ews Popular methods under the EWS constraint employ a keyword-set matching scheme for weak labeling. Keywords for each label are auto-generated by mining pre-trained language models. Throughout this paper, we call these methods *keyword-based EWS*. WeSTClass (Meng et al., 2018) augments training data by creating pseudo-documents from seed words for each class. ConWea (Mekala and Shang, 2020) uses masked language models to discern overlapping keywords with context. Contextinfused keyword set for each class is matched with documents for weak labeling. WESSTEC (Türker et al., 2020) queries a knowledge base for information about each label and calculates the similarity between a document's vector representation to each label knowledge embedding. LOTClass (Meng et al., 2020) enriches each label's keyword set by collecting possible replacement words for ![2_image_0.png](2_image_0.png) every label from masked language models. X-Class (Wang et al., 2021) force-aligns document representations to label embeddings for weak labeling and achieves state-of-the-art results in EWS. All aforementioned methods train a downstream classifier such as BERT (Devlin et al., 2019) or RoBERTa (Liu et al., 2019) as the second step in their pipelines. In succeeding work, ClassKG (Zhang et al., 2021b) posits EWS as a keywordsubgraph annotation task and takes keyword correlation into account. PLAT departs from existing conventions by eliminating the keyword collection process from the EWS pipeline. We directly mine weak predictions instead of keywords from trained language models. In a similar and concurrent work as ours, WDDC (Zeng et al., 2022) also uses zero-shot prompting in pre-trained language models to create weak labels. WDDC uses cloze prompts to extract keyword sets (to be compared against document words, as in most aforementioned works), which is significantly different from PLAT that *assigns classification labels directly with the prompts*. We choose LOTClass and X-Class as baselines in our experiments, for their state-of-the-art results and reproducibility. ## 3 **Problems With Keyword-Matching Ews** Compared to simple supervised classification, EWS methods mentioned in Section 2.3 introduce two additional steps to the training pipeline: building category-indicative keyword sets and assigning weak labels to unlabeled documents using the built keyword sets. Our investigations show that the disjoint nature of such approaches leads to unwanted ![2_image_1.png](2_image_1.png) ## 3.1 Uninformative Label Confidence Accurate estimates of label uncertainty are important in noisy training scenarios commonly used in weakly-supervised classification (Meng et al., 2020; Yuan et al., 2020). 
The keyword-matching process used in state-of-the-art EWS forces weak labelers to gauge weak label confidence indirectly. We find that weak labels obtained this way are often coupled with unreliable label confidence that is sensitive to hyperparameters such as the size of the keyword set or the evaluation set. In X-Class, measuring the distance of a document's embedding from its label cluster center is the only way to measure label uncertainty. In LOTClass, prediction confidence is the number of keywords a document contains from its pseudo-label keyword set. Even though PLAT uses non-classification models for weak labeling, much more natural confidence is assigned to its weak labels. Label confidence of correct predictions is higher on average compared to that of wrong predictions (Figure 1). In contrast, LOTClass and X-Class assign similar confidence to both. Weak labels created by LOTClass and X-Class show drastic drops in pseudo-label count as the label confidence threshold increases (Figure 2). Such overconfidence in label quality estimates can hinder downstream classifier performance (Wei et al., 2022; Jiang et al., 2021).

## 3.2 Inability To Handle Complex Class Names

Keyword-based EWS relies on mining words within documents in the evaluation set and extracting category-indicative words for each class. Therefore, even state-of-the-art methods require class names to be either lexically or contextually descriptive. In clickbait classification, for example, the word "clickbait" does not exist among the news headlines the model has to classify. In such cases, keyword-based EWS methods have no anchor within the documents to extract category-indicative keywords for the word "clickbait". Wang et al. (2021) shows existing EWS methods falter when the label names do not appear in the documents to be weakly labeled. We observe the same phenomenon even with keyword-based EWS methods that consider language context. Robustness further deteriorates when label names are more complex, such as consisting of multiple tokens. Most keyword-based EWS methods use masked language models for context-aware keyword search, and it is not straightforward to consolidate sequence vectors into a single, contextual vector to be used in the clustering algorithms of keyword-based EWS. Our method sidesteps such limitations by flexibly handling any label name through language model prompting.

## 3.3 Sensitivity To Dataset Size

Weak-label quality of keyword-based EWS also relies heavily on the size of the test set. While EWS methods require no labeled documents for training, existing methods still require a sizable amount of unlabeled data to perform well. At their core, most keyword-based EWS methods aim to generate cluster centers for each class by leveraging textual information in unlabeled documents. A smaller evaluation set will naturally result in lower-quality cluster boundaries. To overcome such reliance on dataset size, we propose a way to weakly label each document in a zero-shot manner. We confirm this intuition by comparing F1 scores of weak labels created by keyword-based EWS (X-Class) and by PLAT (Figure 3) at varying confidence thresholds.

![3_image_0.png](3_image_0.png)

## 4 PLAT

PLAT draws inspiration from recent findings in zero-shot language model prompting (Yin et al., 2019; Keskar et al., 2019; Ma et al., 2021) to obtain weak labels for unlabeled documents. In the EWS setting, PLAT leverages source models trained on a single non-classification task to solve classification tasks.
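As a concrete illustration of this idea, the sketch below implements an entailment-style weak labeler of the kind formalized in §4.1.1, using the off-the-shelf `facebook/bart-large-mnli` checkpoint that §5.2 reports for the entailment-based labeler and the topic verbalizer "This text is about <class name>." It is a simplified reading of the method (a single document, one template, no batching), not the authors' released code, and the example label set is hypothetical.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# NLI checkpoint reported for the entailment-based labeler in Section 5.2 (used as-is here).
MODEL_NAME = "facebook/bart-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
nli_model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
nli_model.eval()

def entailment_weak_label(document, class_names, template="This text is about {}."):
    """Return (hard_label_index, soft_label_distribution) for one unlabeled document."""
    hypotheses = [template.format(c) for c in class_names]        # verbalized classes
    enc = tokenizer([document] * len(hypotheses), hypotheses,
                    return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        logits = nli_model(**enc).logits                          # shape: (num_classes, 3)
    # bart-large-mnli label order: 0 = contradiction, 1 = neutral, 2 = entailment
    entail_prob = logits.softmax(dim=-1)[:, 2]                    # entailment probability per class
    soft_label = torch.softmax(entail_prob, dim=0)                # distribution over all classes
    hard_label = int(torch.argmax(entail_prob))                   # single most likely class
    return hard_label, soft_label

classes = ["politics", "sports", "business", "technology"]        # hypothetical label set
hard, soft = entailment_weak_label(
    "Stocks rallied after the central bank held interest rates steady.", classes)
print(classes[hard], soft.tolist())
```

The returned hard and soft labels play the roles of the per-document class prediction and class distribution defined in §4.1, which later feed the hard-label and confidence-aware training objectives of the final classifier.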
PLAT follows the typical two-step weaklysupervised classification pipeline. In the first phase, PLAT creates weak labels for classification. In the second phase, a final classifier is trained using the obtained weak labels. The novelty of PLAT lies in improving the weak labeling phase with cross-task knowledge distillation. We aim to keep the PLAT framework source-model-agnostic and make the weak labelers hot-swappable, taking advantage of parallel advances in zero-shot NLP. ## 4.1 Cross-Task Weak Labeling We test three different models for weak labeling, each trained on a single task: entailment (PLATENT), next sentence prediction (PLATNSP), and multiple-choice question answering (PLATQA). Although each task and corresponding model have different input and output formats, adding appropriate prompts can reduce all tasks into indirect classification tasks. | Dataset | Type | # of Classes | Dataset size | Average word count per sample | |-----------|---------------------|----------------|----------------|---------------------------------| | AGNews | News topic | 4 | 7,600 | 37.72 | | Yahoo | News topic | 10 | 60,000 | 10.70 | | DBpedia | Article topic | 14 | 70,000 | 46.14 | | Clickbait | Clickbait detection | 2 | 16,000 | 9.09 | Table 1: Datasets to benchmark PLAT against keyword-based EWS. Let X = {x0*, . . . , x*m} be a set of unlabeled documents to be classified. A weak labeler must categorize xi as a single class from the set of all possible classes C = {c0*, ..., c*n}. For every xi ∈ X, PLAT's pseudo-labelers generate two kinds of labels: a hard label Hi ∈ C, which is the single most likely class that xi belongs to, and a soft label Si, which is a categorical distribution over C expressing the class probability of xi. We train separate downstream classifiers using hard and soft labels, to compare how taking label uncertainty into account can help in weakly-supervised classification. ## 4.1.1 Entailment (Platent) Yin et al. (2019) explores zero-shot text classification through entailment. Similarly, we pose classification as an entailment task by ranking entailment probabilities of xi and each verbalized class. A verbalized class (Schick and Schütze, 2021) is the every class name in the form of a sentence, adapted to appear as an input to the entailment model. Verbalizers can be adapted for each classification task. For topic classification, the verbalizer could be *"This text is about <class name>."* For spam detection, the verbalizer to represent the spam class could be *"This is an ad"*. The full set of verbalizers used is detailed in the Experiments section. VENT is a set of all verbalized class names: $$V_{E N T}=\{v e r b a l i z e r(c)\mid c\in C\}.\qquad(1)$$ For every xi ∈ X, we construct a set of all pairs of xi and each verbalized label v ∈ VENT , between all of which we calculate textual entailment: $$P a i r s_{E N T}^{i}=\{(x_{i},v)\mid v\in V_{E N T}\}.\quad(2)$$ Entailment model MENT is a model that takes a sentence pair (s1, s2) and calculates the probabilities that sentence s1 entails, contradicts, or has no relation to sentence s2. In this work, we only use entailment probabilities. We use MENT to calculate the entailment probability of every (xi, v) ∈ P airsiENT . $$Probs^{i}_{ENT}=$$ $$\{M_{ENT}(x_{i},v)\mid(x_{i},v)\in\mbox{\it Pairs}^{i}_{ENT}\}.\tag{3}$$ The hard label $H_{i}$ for $x_{i}$ is $argmax(Probs^{i}_{ENT})$, and the soft label Siis sof tmax(P robsiENT ). 4.1.2 Next sentence prediction (PLATNSP) Ma et al. 
(2021) finds that next sentence prediction (NSP) and reverse NSP models perform on par with entailment in zero-shot text classification. In our experiments, NSP and reverse NSP weak labelers had a negligible difference in final classifier performance. We choose reverse NSP for higher reported classification scores in Ma et al. (2021). We use the same verbalizers (VNSP = VENT ) as in entailment-based weak labeling in the preceding section. For every xi ∈ X, we construct a set of all pairs of xi and each verbalized label v ∈ VNSP , similar to P airsiENT in 4.1.1: $$P a i r s_{N S P}^{i}=\{(x_{i},v)\mid v\in V_{N S P}\}.\quad\quad(4)$$ For all v ∈ V , we calculate probabilities that xi appears after each v. NSP Model MNSP takes a sentence pair (s1, s2) and calculates the probability that s2 appears after s1. $$Probs^{i}_{NSP}=$$ $$\{M_{NSP}(v,x_{i})\mid(x_{i},v)\in\mbox{\it Pair}^{i}_{NSP}\}.\tag{5}$$ The hard label $H_{i}$ for $x_{i}$ is $argmax(Probs^{i}_{NSP})$, and the soft label Siis sof tmax(P robsiNSP ). ## 4.1.3 Multiple-Choice Question-Answering (Platqa) A multiple-choice question-answering (QA) model MQA takes a context, a question, and answer choices, and returns the distribution of answer possibility over the answer choices. To pose QA as a classification task, we set the context as each xi, | Model | AGNews | Yahoo | DBpedia | Clickbait | |---------------------|---------------|---------------|---------------|---------------| | Supervised | 93.97 / 93.97 | 72.11 / 72.64 | 99.11 / 99.11 | 98.57 / 98.58 | | LOTClass Hard label | 25.63 / 19.47 | 9.93 / 5.37 | 6.89 / 6.29 | 44.17 / 43.18 | | Final classifier | 25.00 / 10.00 | 10.00 / 1.82 | 0.80 / 0.17 | 50.00 / 33.33 | | X-Class Hard label | 61.82 / 57.81 | 40.35 / 42.53 | 88.17 / 87.91 | 23.79 / 23.72 | | Final classifier | 62.87 / 58.81 | 41.76 / 43.86 | 88.52 / 88.21 | 21.22 / 20.81 | | PLATENT Hard label | 64.88 / 60.07 | 54.17 / 54.91 | 81.78 / 80.84 | 51.35 / 36.48 | | Final classifier | 64.86 / 57.73 | 55.29 / 56.45 | 82.62 / 81.43 | 50.00 / 33.33 | | PLATNSP Hard label | 64.79 / 61.90 | 49.45 / 47.39 | 39.90 / 44.18 | 77.23 / 77.03 | | Final classifier | 60.87 / 56.63 | 52.46 / 50.04 | 41.32 / 44.97 | 79.68 / 79.50 | | PLATQA Hard label | 80.86 / 80.67 | 41.44 / 44.59 | 83.32 / 82.54 | 83.87 / 83.80 | | Final classifier | 81.72 / 81.57 | 43.83 / 46.88 | 84.91 / 84.00 | 87.44 / 87.40 | the question as a dataset-specific prompt p, and the answer choices as verbalized versions of all classes. The question forces the model to select one verbalized element of C as an answer. The full set of prompts and verbalizers used in PLATQA is detailed in the experiments section. Even though prompt p is dataset-specific, its construction does not require domain knowledge, and instead depends on the *type* of classification (I.e. topic classification, location classification, etc.). Formally defined, the weak label given the document. The loss function is a standard cross-entropy objective: $$\mathcal{L}_{hard}=-\sum_{i=0}^{m}\sum_{j\in C}y(H_{i})\log(B(x_{i})_{j}),\tag{8}$$ where $y(H_{i})$ is 1 only if $j=H_{i}$ and 0 otherwise. B(xi)j is the prediction confidence of classifier for class cj ∈ C on document xi. 4.2.2 Training with soft labels We adopt a similar objective function when training with confidence-aware soft labels. The final classifier is trained to minimize the divergence between the one-hot model prediction and the soft confidence distribution from the weak labeler over the set of all possible class names. 
The classifier's objective function becomes: $${\mathcal{L}}_{s o f t}=-\sum_{i=0}^{m}\sum_{j\in C}S_{i}^{j}\log(B(x_{i})_{j}),\quad\quad(9)$$ where S j i is the weak label confidence of specific class j ∈ C from overall confidence distribution assigned to xi. ## 5 Experiments And Results We qualitatively analyze PLAT in two aspects: accuracy of generated weak labels, and prediction $\mathbf{a}$. where $$P r o b s_{Q A}^{i}=M_{Q A}(x_{i},p,V_{Q A}),$$ $$V_{QA}=\{\mbox{\it verbalizer}(c)\mid c\in C\}.\tag{7}$$ and label $H$ for $n$ is convex ($P$ and $i$ The hard label Hi for xiis argmax(*P robs*iQA) and the soft label Siis sof tmax(*P robs*iQA). ## 4.2 Final Classifier Training A separate text classifier is trained with obtained weak labels. We use BERT in all our experiments. The final classifier is the output model of PLAT. ## 4.2.1 Training With Hard Labels Given a set of hard labels {H0*, ..., H*i} created by a weak-label generator, we train a downstream classifier B by maximizing the likelihood of predicting | Model | AGNews | Yahoo | DBpedia | Clickbait | |------------|-------------------------------|-------------------------------|-------------------------------|-------------------------------| | Supervised | 93.97 / 93.97 | 72.11 / 72.64 | 99.11 / 99.11 | 98.57 / 98.58 | | LOTClass | 25.00 (+0.00) / 10.00 (+0.00) | 10.00 (+0.00) / 1.82 (+0.00) | 3.39 (+2.59) / 0.54 (+0.37) | 50.00 (+0.00) / 33.33 (+0.00) | | X-Class | 60.61 (-2.26) / 52.84 (-5.96) | 41.07 (-0.69) / 43.16 (-0.70) | 88.70 (+0.18) / 88.38 (+0.17) | 20.38 (-0.84) / 20.20 (-0.61) | | PLATENT | 66.41 (+1.55) / 60.01 (+2.28) | 55.39 (+0.10) / 56.20 (-0.24) | 82.99 (+0.36) / 82.16 (+0.73) | 50.59 (+0.59) / 34.77 (+1.44) | | PLATNSP | 67.68 (+6.82) / 64.19 (+7.56) | 52.30 (-0.16) / 50.34 (+0.30) | 40.20 (-1.12) / 43.23 (-1.74) | 81.22 (+1.54) / 81.07 (+1.57) | | PLATQA | 81.99 (+0.26) / 81.83 (+0.26) | 43.32 (-0.50) / 46.53 (-0.35) | 86.38 (+1.46) / 85.74 (+1.74) | 88.38 (+0.94) / 88.37 (+0.97) | Table 3: Final classifier performance on 4 classification benchmarks after training with soft labels. Numbers in ![6_image_0.png](6_image_0.png) parenthesis indicate absolute increase in F1 scores compared to hard label results in Table 2. classification performance of the final classifier trained with the weak labels. For the latter, we measure classifier performance in both hard- and softlabel (confidence-aware) training. The same configuration for training the final classifier is applied to all variants of PLAT and baseline weak labelers. Classifier performance is measured in macro- and micro-F1 scores. ## 5.1 Datasets We test PLAT on topic and clickbait classification datasets. For topic classification, we use AGNews (Zhang et al., 2015), Yahoo Topics (Zhang et al., 2015), and DBpedia (Zhang et al., 2015). We use Clickbait Detection (Chakraborty et al., 2016) for clickbait classification. Table 1 provides a detailed description of each benchmark dataset. For topic classification datasets, we use the verbalizer *"This text is about <class name>"* for all models and the prompt *"What is this text about?"* for PLATQA. For clickbait classification, we use the verbalizers *"This is <news/spam>"* and the prompt "Is this news or spam?" for PLATQA. ## 5.2 Source Models For Weak-Labeling We use publicly available cross-task labelers in all variants of PLAT. For PLATENT, we use BART2 (Lewis et al., 2020) trained on MNLI (Williams et al., 2018). 
For PLATNSP, we use BERT3trained 2https://huggingface.co/facebook/bart-large-mnli 3https://huggingface.co/bert-large-cased with standard token unmasking and NSP objectives. For PLATQA, we use RoBERTa4 model trained on RACE (Lai et al., 2017). To train the final classifier with weak labels generated by aforementioned models, we fine-tune a pre-trained BERT model5 with a constant learning rate of 5e−5. We use an AdamW optimizer (Loshchilov and Hutter, 2018) with β1 = 0.9, β2 = 0.999*, eps* = 1e−6, and no weight decay. ## 5.3 Hard Label Training Results Classification performance of the final classifier for baseline weak-labelers and variants of PLAT is detailed in Table 2. Weak labels generated with PLAT yield notably higher F1 scores compared to those generated by baselines, except on DBpedia. The high performance of X-Class on DBpedia can be attributed to longer average document length and greater test set size. Compared to other datasets, DBpedia provides a greater amount of raw text from which keyword-based baselines can mine category-indicative keywords. In such settings, PLAT's zero-shot capability does not provide an advantage as great in scenarios with fewer resources. ![7_image_0.png](7_image_0.png) ## Confidence-Aware Training Results 5.4 Classification performance after confidence-aware training is detailed in Table 3 . PLAT also notably outperforms baselines with soft labels, taking better advantage of confidence-aware training. Our results confirm past research in knowledge distillation that accurate estimates of label uncertainty lead to better model calibration (Chatterjee et al., 2020b; Rizve et al., 2021 ). Earlier works in EWS try retaining only labels with confidence over a certain threshold δ . In a noisy-training scenario, a trade-off exists between retaining a large number of training examples and average label confidence. Our work confirms findings in Wang et al. ( 2021 ) that excessively high label δ degrades final classifier performance (Figure 4). While tuning the threshold parameter results in a higher increase in F 1 scores for PLAT, we report scores at δ = 0 for a fair comparison with previous work and to eliminate δ as a hyperparameter. ## Classification Difficulty Analysis 5.5 We analyze how the performance of each weak labeler changes according to the classification difficulty of each sample. Classification difficulty of a sample is defined as the number of weak labelers that made wrong predictions. Since we compare 5 models, the maximum difficulty is 5. PLAT assigns a much more "natural" confidence distribution, where the model is confident about lowdifficulty questions while comparatively uncertain about high-diffi culty questions (Figure 5). Models that fail to show such graduality tend to make inaccurate predictions (LOTClass in AGNews and Yahoo, X-Class in Clickbait, and PLAT NSP in DB- pedia), especially on more difficult samples. ## Conclusion 6 We present three variants of PLAT, a framework for text classification under extremely weak supervision. By eliminating keyword-based weak labeling, PLAT sidesteps the brittle dependence on evaluation set size and hyperparameters found in previous state-of-the art methods. PLAT is a flexible framework that leverages prompting to generate weak labels with more natural confidence estimates. PLAT makes no assumptions about the training dynamics of its source models. Therefore, evolutions of source models are completely orthogonal to developments in PLAT. 
The black-box treatment of its weak labeler models enables the usage of completely unsupervised weak labelers - a potential already demonstrated by PLAT NSP . We expect future developments in unsupervised solutions to enable even more resource-efficient classification uder PLAT. ## Limitations We identify the following limitations of PLAT and strategies to overcome such drawbacks: - *Performance of the final classifier is dependent on the black-box source weak labeler.* We believe this limitation can be worked around in a real-word setting by ensembling source models to vote on a likely weak label for practical accuracy gains. - *Best-performing source models might differ* for different tasks. The dataless nature of EWS prevents precursory accuracy evaluations while choosing the source weak labeler model. However, quality of candidate weak labelers can be gauged indirectly. Users can examine confidence distributions of weak labels (as in Figure 1 and Figure 2) as an indicator of pseudo-label "naturalness". They can also perform difficulty analysis (as shown in Figure 5(a)) that does not require any labeled data. In a real-world scenario, ensemble weak labelers will be used, eliminating the need to choose a single best source model. ## References Siddhartha Banerjee, Cem Akkaya, Francisco PerezSorrosal, and Kostas Tsioutsiouliklis. 2019. Hierarchical transfer learning for multi-label text classification. In *Proceedings of the 57th Annual Meeting of* the Association for Computational Linguistics, pages 6295–6300, Florence, Italy. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901. Salva Rühling Cachay, Benedikt Boecking, and Artur Dubrawski. 2021. End-to-end weak supervision. In Advances in Neural Information Processing Systems. Abhijnan Chakraborty, Bhargavi Paranjape, Sourya Kakarla, and Niloy Ganguly. 2016. Stop clickbait: Detecting and preventing clickbaits in online news media. In 2016 IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM), pages 9–16. IEEE. Oishik Chatterjee, Ganesh Ramakrishnan, and Sunita Sarawagi. 2020a. Robust data programming with precision-guided labeling functions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 3397–3404. Oishik Chatterjee, Ganesh Ramakrishnan, and Sunita Sarawagi. 2020b. Robust data programming with precision-guided labeling functions. In The ThirtyFourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 3397–3404. AAAI Press. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Elozino Egonmwan, Vittorio Castelli, and Md Arafat Sultan. 2019. Cross-task knowledge transfer for query-based text summarization. 
In *Proceedings of* the 2nd Workshop on Machine Reading for Question Answering, pages 72–77, Hong Kong, China. Association for Computational Linguistics. Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher Ré. 2018. Training classifiers with natural language explanations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1884–1895, Melbourne, Australia. Association for Computational Linguistics. Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. 2021. How can we know when language models know? on the calibration of language models for question answering. *Transactions of the Association for Computational Linguistics*, 9:962–977. Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Unifying question answering, text classification, and regression via span extraction. *arXiv preprint arXiv:1904.09286*. Maria Khodorchenko. 2019. Distant supervision and knowledge transfer for domain-oriented text classification in online social networks. *Procedia Computer* Science, 156:166–175. Zhaobin Kuang, Chidubem G. Arachie, Bangyong Liang, Pradyumna Narayana, Giulia Desalvo, Michael S. Quinn, Bert Huang, Geoffrey Downs, and Yang Yang. 2022. Firebolt: Weak supervision under weaker assumptions. In *Proceedings of The 25th International Conference on Artificial Intelligence and* Statistics, volume 151 of Proceedings of Machine Learning Research, pages 8214–8259. PMLR. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880. Zhaojiang Lin, Bing Liu, Andrea Madotto, Seungwhan Moon, Zhenpeng Zhou, Paul Crook, Zhiguang Wang, Zhou Yu, Eunjoon Cho, Rajen Subba, and Pascale Fung. 2021. Zero-shot dialogue state tracking via cross-task transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7890–7900, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *ArXiv*, abs/1907.11692. Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In *International Conference on Learning Representations*. Tingting Ma, Jin-Ge Yao, Chin-Yew Lin, and Tiejun Zhao. 2021. Issues with entailment-based zero-shot text classification. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 786–796, Online. Association for Computational Linguistics. Dheeraj Mekala, Chengyu Dong, and Jingbo Shang. 2022. 
Lops: Learning order inspired pseudo-label selection for weakly supervised text classification. arXiv preprint arXiv:2205.12528. Dheeraj Mekala and Jingbo Shang. 2020. Contextualized weak supervision for text classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 323–333, Online. Association for Computational Linguistics. Yu Meng, Jiaming Shen, Chao Zhang, and Jiawei Han. 2018. Weakly-supervised neural text classification. In *Proceedings of the 27th ACM International Conference on Information and Knowledge Management*, CIKM '18, page 983–992, New York, NY, USA. Association for Computing Machinery. Yu Meng, Yunyi Zhang, Jiaxin Huang, Chenyan Xiong, Heng Ji, Chao Zhang, and Jiawei Han. 2020. Text classification using label names only: A language model self-training approach. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9006–9017, Online. Association for Computational Linguistics. Yasumasa Onoe and Greg Durrett. 2019. Learning to denoise distantly-labeled data for entity typing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2407–2417, Minneapolis, Minnesota. Association for Computational Linguistics. Nikitha Rao, Chetan Bansal, and Joe Guan. 2021. Search4code: Code search intent classification using weak supervision. In 2021 IEEE/ACM 18th International Conference on Mining Software Repositories (MSR), pages 575–579. IEEE. Wendi Ren, Yinghao Li, Hanting Su, David Kartchner, Cassie Mitchell, and Chao Zhang. 2020. Denoising multi-source weak supervision for neural text classification. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 3739–3754, Online. Association for Computational Linguistics. Mamshad Nayeem Rizve, Kevin Duarte, Yogesh S Rawat, and Mubarak Shah. 2021. In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning. In International Conference on Learning Representations. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In *International Conference on Learning* Representations. Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269, Online. Association for Computational Linguistics. Ryan Smith, Jason A Fries, Braden Hancock, and Stephen H Bach. 2022. Language models in the loop: Incorporating prompting into weak supervision. arXiv preprint arXiv:2205.02318. Rohan Sukumaran, Sumanth Prabhu, and Hemant Misra. 2022. Enhanced text classification using proxy labels and knowledge distillation. 
In 5th Joint International Conference on Data Science Management of Data (9th ACM IKDD CODS and 27th COMAD), CODSCOMAD 2022, page 227–230, New York, NY, USA. Association for Computing Machinery. Rima Türker, Lei Zhang, Mehwish Alam, and Harald Sack. 2020. Weakly supervised short text categorization using world knowledge. In The Semantic Web - ISWC 2020: 19th International Semantic Web Conference, Athens, Greece, November 2–6, 2020, Proceedings, Part I, page 584–600, Berlin, Heidelberg. Springer-Verlag. Paroma Varma and Christopher Ré. 2018. Snuba: Automating weak supervision to label training data. In Proceedings of the VLDB Endowment. International Conference on Very Large Data Bases, volume 12, page 223. NIH Public Access. Yanshan Wang, Sunghwan Sohn, Sijia Liu, Feichen Shen, Liwei Wang, Elizabeth J Atkinson, Shreyasee Amin, and Hongfang Liu. 2019. A clinical text classification paradigm using weak supervision and deep representation. *BMC medical informatics and decision making*, 19(1):1–13. Zihan Wang, Dheeraj Mekala, and Jingbo Shang. 2021. X-class: Text classification with extremely weak supervision. In *Proceedings of the 2021 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3043–3053, Online. Association for Computational Linguistics. Hongxin Wei, Renchunzi Xie, Hao Cheng, Lei Feng, Bo An, and Yixuan Li. 2022. Mitigating neural network overconfidence with logit normalization. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3914–3923, Hong Kong, China. Association for Computational Linguistics. Peilin Yu, Tiffany Ding, and Stephen H Bach. 2022. Learning from multiple noisy partial labelers. In International Conference on Artificial Intelligence and Statistics, pages 11072–11095. PMLR. Li Yuan, Francis EH Tay, Guilin Li, Tao Wang, and Jiashi Feng. 2020. Revisiting knowledge distillation via label smoothing regularization. In *2020* IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3902–3910. Ziqian Zeng, Weimin Ni, Tianqing Fang, Xiang Li, Xinran Zhao, and Yangqiu Song. 2022. Weakly supervised text classification using supervision signals from a language model. arXiv preprint arXiv:2205.06604. Jieyu Zhang, Cheng-Yu Hsieh, Yue Yu, Chao Zhang, and Alexander Ratner. 2022a. A survey on programmatic weak supervision. *arXiv preprint* arXiv:2202.05433. Jieyu Zhang, Bohan Wang, Xiangchen Song, Yujing Wang, Yaming Yang, Jing Bai, and Alexander Ratner. 2021a. Creating training sets via weak indirect supervision. *arXiv preprint arXiv:2110.03484*. Lu Zhang, Jiandong Ding, Yi Xu, Yingyao Liu, and Shuigeng Zhou. 2021b. Weakly-supervised text classification based on keyword graph. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2803–2813, Online and Punta Cana, Dominican Republic. 
Association for Computational Linguistics. Rongzhi Zhang, Yue Yu, Pranav Shetty, Le Song, and Chao Zhang. 2022b. Prompt-based rule discovery and boosting for interactive weakly-supervised learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 745–758, Dublin, Ireland. Association for Computational Linguistics. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In *Advances in Neural Information Processing Systems*, volume 28. Curran Associates, Inc. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Last section A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C ✓ **Did You Run Computational Experiments?** 5 ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Low resource study; computing infrastructure was almost all-encompassing. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Left blank. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 
Detailed in footnotes ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
mohbat-etal-2023-gvdoc
{GV}doc - Graph-based Visual {DO}cument Classification
https://aclanthology.org/2023.findings-acl.329
The robustness of a model for real-world deployment is decided by how well it performs on unseen data and distinguishes between in-domain and out-of-domain samples. Visual document classifiers have shown impressive performance on in-distribution test sets. However, they tend to have a hard time correctly classifying and differentiating out-of-distribution examples. Image-based classifiers lack the text component, whereas multi-modality transformer-based models face the token serialization problem in visual documents due to their diverse layouts. They also require a lot of computing power during inference, making them impractical for many real-world applications. We propose, GVdoc, a graph-based document classification model that addresses both of these challenges. Our approach generates a document graph based on its layout, and then trains a graph neural network to learn node and graph embeddings. Through experiments, we show that our model, even with fewer parameters, outperforms state-of-the-art models on out-of-distribution data while retaining comparable performance on the in-distribution test set.
# Gvdoc: Graph-Based Visual Document Classification Fnu Mohbat1∗ , Mohammed J. Zaki1, Catherine Finegan-Dollak2†**, Ashish Verma**3† 1Rensselaer Polytechnic Institute, 2University of Richmond, 3Amazon [email protected], [email protected], [email protected], [email protected] ## Abstract The robustness of a model for real-world deployment is decided by how well it performs on unseen data and distinguishes between in-domain and out-of-domain samples. Visual document classifiers have shown impressive performance on in-distribution test sets. However, they tend to have a hard time correctly classifying and differentiating out-ofdistribution examples. Image-based classifiers lack the text component, whereas multimodality transformer-based models face the token serialization problem in visual documents due to their diverse layouts. They also require a lot of computing power during inference, making them impractical for many real-world applications. We propose, GVdoc, a graph-based document classification model that addresses both of these challenges. Our approach generates a document graph based on its layout, and then trains a graph neural network to learn node and graph embeddings. Through experiments, we show that our model, even with fewer parameters, outperforms state-of-the-art models on out-of-distribution data while retaining comparable performance on the in-distribution test set. ## 1 Introduction Documents digitization and their intelligent processing in various industries such as finance, insurance, and medicines has resulted in the rapid development of structured document understanding methods, a.k.a. document AI. Document classification is one of the essential tasks in document AI for labeling documents. A number of deep convolutional neural network (CNN) and Transformerbased models have achieved superior performance on many document-AI tasks (Xu et al., 2021; Lee et al., 2021, 2022). However, they tend to employ bigger models with hundreds of millions of parameters, subsequently increasing computational demand that can be a challenge in real-world applications. Yet many of them fail to perform well on out-of-distribution (OOD) data (Larson et al., 2021, 2022). This is because, in many cases, training and testing examples are from a fixed distribution − such as a particular language, time frame, and industry. However, the layout of the documents evolves over time, and the model should perform well on such out-of-distribution data. Further, the model is expected to be able to differentiate between known and unknown categories of documents, thus minimizing false-positive predictions during testing. Initial work on document classification employed off-the-shelf image classifiers (Jain and Wigington, 2019; Bakkali et al., 2020) and models pre-trained on ImageNet (Deng et al., 2009) or similar datasets. These methods struggle to label documents having similar layouts but different text contexts. Later, focus shifted towards language models (Li et al., 2021a; Lee et al., 2022) and multimodality models (Bakkali et al., 2020; Xu et al., 2021; Lee et al., 2021; Wang et al., 2022a). These models also incorporated layout information obtained from optical character recognition (OCR). Therefore, the performance of these methods, particularly transformer-like models, degrades due to the imperfection of the OCR engine, such as errors in parsed text or the order of tokens sequence. 
Almost all of these methods tried to improve the performance on the in-distribution test set, neglecting the generalization for real-world applications. To confirm, recently (Larson et al., 2022) collected an OOD version of RVLCDIP dataset (Harley et al., 2015) and evaluated several image and multi-modal classifiers. However, none of them performed well on the OOD dataset. Our method, called GVdoc (for Graph-based Visual DOcument Classification), studies docu- ∗The work was partially done during an externship at IBM T. J. Watson Research Center. † was at IBM T. J. Watson Research Center. 5342 ![1_image_0.png](1_image_0.png) ment classification as a graph classification problem, where we take text words as nodes and the relationship between words as edges in a graph. We generate a document-level graph using that layout information from OCR (see Figure 1) and learn the embedding using graph neural networks (GNNs). GVdoc is more robust to changes in the test set; hence it shows improved performance on out-ofdistribution data. We make the following contributions: - We introduce graph-based document modeling that leverages both (potentially noisy) reading order and spatial layout in graph construction, and learns embeddings using GNNs. - We empirically show that compared with other systems, our model is better able to generalize to test data drawn from a different distribution than the training data. ## 2 Related Work Visual Document Classification CNNs have achieved excellent performance on natural scene images, so they became the first obvious choice for visual document classification (Das et al., 2018; Jain and Wigington, 2019; Bakkali et al., 2020). However, documents have overlapping intra-class visual and structural characteristics (Bakkali et al., 2020), which makes visual features less discriminative for classification. The semantics of text in the document and the layout are essential to understand the visual documents. A second line of work studies document classification as a sequence classification problem (Lee et al., 2022; Li et al., 2021a; Wang et al., 2022a). They follow language modeling strategies, but aside from text, they also incorporate layout information. Such approaches parse text and layout information by applying OCR on document images. Then, they train transformer-like models. StructuralLM (Li et al., 2021a) adds text and layout embeddings and trains a transformer model (similar to BERT (Devlin et al., 2018)) on specialized pretraining tasks. Some of the recent works employ multi-modal features including visual, text and layout (Xu et al., 2021; Peng et al., 2022; Lee et al., 2021). These models train a single transformer on concatenations of text and visual tokens (Xu et al., 2021) or train a separate transformer branch for both text and visual modalities (Peng et al., 2022). The methods that utilize text consider serialized tokens from OCR as an input, so their performance varies with the correctness of the OCR engine. For examples, if we replace the proprietary Microsoft Azure OCR in LayoutLMv2 (Xu et al., 2021) with Tesseract 1, an open source OCR, its performance drops for visual document classification (Larson et al., 2022). Transformer-based models consider input sequence based on OCR reading order (Xu et al., 2021; Li et al., 2021a), which may not reflect tokens in their actual reading order (Lee et al., 2021, 2022). Therefore, a few recent studies model the document as a graph by suggesting several possible edge types. Zhang et al. 
(2020) proposed k-Nearest Neighbors graphs, but these may contain connections with isolated tokens. Fully connected graphs employed by (Liu et al., 2019; Yu et al., 2021) do not leverage the sparsity of the document, hence their approach is similar to transformers. On the other hand, (Cheng et al., 2020) relied on a proprietary OCR technology to identify "text fields", then utilized a 360-degree line-ofsight (LoS) graph. We initially used LoS graphs but that did not show very good performance. FormNet (Lee et al., 2022) models a document as a graph using a β-skeleton graph (Kirkpatrick and Radke, 1985) and tries to minimize the serialization error by learning localized Super-Token embeddings using graph convolutions before a transformer. However, they used ETC Transformer (Ainslie et al., 2020) for schema learning from GCN-encoded structure-aware Super-Tokens. Our approach differs from prior graph-based work in two important ways: graph generation and 1https://github.com/tesseract-ocr/tesseract learning embeddings. Our unique document-level sparse graph incorporates **both** spatial layout and OCR reading order, leveraging the document's sparsity and making our model less sensitive to common mistakes in OCR reading order. Moreover, we solely use a GNN to learn embeddings. Thus, we do not require a transformer component, making our approach more memory-efficient than models that incorporate a transformer (Lee et al., 2022; Wei et al., 2020; Yu et al., 2021). Our approach also uses more expressive edge embeddings than that of Liu et al. (2019). Feature fusion Initial research simply added together the text and layout embedding (Xu et al., 2021; Hong et al., 2022), incorporated position bias in attention mechanism (Garncarek et al., 2021; Powalski et al., 2021), designed cross-modality attention layers (Wang et al., 2022a; Peng et al., 2022; Li et al., 2021b), and explored 1D position and 2D layout aware attention weights using a disentangled matrix (Peng et al., 2022). LiLT (Wang et al., 2022a) adds attention weights from layout and text embeddings and updates both types of embeddings through two separate transformers. However, adding attention weights does not fully leverage the cross-domain features. SelfDoc (Li et al., 2021b) took the Value (V) of one modality as Key (K) for the other modality while computing crossattention in transformer layers to learn dependency between language and vision features. Finally, it added features of both text and visual modalities. ## 3 Gvdoc Document Graph We now describe our approach for representing document using both textual and layout features. We represent a document D as a graph where each token is a node and edges reflect the spatial relationship between them. Nodes We define vertices for all tokens as V = {v1, v2*, ..., v*N } where features of vi are a fusion of the text and layout embeddings defined later in Equation (5). In addition, we define a virtual super node that summarizes the graph, similar to the CLS token in BERT. Edges Token sequence can be important in understanding text, but this information provided by OCR is noisy. We therefore generate edges in the document graph reflecting two types of relationships between vertices: (a) "ball-of-sight" using β-skeleton graph (Kirkpatrick and Radke, 1985) and (b) paragraph-based neighborhood. 
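To make this construction concrete, a minimal sketch is given below; both edge types are detailed in the paragraphs that follow. The function and variable names are illustrative, and the β-skeleton ("ball-of-sight") edge list is assumed to be precomputed by a separate routine.

```python
# Illustrative sketch of assembling the document graph from the two edge sources.
# `skeleton_edges` (precomputed beta-skeleton pairs) and `paragraphs` (lists of
# token indices in OCR reading order) are assumed inputs.
def build_document_graph(num_tokens, skeleton_edges, paragraphs, k=10):
    edges = {tuple(sorted(e)) for e in skeleton_edges}         # (a) ball-of-sight edges

    # (b) paragraph-based neighbourhood: connect each token to its k nearest
    # neighbours within the same paragraph, using reading order as the distance.
    for para in paragraphs:
        for pos, i in enumerate(para):
            for j in para[pos + 1: pos + 1 + k]:
                edges.add(tuple(sorted((i, j))))

    # link consecutive paragraphs: last token of one to first token of the next
    for prev, nxt in zip(paragraphs, paragraphs[1:]):
        edges.add(tuple(sorted((prev[-1], nxt[0]))))

    # virtual super node (CLS-like) attached to each paragraph's first/last token
    super_node = num_tokens
    for para in paragraphs:
        edges.add((para[0], super_node))
        edges.add((para[-1], super_node))
    return sorted(edges)
```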
A β**-skeleton graph** (Kirkpatrick and Radke, 1985) defines an edge between two bounding boxes if both intersect a circle that does not intersect any other bounding box; the resulting "ball-ofsight" graph is sparser than one using *line*-of-sight edges (Wang et al., 2022b). Lee et al. (2021, 2022) found this useful for message passing in GNNs. The **paragraph-based neighborhood** connects tokens within the same paragraph and connects paragraphs based on OCR's reading order predictions. While we could fully connect all tokens in the same paragraph, we aim to reduce computation by increasing sparsity; therefore, we add edges for each token with the k nearest neighbors within the same paragraph. Then, for each pair of paragraphs that are adjacent in the OCR's reading order, we define an edge between the last token of the prior paragraph and the first token of the following paragraph. Finally, we define a super-node and connect it with the first and last token of each paragraph, considering them as representative tokens of the paragraph. To construct the final graph, we take the union of the edges from the β-skeleton graph and the paragraph-based neighborhood as shown in Figure 1. Thus, we generate a graph that is sparse but also has enough connections for learning node embeddings through message passing in the GNN (as evident in Table 7). For the edge between connected vertices vi and vj , we define edge features by concatenating (a) distance between all four corners and centers of token bounding boxes of vi and vj , (b) absolute distance on horizontal and vertical axes, and (c) ratio of height and width. ## 4 Gvdoc Model Our GVdoc model, shown in Figure 2, consists of input embedding, feature fusion, and task-specific prediction modules. We learn node embeddings in an end-to-end fashion through various unsupervised pre-training tasks. Then, we fine-tune the model for downstream tasks. ## 4.1 Input Embedding Text embedding: Our text embedding module is similar to BERT's (Devlin et al., 2018). To get embeddings of text (T), we add token embeddings, token type embeddings, and position embeddings, ![3_image_0.png](3_image_0.png) ## Given As $$e_{t}=e_{t o k e n}(T)+e_{t y p e}(T)+e_{1p}(T)\ \ \ \ \ (1)$$ where, etoken, e*type*, e1p are token, token type and position embedding layers, respectively, and et ∈ Rdare text embeddings. Layout embedding: OCR provides text tokens (T), their bounding boxes Tbox, and paragraph-level bounding boxes Pbox. A bounding box contains coordinates of top left corner and bottom right corner, given as [(x1, y1),(x2, y2)], of a box that covers the token or paragraph. Most document AI models employ token-level bounding boxes for layout embedding that allows the models to localize the text in the layout. StructuralLM (Li et al., 2021a) divides the images into fixed-size grids and uses cell bounding boxes instead of token bounding boxes. They show that the model can encode better contextual information using cell bounding boxes. However, dividing the image into cells might put irrelevant tokens in the same cell or might put a token in two cells. To improve reading order in layout-rich documents, some of the recent approaches (Peng et al., 2022) first detect different text components in the document image and then serialize the tokens from OCR per text component. Motivated by (Peng et al., 2022), we employ text component (paragraph) level layout information for learning layout embeddings. 
We concatenate the embeddings of paragraph level bounding boxes and token level bounding boxes. Then, we use one fully connected layer to map back to the hidden dimension, ## Given As: $$e_{l}=f c(e_{t l}(T_{b o x})\parallel e_{p l}(P_{b o x}),\theta)$$ $$\left(2\right)$$ where || denotes concatenation, etl is a layout embedding layer that encodes token bounding boxes in ![3_image_1.png](3_image_1.png) dimension Rd, epl is a layout embedding layer that encodes paragraph bounding boxes in dimension Rd. Finally, both layout embeddings are concatenated to yield a R2dembedding which is mapped into Rdthrough a fully connected layer. Thus, our layout embeddings el contain the coarse and fine-grained location of the tokens based on the document layout. ## 4.2 Feature Fusion Module Our cross-attention module is similar to the crossattention layer in (Li et al., 2021b), except that we explicitly compute the value representation (V) for both modalities (text and layout) by linear mappings, as shown in Figure 3. Thus, our crossattention module tries to find the most relevant layout embeddings based on text attention weights and vice versa. Formally we define our cross-attention module in Equation (5). $$\begin{array}{l c r}{{\alpha_{t}^{i j}=(e_{t}^{i}W_{t}^{h Q})(e_{t}^{j}W_{t}^{h K})/\sqrt{d_{k}}}}&{{}}&{{(3)}}\\ {{\alpha_{l}^{i j}=(e_{l}^{i}W_{l}^{h Q})(e_{l}^{j}W_{l}^{h K})/\sqrt{d_{k}}}}&{{}}&{{(4)}}\\ {{v_{i}^{h}=\sum_{j\in N_{i}}\alpha_{t}^{i j}(e_{l}^{i}W_{l}^{h V})+\alpha_{l}^{i j}(e_{t}^{i}W_{t}^{h V})}}&{{}}&{{(5)}}\end{array}$$ where the superscript h represents an attention head, dk = d/H is the projection dimension (with H being the number of the attention heads), e it and e i l are text and layout embedding vectors fused into node embeddings v h i ∈ Rdk for head h. WhQ, WhK, and WhV are Rd×dk learnable weights that linearly transform embeddings into queries (Q), keys (K) and values (V), respectively. Node embeddings from all attention heads are concatenated to yield final node embeddings of dimension d. ## 4.3 Graph Learning The generation of document graph results in node features, adjacency matrix and edge features as discussed in Section 3. We chose Graph Attention Network (GAT) (Velickovi ˇ c et al. ´ , 2017) as a message passing network for learning node embeddings. The super-node is used to predict the graph (document) label. Our model is first pre-trained in a similar fashion to most of the transformer-based document AI models. We pre-train the model on the following three tasks. ## 4.3.1 Masked Language Modeling (Mlm) Mask Language Modeling (MLM) is a widely adopted pre-training task in language modeling, involving the masking of random tokens in a text with the special token *MASK*, which the model then aims to predict. Consistent with previous studies (Xu et al., 2021; Li et al., 2021a; Lee et al., 2022), we adopt a masking strategy in which 15% of the tokens are masked. Subsequently, the model learns to estimate the masked tokens based on the information provided by their neighboring tokens. ## 4.3.2 Masked Position Modeling (Mpm) Each token in the document has its associated location information, represented by a bounding box, which aids in understanding the document's layout. Inspired by the approach presented in Saha et al. (Saha et al., 2021), we randomly replace 15% of the bounding boxes with a fixed bounding box [0, 0, 0, 0]. Subsequently, the model is tasked with predicting the masked token-level bounding boxes through a regression task. 
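A small sketch of this masking step is shown below; the tensor names are illustrative assumptions rather than the released code.

```python
# Sketch of masked position modelling (MPM) target construction: 15% of the
# token bounding boxes are replaced by the fixed box [0, 0, 0, 0]; the model
# then regresses the original boxes at the masked positions.
import torch

def mask_token_boxes(token_boxes, mask_prob=0.15):
    """token_boxes: (num_tokens, 4) float tensor of [x1, y1, x2, y2] coordinates."""
    masked = torch.rand(token_boxes.size(0)) < mask_prob
    inputs = token_boxes.clone()
    inputs[masked] = 0.0                  # the fixed [0, 0, 0, 0] box
    targets = token_boxes[masked]         # regression targets for the masked positions
    return inputs, masked, targets
```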
It is important to note that we do not mask the bounding boxes at the paragraph level, allowing the model to retain access to coarse-grained layout information. As a result, the model's predictions focus solely on the fine-grained layout details while utilizing the provided coarse-grained layout information. ## 4.3.3 Cell Position Prediction (Cpp) Motivated by (Li et al., 2021a), we divide the document image into a K ×K grid. A token is assigned a cell number in which the center of its bounding box lies. Then, for each token, the model is trained to predict the specific cell within the grid to which it belongs. This task helps the model to narrow down location of tokens within the layout. ## 5 Experiments We hypothesize that our GVdoc model will be more robust to changes in the test distribution than other models. We therefore designed experiments to measure how our model performed on two tasks: (a) classifying in-domain but out-of-distribution documents, and (b) distinguishing out-of-domain documents from in-domain documents. ## 5.1 Baseline Methods For baseline comparison, we chose models that cover different architectures including CNNs (VGG-16 (Simonyan and Zisserman, 2015), GoogLeNet (Szegedy et al., 2015)), image transformers (DiT) (Li et al., 2022), and models that use language modeling (LayoutLMv2 (Xu et al., 2021), LayoutLMv3 (Huang et al., 2022)). Following (Larson et al., 2022), we compare GVdoc with above mentioned models. ## 5.2 Datasets We use the RVLCDIP (Harley et al., 2015) dataset as our in-distribution and in-domain data, then use RN and RO (Larson et al., 2022) as our out-ofdistribution and out-of-domain datasets, respectively. RVLCDIP (Harley et al., 2015) is a subset of IITCDIP (Lewis et al., 2006), consisting of scanned and noisy document images from litigation involving the American tobacco industry. The images are labeled for 16 categories including forms, newspaper, *scientific publication* and so on. The dataset has 320, 000 training samples, and 40, 000 validation and testing examples, each. We fine-tune all models in this work on RVLCDIP's training set. We will use RT to refer to RVLCDIP's test set. RVLCDIP-N (RN) (Larson et al., 2022) is an out-of-distribution but in-domain set. It contains 1, 002 documents belonging to the 12 categories of RVLCDIP dataset, making it in-domain. However, they not taken from the American tobacco industry or IIT-CDIP, so the samples are from a different distribution. RVLCDIP-O (RO) (Larson et al., 2022) was collected from Google and Bing searches and the public Document Cloud 2repository. It has 3, 415 samples, and those documents do not match with any class in RVLCDIP, i.e., they are both out-ofdistribution and out-of-domain. ## 5.3 Metrics Robustness to out-of-distribution data. To test how robust each model is to a change in distribution, we compare the model's accuracy on the RVLCDIP test set (RT) and the OOD but in-domain RN. We report both micro-accuracy, calculated as ratio of true positives to total number of samples, and macro-accuracy, calculated by averaging per-class accuracy. A robust model will maintain micro- and macro- accuracy on RN that is close to what it achieved on RT. Identifying out-of-domain data. To test models' effectiveness at identifying out-of-domain data, we follow Larson et al. (2022) in using metrics that describe the separability of confidence scores for in- and out-of- domain examples. 
A classifier that is good at identifying out-of-domain data should assign high confidence scores to its predictions for in-domain data and low confidence scores to its predictions for out-of-domain data. If we choose a confidence threshold t, we could make a binary classifier that labels all examples with confidence ≥ t in-domain and all examples with confidence < t out-of-domain; we could then calculate its accuracy, but that accuracy would depend upon our choice of t. False positive rate at 95% true positive rate (FPR95) sets t at a level that gives 95% true positives and then measures how many negative examples (out-of-distribution) are classified as positive (in-distribution). A model with a lower FPR95 value is better at differentiating in- versus out-of-distribution data. Area under the ROC curve (AUC), similarly, describes how different the confidences are for the in- and out-of-domain examples, but, as a threshold-free measure, is considered a better option (Larson et al., 2022). A high AUC score (close to 1.0) means the model assigns higher confidence scores to in-domain data and lower confidence scores to out-of-domain data. An AUC score of 0.5 means the model assigns similar confidence scores to in- and out-of-domain samples. We calculate FPR95 and AUC using two confidence measures: maximum softmax probability and energy score.

**Maximum Softmax Probability (MSP):** Given a model, we compute logits for an example x as z = f(x) and then apply softmax to compute the confidence score per class. For the i-th class, the confidence score is $c_i = \frac{e^{z_i}}{\sum_{j=1}^{C} e^{z_j}}$, where C is the total number of classes. MSP is the maximum of these C scores: $\mathrm{MSP} = \max\{c_i\}$.

**Energy Score:** The energy score (Liu et al., 2020) is defined as $E(z, T) = -T \log \sum_{j=1}^{C} e^{z_j/T}$, where T is a temperature parameter. For fairness, following (Larson et al., 2022), we use T = 1.

## 5.4 Experimental Setup

Given a document, we use OCR to extract text tokens, their bounding boxes, and paragraph (text entity) level bounding boxes. Proprietary OCR engines such as Microsoft Azure OCR, used by LayoutLMv2 (Xu et al., 2021), or the CLOVA OCR API3, used by BROS (Hong et al., 2022), are meticulous, but not all users have access to these tools. Thus, following (Larson et al., 2022), we use Tesseract4, an open-source OCR engine, for parsing words and their locations from document images, and then tokenize them using the BERT tokenizer. For a better starting point for training, we initialize the text embedding layers with weights from pre-trained BERT.

3https://clova.ai/ocr
4https://github.com/tesseract-ocr/tesseract

GVdoc uses an embedding dimension d = 768. That is, the dimension of our token embeddings, token bounding-box embeddings, and paragraph bounding-box embeddings is d = 768. Token and paragraph bounding-box embeddings are concatenated and mapped to final layout embeddings of dimension d = 768. Similarly, text and layout embeddings are fused using the feature fusion module to produce node embeddings of dimension d = 768. Our feature fusion module contains 4 attention heads. We use input edge features of dimension 21, which are also linearly transformed to d = 768. We use a Graph Attention Network (GAT) (Veličković et al., 2017) with 4 layers and 4 heads. We normalize the edge features before inputting them to the GAT. In our implementation of the β-skeleton graph (Kirkpatrick and Radke, 1985), we set β = 1 and consider a maximum of 25 neighbors.
For the paragraph-level graph, we connect each node to a maximum of 10 nearest neighbors within the same paragraph or text entity, utilizing OCR reading order as the distance metric. We experimented with different numbers of neighbors per text entity, including 5, 10, 15, and 20, but found that selecting 10 neighbors yielded the best performance in terms of accuracy and computational efficiency. Therefore, for all our experiments, we randomly select between 2 to 10 neighbors for each token during training, while during testing, we fix the number of neighbors to 10. The code for GVdoc is publicly available at https://github.com/ mohbattharani/GVdoc. ## 5.5 Ood But In-Domain Performance On Rn Table 1 compares the number of parameters, accuracy on RT reported by their original papers achieved by (Larson et al., 2022), and accuracy on RN (the OOD but in-domain) dataset. Based on the analysis of different models shown in Table 1, almost all previous works reported more than 90% accuracy on the RT except GoogLeNet. More importantly, when these models were tested on the out-of-distribution, in-domain dataset (RN), all the models substantially dropped in accuracy. The original LayoutLMv2 (Xu et al., 2021) utilized the proprietary Microsoft Azur OCR. As a result, when it was evaluated on text parsed using Tesseract OCR, its accuracy on the test set decreased by almost 7%. Furthermore, it performed poorly on the out-of-distribution (OOD) dataset, experiencing a drop of 33% on RN. Notably, the more recent LayoutLMv3 (Huang et al., 2022) exhibited improved performance compared to LayoutLMv2, but it still experienced a drop of nearly 10% on the OOD dataset. DiT appears to have the highest accuracy than the rest on the RT, yet failed to generalize. The drop in accuracy on RN by these models imply that these models might be over-fitting on in-distribution data. Compared to the top-performing models on the test set, our GVdoc model demonstrates robust performance on RN, indicating its ability to generalize well to out-of-distribution data. Table 2 showcases the per-class accuracy on RN, where GVdoc consistently achieves higher accuracy and accurately categorizes the majority of examples. Notably, our model exhibits high consistency, outperforming or matching the leading results across all classes. In contrast, the other models shows inconsistency, with accuracy dropping below 50% on at least one class. Specifically, for the "Specification" class, our model outperforms all models except LayoutLMv3 (Huang et al., 2022). Moreover, our model achieves nearly 20% higher accuracy than DiT, despite DiT having almost twice the number of parameters as GVdoc. This highlights the effectiveness and efficiency of our model in achieving superior performance. ## 5.6 Ood And Out-Of-Domain Results On Ro Here, we compare AUC scores on RT versus RO (T-O), and RN versus RO (N-O) using three metrics: (a) AUC using Maximum Softmax Probability (MSP), (2) AUC using Energy function, and (3) FPR95. These metrics investigate the ability of a model to differentiate between in- and outdistribution data. RN vs RO (N-O): Table 3 compares AUC scores on the out-of-distribution dataset RN versus RO using MSP and energy metrics. The models are trained on the RVLCDIP training set and tested on out-of-distribution datasets − RN and RO. Then their maximum soft-max probability (MSP) and energy function based AUC scores are compared. 
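The sketch below (with assumed variable names) shows how such a comparison can be computed: MSP and energy confidences from a model's logits, followed by the AUC and FPR95 between an in-distribution and an out-of-distribution set.

```python
# Illustrative sketch of the separability metrics used in this comparison.
# `id_logits` / `ood_logits` are assumed NumPy arrays of per-example class logits.
import numpy as np
from scipy.special import logsumexp, softmax
from sklearn.metrics import roc_auc_score, roc_curve

def msp(logits):
    return softmax(logits, axis=-1).max(axis=-1)

def energy_confidence(logits, T=1.0):
    # E(z, T) = -T * log sum_j exp(z_j / T); negate it so higher = more in-distribution
    return T * logsumexp(logits / T, axis=-1)

def auc_and_fpr95(id_scores, ood_scores):
    y = np.concatenate([np.ones_like(id_scores), np.zeros_like(ood_scores)])
    s = np.concatenate([id_scores, ood_scores])
    auc = roc_auc_score(y, s)
    fpr, tpr, _ = roc_curve(y, s)
    fpr95 = fpr[np.searchsorted(tpr, 0.95)]   # false positive rate at 95% TPR
    return auc, fpr95
```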
Ideally, N-O should be more challenging as it compares in-distribution and out-of-distribution datasets (Larson et al., 2022). Among previous approaches, DiT (Li et al., 2022) has the highest test accuracy, and its micro and macro AUC scores using MSP are higher than those of VGG16, GoogLeNet, and LayoutLMv2. However, our GVdoc model outperforms DiT by 24 points on micro AUC and almost 17 points on macro AUC with MSP. Furthermore, although LayoutLMv3 (Huang et al., 2022) exhibits a test accuracy similar to that of DiT, our model surpasses it. Specifically, GVdoc outperforms LayoutLMv3 by almost 13 points on micro AUC and 9 points on macro AUC with MSP. Micro- and macro-AUC scores using the En- Model Micro Macro Budget Email Form Handwritten Invoice Letter memo News Article Questionnaire resume Scientific Pub Specification ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) VGG-16 66.8 69.1 79.3 84.8 74.3 40.3 73.7 90.1 55.3 68.6 71.8 69.6 97.4 23.0 GoogLeNet 60.2 61.05 77.59 81.81 70.0 44.88 43.86 81.56 55.32 61.63 51.28 60.87 92.31 11.47 DiT 78.6 80.5 86.2 **97.0** 91.4 62.4 86 **95.4** 72.3 **84.9** 82.1 73.4 92.3 41.0 LayoutLMv2 55.6 60 89.7 84.8 52.9 26.1 33.3 83.6 51.1 51.2 76.9 56.5 92.3 16.4 LayoutLMv3 82.4 83.8 91.2 90.62 **91.9** 25.7 92.3 92.5 76.2 77.8 **97.8** 95.3 97.9 **76.8** GVdoc 89.9 89.1 **98.3** 82.8 89.9 85.1 **96.7** 95.1 **87.9** 81.9 97.4 **97.3** 97.4 61.7 Table 2: The per-class accuracy scores on RN (OOD but in-domain dataset) for each document classification model demonstrate the superior performance of GVdoc across various classes. Our model consistently achieves higher accuracy, outperforming or matching the best model on 10 classes and ranking as the second-best on 3 classes. Model MSP Energy Micro Macro Micro Macro VGG-16 0.649 0.706 0.648 0.707 GoogLeNet 0.592 0.679 0.587 0.689 DiT 0.728 0.780 0.753 0.792 LayoutLMv2 0.620 0.717 0.643 0.716 LayoutLMv3 0.755 0.807 0.755 0.807 GVdoc 0.865 0.888 0.997 **0.999** Table 3: AUC scores (higher better): RN versus RO. Model MSP Energy Micro Macro Micro Macro VGG-16 0.916 0.858 0.912 0.845 GoogLeNet 0.947 0.869 0.943 0.845 DiT 0.847 0.704 0.843 0.685 LayoutLMv2 0.932 0.848 0.939 0.847 LayoutLMv3 0.839 0.618 0.834 0.611 GVdoc 0.650 0.516 0.002 **0.003** Model MSP Energy ![7_image_2.png](7_image_2.png) ![7_image_3.png](7_image_3.png) ![7_image_4.png](7_image_4.png) ![7_image_5.png](7_image_5.png) Micro Macro Micro Macro VGG-16 0.881 0.895 0.922 0.930 GoogLeNet 0.838 0.859 0.847 0.869 DiT 0.893 0.902 0.888 0.902 LayoutLMv2 0.842 0.875 0.849 0.891 LayoutLMv3 0.817 0.889 0.817 0.889 GVdoc 0.898 0.907 0.955 **0.951** Table 5: AUC scores (higher better): RT versus RO. Model MSP Energy Micro Macro Micro Macro VGG-16 0.649 0.533 0.465 0.391 GoogLeNet 0.748 0.620 0.665 0.560 DiT 0.587 **0.463** 0.499 0.417 LayoutLMv2 0.717 0.592 0.753 0.574 LayoutLMv3 **0.578** 0.531 0.576 0.528 GVdoc 0.593 0.488 0.250 **0.233** ergy function do not follow the trend. GoogLeNet achieved the lowest test accuracy and has the lowest Energy AUC scores. Although VGG-16 has higher test accuracy than LayoutLMv2, it is almost 2 points lower on the Micro AUC energy score. Nevertheless, VGG-16 is almost 2 points better on the Macro AUC energy score. DiT and LayoutLMv3 have similar micro and macro scores. GVdoc achieves the highest micro- and macro-AUC scores using energy suggesting that it can effectively differentiate between the in-distribution and out-distribution datasets. 
Table 4 compares FPR95 scores where a model with lower score is considered better. Micro FPR95 with MSP is in low 0.90's for all the models except LayoutLMv3, DiT and ours. Unlike rest of the models, energy-based FPR95 scores for our model are almost perfect i.e., close to zero. This is evident from the distribution of energy scores in Figure 8 (see Appendix). Overall, GVdoc has lower FPR95 scores compared to the other models. Furthermore, the ROC curves in Figure 5 (see Appendix) confirm that our model can effectively differentiate negative (out-of-distribution) from positive (in-distribution) data. More details are discussed in Appendix A.3. RT vs RO (T-O): Table 5 analyzes the AUC scores of the RT versus out-domain RO data. All models in the study have MSP-based AUC scores ranging from 0.8 to 0.9. While DiT has the highest test accuracy among baselines, its MSP AUC scores are slightly lower than our model. Additionally, DiT falls behind in terms of energy-based AUC scores. Although LayoutLMv3 outperforms its predecessor, LayoutLMv2, in terms of macro MSP and energy scores, it is still unable to surpass DiT. However, GVdoc consistently outperforms all others in the study. Table 6 presents the FPR95 scores on RT versus RO. In terms of MSP-based FPR95, there is no fixed trend, yet our GVdoc model achieves the second-best FPR95 score based on Macro MSP. In terms of energy-based FPR95, GVdoc outper- ![8_image_1.png](8_image_1.png) forms the rest. VGG-16 achieves a better Micro FPR95 score, whereas GVdoc is 0.146 points better than VGG-16 in terms of Macro FPR95. Although VGG-16 has lower test accuracy than DiT, its energy-based AUC and FPR95 scores are better than DiT. Overall, GVdoc consistently performs the best in terms of AUC scores and energy-based FPR95, but it is the second-best in MSP-based Macro FPR95. To further investigate this, we plot MSP scores on RO for different models in Figure 4. We can see that our GVdoc model predicts lower confidence scores for out-domain data samples. Figure 6 (see Appendix) demonstrates that the predicted confidence scores for RN and RT are close to 1.0 for most of the examples. By selecting the proper threshold on confidence scores, we can correctly differentiate between in-domain versus out-domain, and in-distribution versus out-of-distribution data with our model. ROC curves in Figure 5 (see Appendix A.3) show that GVdoc is equivalent or even better than the other models. ## 5.7 Ablation Study Effect of graph generation methods As an ablation study, we compare the effect of different graph generation methods for visual documents. Table 7 demonstrates the importance of the β skeleton graph for document classification. Regardless graph generation method, classification accuracy on RT is almost the same. But, using only paragraph-level graphs (based on OCR reading order), the methods struggle to perform well on RN. ![8_image_0.png](8_image_0.png) However, our global graph, which combines both β skeleton and paragraph-level-graph, achieves the best accuracy on RT and RN. ## Number Of The Maximum Neighbors Per Token in graph As discussed in Section 5.4, we discard neighbors from the paragraph-level graph to make it sparse. We constraint maximum degree per node during training. For testing, we select a fixed number of neighbors per token (degree per node). Table 8 demonstrates that reducing the edges during training makes the model robust to the number of neighbors per token. Therefore, our GVdoc model shows the best performance on OOD data. 
![8_image_2.png](8_image_2.png) ## 6 Conclusion In this paper, we address the limitation of existing visual document classification models by modeling a document as a graph and learning its embeddings using a graph attention network. By defining two types of edges (β skeleton and paragraph-based), we leverage the benefit of layout information while minimizing the effects of the errors from OCR reading order. Thus, effectively embracing coarse and fine-grained layout information, GVdoc generalizes better for different layouts. While most visual document classifiers tend to perform well on in-distribution data, they fail or struggle on outof-distribution data; our model does not drop its performance on OOD data. Through experiments, we demonstrate the generalization of our model on out-of-distribution data. ## 7 Limitations - We employed Tesseract OCR, an open-source OCR system, which can sometimes make errors in text detection and recognition. However, commercially available OCR engines such as Microsoft Azure OCR are more proficient in detecting text and layout from visual documents. OCR errors can propagate during training and affect the model's performance. For instance, we observed that when Tesseract OCR was used instead of Microsoft Azure OCR, LayoutLMv2 (Xu et al., 2021) experienced a 7% decrease in performance. - Our model relies on textual and layout features, neglecting the visual component. Various works (Li et al., 2021b; Xu et al., 2021) have already witnessed improvements by utilizing visual features along with textual and layout features. We plan to investigate integration of visual features. ## References Joshua Ainslie, Santiago Ontañón, Chris Alberti, Philip Pham, Anirudh Ravula, and Sumit Sanghai. 2020. Etc: Encoding long and structured data in transformers. Souhail Bakkali, Zuheng Ming, Mickaël Coustaty, and Marçal Rusiñol. 2020. Visual and textual deep feature fusion for document image classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 562–563. Mengli Cheng, Minghui Qiu, Xing Shi, Jun Huang, and Wei Lin. 2020. One-shot text field labeling using attention and belief propagation for structure information extraction. In *Proceedings of the 28th ACM* International Conference on Multimedia, pages 340– 348. Arindam Das, Saikat Roy, Ujjwal Bhattacharya, and Swapan K Parui. 2018. Document image classification with intra-domain transfer learning and stacked generalization of deep convolutional neural networks. In *2018 24th international conference on pattern* recognition (ICPR), pages 3180–3185. IEEE. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Łukasz Garncarek, Rafał Powalski, Tomasz Stanisławek, Bartosz Topolski, Piotr Halama, Michał Turski, and Filip Gralinski. 2021. Lambert: ´ layout-aware language modeling for information extraction. In International Conference on Document Analysis and Recognition, pages 532–547. Springer. Adam W Harley, Alex Ufkes, and Konstantinos G Derpanis. 2015. Evaluation of deep convolutional nets for document image classification and retrieval. 
In 2015 13th International Conference on Document Analysis and Recognition (ICDAR), pages 991–995. IEEE. Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, and Sungrae Park. 2022. Bros: A pre-trained language model focusing on text and layout for better key information extraction from documents. In *Proceedings of the AAAI Conference* on Artificial Intelligence, volume 36, pages 10767– 10775. Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, and Furu Wei. 2022. Layoutlmv3: Pre-training for document ai with unified text and image masking. In Proceedings of the 30th ACM International Conference on Multimedia, pages 4083–4091. Rajiv Jain and Curtis Wigington. 2019. Multimodal document image classification. In *2019 International* Conference on Document Analysis and Recognition (ICDAR), pages 71–77. IEEE. David G Kirkpatrick and John D Radke. 1985. A framework for computational morphology. In *Machine Intelligence and Pattern Recognition*, volume 2, pages 217–248. Elsevier. Stefan Larson, Gordon Lim, Yutong Ai, David Kuang, and Kevin Leach. 2022. Evaluating out-ofdistribution performance on document image classifiers. In *Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks* Track. Stefan Larson, Navtej Singh, Saarthak Maheshwari, Shanti Stewart, and Uma Krishnaswamy. 2021. Exploring out-of-distribution generalization in text classifiers trained on tobacco-3482 and rvl-cdip. In International Conference on Document Analysis and Recognition, pages 416–423. Springer. Chen-Yu Lee, Chun-Liang Li, Timothy Dozat, Vincent Perot, Guolong Su, Nan Hua, Joshua Ainslie, Renshen Wang, Yasuhisa Fujii, and Tomas Pfister. 2022. Formnet: Structural encoding beyond sequential modeling in form document information extraction. *arXiv preprint arXiv:2203.08411*. Chen-Yu Lee, Chun-Liang Li, Chu Wang, Renshen Wang, Yasuhisa Fujii, Siyang Qin, Ashok Popat, and Tomas Pfister. 2021. Rope: reading order equivariant positional encoding for graph-based document information extraction. *arXiv preprint arXiv:2106.10786*. David Lewis, Gady Agam, Shlomo Argamon, Ophir Frieder, David Grossman, and Jefferson Heard. 2006. Building a test collection for complex document information processing. In *Proceedings of the 29th* annual international ACM SIGIR conference on Research and development in information retrieval, pages 665–666. Chenliang Li, Bin Bi, Ming Yan, Wei Wang, Songfang Huang, Fei Huang, and Luo Si. 2021a. Structurallm: Structural pre-training for form understanding. *arXiv* preprint arXiv:2105.11210. Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Chaoxi Zhang, and Furu Wei. 2022. Dit: Self-supervised pre-training for document image transformer. Proceedings of the 30th ACM International Conference on Multimedia. Peizhao Li, Jiuxiang Gu, Jason Kuen, Vlad I Morariu, Handong Zhao, Rajiv Jain, Varun Manjunatha, and Hongfu Liu. 2021b. Selfdoc: Self-supervised document representation learning. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5652–5660. Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. 2020. Energy-based out-of-distribution detection. Advances in Neural Information Processing Systems, 33:21464–21475. Xiaojing Liu, Feiyu Gao, Qiong Zhang, and Huasha Zhao. 2019. Graph convolution for multimodal information extraction from visually rich documents. arXiv preprint arXiv:1903.11279. Qiming Peng, Yinxu Pan, Wenjin Wang, Bin Luo, Zhenyu Zhang, Zhengjie Huang, Teng Hu, Weichong Yin, Yongfeng Chen, Yin Zhang, et al. 2022. 
Ernielayout: Layout knowledge enhanced pre-training for visually-rich document understanding. *arXiv* preprint arXiv:2210.06155. Rafał Powalski, Łukasz Borchmann, Dawid Jurkiewicz, Tomasz Dwojak, Michał Pietruszka, and Gabriela Pałka. 2021. Going full-tilt boogie on document understanding with text-image-layout transformer. In International Conference on Document Analysis and Recognition, pages 732–747. Springer. Anik Saha, Catherine Finegan-Dollak, and Ashish Verma. 2021. In *Proceedings of The Second Document Intelligence Workshop at ACM SIGKDD Conference on Knowledge Discovery Data Mining (Document Intelligence Workshop at KDD)*. [link]. Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–9. Petar Velickovi ˇ c, Guillem Cucurull, Arantxa Casanova, ´ Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. *arXiv preprint* arXiv:1710.10903. Jiapeng Wang, Lianwen Jin, and Kai Ding. 2022a. LiLT: A simple yet effective language-independent layout transformer for structured document understanding. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 7747–7757, Dublin, Ireland. Association for Computational Linguistics. Renshen Wang, Yasuhisa Fujii, and Ashok C. Popat. 2022b. Post-OCR Paragraph Recognition by Graph Convolutional Networks. In 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 2533–2542. Conference Name: 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) ISBN: 9781665409155 Place: Waikoloa, HI, USA Publisher: IEEE. Mengxi Wei, Yifan He, and Qiong Zhang. 2020. Robust layout-aware IE for visually rich documents with pretrained language models. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2367–2376. Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, et al. 2021. Layoutlmv2: Multi-modal pre-training for visually-rich document understanding. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics (ACL) 2021. Wenwen Yu, Ning Lu, Xianbiao Qi, Ping Gong, and Rong Xiao. 2021. Pick: processing key information extraction from documents using improved graph learning-convolutional networks. In *2020 25th International Conference on Pattern Recognition (ICPR)*, pages 4363–4370. IEEE. Shi-Xue Zhang, Xiaobin Zhu, Jie-Bo Hou, Chang Liu, Chun Yang, Hongfa Wang, and Xu-Cheng Yin. 2020. Deep relational reasoning graph network for arbitrary shape text detection. In *Proceedings of the* IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9699–9708. ## A Appendix A.1 Training Details We pretrained the model on IITCDIP for one epoch on 64 Tesla V100 GPUs (8 nodes with 8 GPUs per node) with batch size 128 (2 per GPU). We fine tuned the model on RVLCDIP for 100 epochs on 8 GPUs with batch size of 32 (4 per GPU). We used *AdamW* optimizer with initial learning rate of 0.001 and weight decay of 0.1, for both pretraining and fine-tuning. 
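For concreteness, the optimizer configuration stated above corresponds to something along the following lines. This is a hypothetical PyTorch sketch: the placeholder model below only stands in for the GVdoc classifier (the real model definition is not reproduced here), and only the hyper-parameters quoted in A.1 are taken from the paper.

```python
import torch

# Placeholder module standing in for the GVdoc graph attention classifier
# (illustrative only; not the actual model implementation).
model = torch.nn.Linear(768, 16)  # 768-d embeddings (A.2); RVL-CDIP has 16 document classes

# A.1: AdamW with initial learning rate 0.001 and weight decay 0.1,
# used for both pre-training and fine-tuning.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.1)

# Fine-tuning schedule from A.1: 100 epochs on RVL-CDIP with an effective
# batch size of 32 (4 samples per GPU across 8 GPUs).
num_epochs = 100
effective_batch_size = 4 * 8
```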
## A.2 Ablation Study Effect of embedding dimensions: Table 9 compares the different values for the embedding dimension d. The lowest embedding dimension (d = 128) does not have enough information for generalization. Comparing the performance on RN vs. RT, we see that using d = 128 results in a drop in performance on RN. However, for larger values, starting at d = 256, we have see better performance on RN vs. RT. We obtain better scores on RT and OOD RN for d = 768. Therefore, d = 768 is default embedding dimension for GVdoc. | d | Micro RT | Macro RT | Micro RN | Macro RN | |-----|------------|------------|------------|------------| | 128 | 86.78 | 86.67 | 85.01 | 85.16 | | 256 | 86.26 | 86.20 | 88.16 | 88.12 | | 768 | 87.60 | 87.36 | 89.90 | 89.12 | Table 9: The effect of embedding dimension: Increasing d has a positive impact on generalization. ## A.3 Roc Curve Figure 5 (left) compares Receiver Operating Characteristic curves (ROC) for in-domain RN versus out-domain RO denoted as (N-O). ROC curve of our GVdoc model is significantly better than the rest of the models. GoogLeNet has AUC score 0.59 and ROC curve close to 0.5 indicates it can not differentiate between in- and out-domain data. For RT versus RO (T-O), DiT has AUC score of 0.89 whereas our model has 0.9. The ROC curve in Figure 5 (right) demonstrates that GVdoc is close to DiT. Moreover, it has better AUC score and ROC curve than LayoutLMv2 and GoogLeNet for T-O. Overall, GVdoc can effectively differentiate between in-domain and out-of-domain data. ## A.4 Distribution Of Confidence Scores We plot prediction confidence scores in Figure 6 for RT and RN, respectively. Similar trends suggest that all model have similar confidence score on both datasets. However, Figure 7 demonstrates that our model predicts lower confidence scores on RO suggesting that it is not certain in classifying OOD samples. Whereas, DiT assigns higher confidence to fewer examples, hence incorrectly classifies them into specific class. ## A.5 Distribution Of Energy Scores When we compare the distribution of energy scores for different models, our GVdoc model has a clear separation between energy scores for RN-RO and RT-RO, as shown in Figure 8. However, from Figure 9, it is hard to differentiate the energy scores of the positive (in-distribution) and negative (outof-distribution) data samples for the DiT model. The energy scores of VGG-16 for RN and RO in Figure 10 are similar, whereas energy scores for RT-RO are clearly separable. ## A.6 Sample Document Graph Figure 11 shows an example of a combined graph ![11_image_0.png](11_image_0.png) constructed by merging both the β skeleton and OCR-based paragraph-level graph. ![12_image_0.png](12_image_0.png) ![12_image_1.png](12_image_1.png) ![12_image_2.png](12_image_2.png) ![13_image_0.png](13_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 7 ✗ A2. Did you discuss any potential risks of your work? There is no risk. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? 
For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 5 ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? The used packages are pretty much standard. If required we will add in camera ready version. ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
li-etal-2023-sequence-sequence
A Sequence-to-Sequence&Set Model for Text-to-Table Generation
https://aclanthology.org/2023.findings-acl.330
Recently, the text-to-table generation task has attracted increasing attention due to its wide applications. In this aspect, the dominant model formalizes this task as a sequence-to-sequence generation task and serializes each table into a token sequence during training by concatenating all rows in a top-down order. However, it suffers from two serious defects: 1) the predefined order introduces a wrong bias during training, which highly penalizes shifts in the order between rows; 2) the error propagation problem becomes serious when the model outputs a long token sequence. In this paper, we first conduct a preliminary study to demonstrate the generation of most rows is order-insensitive. Furthermore, we propose a novel sequence-to-sequence&set text-to-table generation model. Specifically, in addition to a text encoder encoding the input text, our model is equipped with a table header generator to first output a table header, i.e., the first row of the table, in the manner of sequence generation. Then we use a table body generator with learnable row embeddings and column embeddings to generate a set of table body rows in parallel. Particularly, to deal with the issue that there is no correspondence between each generated table body row and target during training, we propose a target assignment strategy based on the bipartite matching between the first cells of generated table body rows and targets. Experiment results show that our model significantly surpasses the baselines, achieving state-of-the-art performance on commonly-used datasets.
# A Sequence-To-Sequence&Set Model For Text-To-Table Generation Tong Li1,2∗Zhihao Wang1,2∗Liangying Shao1 Xuling Zheng1,2† Xiaoli Wang1 Jinsong Su1,2† 1School of Informatics, Xiamen University, China 2Key Laboratory of Digital Protection and Intelligent Processing of Intangible Cultural Heritage of Fujian and Taiwan (Xiamen University), Ministry of Culture and Tourism, China {litong, zhwang, liangyingshao}@stu.xmu.edu.cn {xlzheng, xlwang, jssu}@xmu.edu.cn ## Abstract Recently, the text-to-table generation task has attracted increasing attention due to its wide applications. In this aspect, the dominant model (Wu et al., 2022) formalizes this task as a sequence-to-sequence generation task and serializes each table into a token sequence during training by concatenating all rows in a topdown order. However, it suffers from two serious defects: 1) the predefined order introduces a wrong bias during training, which highly penalizes shifts in the order between rows; 2) the error propagation problem becomes serious when the model outputs a long token sequence. In this paper, we first conduct a preliminary study to demonstrate the generation of most rows is order-insensitive. Furthermore, we propose a novel sequence-to-sequence&set text-to-table generation model. Specifically, in addition to a *text encoder* encoding the input text, our model is equipped with a *table header* generator to first output a table header, i.e., the first row of the table, in the manner of sequence generation. Then we use a *table body generator* with learnable row embeddings and column embeddings to generate a set of table body rows in parallel. Particularly, to deal with the issue that there is no correspondence between each generated table body row and target during training, we propose a target assignment strategy based on the bipartite matching between the first cells of generated table body rows and targets. Experiment results show that our model significantly surpasses the baselines, achieving state-of-the-art performance on commonlyused datasets.1 ## 1 Introduction Text-to-table generation is a task that aims to generate a tabular description of important information ![0_image_0.png](0_image_0.png) Figure 1: An example of text-to-table task. The input text is a report of a basketball game. In the existing dominant model (Wu et al., 2022), the output table is serialized into a token sequence during training by concatenating all rows in a top-down order. Here, ⟨s⟩ token is used to separate the cells of each row, ⟨n⟩ token is utilized to separate rows, and ⟨ ⟩ token means an empty cell. Unlike Wu et al. (2022), in this work, we model the generation of each table as a table header and then a set of table body rows. Note that these rows can be further decomposed into the first column and data cells wrapped in the red box. for a given text. As shown in Figure 1, the input text is a post-game summary of a basketball game, and the output is a table containing statistics about players. Usually, this task can be widely used to extract important structured information, such as restaurant reviews (Novikova et al., 2017), team and player statistics (Wiseman et al., 2017), Wikipedia infoboxes (Bao et al., 2018) and biographies (Lebret et al., 2016), benefiting humans understand the input text more intuitively. In this aspect, Wu et al. (2022) first propose sequence-to-sequence (seq2seq) text-to-table generation models. They first serialize each table to a token sequence by concatenating all rows in a topdown order. 
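To make this serialization concrete, here is a small illustrative sketch (not the authors' code) of flattening a table into a token sequence in the style of Figure 1, where ⟨s⟩ separates cells, ⟨n⟩ separates rows, and ⟨ ⟩ marks an empty cell; the exact spacing and boundary tokens in the original implementation may differ.

```python
def serialize_table(rows):
    """Flatten a table (a list of rows, each a list of cell strings) into one
    sequence: cells joined with <s>, rows joined with <n>, empty cells as < >.
    The ASCII <s>/<n>/< > here stand for the special tokens in Figure 1."""
    serialized_rows = []
    for row in rows:
        cells = [cell if cell.strip() else "< >" for cell in row]
        serialized_rows.append(" <s> ".join(cells))
    return " <n> ".join(serialized_rows)

# Toy table in the spirit of Figure 1 (values are illustrative only).
table = [
    ["", "Points", "Total rebounds"],   # table header
    ["Stephen Curry", "30", "6"],       # table body rows
    ["Kevin Durant", "25", ""],
]
print(serialize_table(table))
# < > <s> Points <s> Total rebounds <n> Stephen Curry <s> 30 <s> 6 <n> Kevin Durant <s> 25 <s> < >
```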
Back to Figure 1, they represent each table row as a cell sequence, and then represent the entire table by concatenating all rows. Then, they train seq2seq models fine-tuned from BART (Lewis et al., 2020). During inference, the model generates a table in a token-by-token manner and the generated sequence is eventually split by ⟨s⟩ and ⟨n⟩ to obtain the structured table. Despite some success, the above-mentioned sequence generation manner leads to two defects in the model. **First**, imposing the above-mentioned predefined order on generating rows in the dataset may bring wrong bias to the model training (Ye et al., 2021; Lu et al., 2022a). Here, we still take Figure 1 as an example. Each row represents the statistics of a basketball player, and there is no obvious dependency between the statistics of different players. Thus, when considering the order of generating rows, the inconsistent order between generated rows and targets will cause a large training loss, even if the generated rows and targets are exactly the same. **Second**, as the number of generated rows increases, the outputted token sequence becomes longer, which makes the seq2seq models encounter the serious error propagation problem (Ye et al., 2021; Tan et al., 2021). Besides, the seq2seq model generates a table autoregressively, of which time complexity is the number of rows times the number of columns, resulting in inefficient GPU acceleration. In this paper, we first conduct a preliminary study to inspect the effect of row generation order on seq2seq models. Specifically, we randomly reorder table body rows to construct different training datasets. Then, we use these datasets to train seq2seq models, of which performance is compared on the same dataset. Experimental results show that these models exhibit similar performance, proving that the generation of most table body rows is order-insensitive. Moreover, we propose a novel sequence-tosequence&set (Seq2Seq&set) text-to-table generation model which decomposes the table generation into two steps: generating a table header, i.e., the first row of the table, and then a set of table body rows. As shown in Figure 2, our model mainly consists of three modules: 1) *Text Encoder*. It is a vanilla Transformer encoder, encoding the input document into hidden states; 2) *Table Header Generator* that produces the table header as a token sequence; 3) *Table Body Generator* generating different table body rows in parallel, where each row is generated token by token. To generate different rows from the same text, we equip the generators with a set of learnable *Row Embeddings*. Besides, we add a set of learnable *Column Embeddings* to enhance the semantic consistency between cells in the same column. During the model training, we need to determine the correspondence between the generated table body rows and targets, so as to achieve the orderindependent generation of table body rows. To this end, we propose to use the model to generate the first cells of table body rows. Then, we efficiently determine target assignments according to the matching results between these first cells and those of targets, which can be modeled as a bipartite matching problem and solved by the Hungarian algorithm (Kuhn, 1955). Afterwards, we calculate the training loss based on the one-to-one alignments between the generated table body rows and targets. Besides, during the model inference, we force table body generator to output the same number of cells with the previously-generated generated table header. 
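The one-to-one target assignment sketched above can be illustrated with SciPy's implementation of the Hungarian algorithm. This is only an illustrative sketch under simplifying assumptions: the cost matrix is given directly (in Section 4.4 the paper derives it from token-level probabilities of the targets' first cells), and padded ⟨∅⟩ targets would simply contribute zero-cost columns.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[m, t]: cost of aligning the m-th generated table body row with the
# t-th target row (lower is better; in the paper this is the negative sum of
# token-level probabilities of the target's first cell).
cost = np.array([
    [0.9, 0.2, 0.8],
    [0.1, 0.7, 0.6],
    [0.5, 0.4, 0.3],
])

row_idx, col_idx = linear_sum_assignment(cost)   # Hungarian algorithm (Kuhn, 1955)
assignment = dict(zip(row_idx.tolist(), col_idx.tolist()))
print(assignment)  # {0: 1, 1: 0, 2: 2}: generated row m is trained against target assignment[m]
```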
Compared with the seq2seq models (Wu et al., 2022), our model has the following advantages: 1) our model is able to not only alleviate the order bias caused by the sequence generation but also reduce the effect of error propagation on the generation of long sequences; 2) our model achieves faster generation speed since table body rows can be efficiently generated in parallel. Experiment results show that our model significantly improves the quality of the generated table, achieving state-of-the-art performance on commonly-used datasets. ## 2 Related Work Information Extraction (IE) refers to the automatic extraction of structured information such as entities, relationships between entities, and attributes describing entities from unstructured sources (Sarawagi et al., 2008). The common IE tasks include named entity recognition (NER), relation extraction (RE), event extraction (EE), etc. To achieve high-quality IE, researchers have proposed various task-specific IE methods. With the development of deep learning, researchers mainly focus on neural network based generation models, which are often seq2seq pre-trained models generating serialized structured information. Compared with traditional IE methods, these methods have achieved comparable or even superior results in RE (Zeng et al., 2018; Nayak and Ng, 2020), NER (Chen and Moschitti, 2018; Yan et al., 2021), EE (Li et al., 2021; Lu et al., 2021). Along this line, researchers resort to unified models (Paolini et al., 2021; Lu et al., 2022b) that model multiple IE tasks as the generation of sequences in a uniform format. In this work, we mainly focus on text-to-table generation that aims to generate structured tables from natural language descriptions. Note that textto-table generation can be considered as the dual task of table-to-text generation, which intends to generate a textual description conditioned on the input structured data. Usually, these structured data are represented as tables (Wiseman et al., 2017; Thomson et al., 2020; Chen et al., 2020) or sets of table cells (Bao et al., 2018; Parikh et al., 2020). Compared with traditional IE tasks, this task does not rely on predefined schemas. In this regard, Wu et al. (2022) first explore this task as a seq2seq generation task by fine-tuning a BART model (Lewis et al., 2020) for generation. By contrast, we model the generation of each table as a table header and then a set of table body rows. To this end, we propose a Seq2Seq&set text-to-table model, which can alleviate the defects caused by the sequence generation in the conventional seq2seq models. ## 3 Preliminary Study We first conduct a preliminary study to inspect the effect of row generation order on the Seq2Seq model (Wu et al., 2022). Since the table header is the first row of a table containing column names which should be generated first, so we only randomly reorder table body rows to construct different training datasets. Then, we individually train the Seq2Seq models using these datasets with the same setting as the original model, and compare their performance on the same dataset (Wiseman et al., 2017). From Table 1, we can observe that the original model and their variants exhibit similar performance. Besides, we calculate the sample standard deviation of model performance and find that all standard deviations are no more than 0.1. These results strongly demonstrate that the generation or-2Notice that in (Wu et al., 2022), they use row header F1, *column header F1* and *non-header cells F1*. 
Here, we individually rename these to the first column F1, table header F1 and *data cell F1*, so as to avoid ambiguity in descriptions. | Subset | Model | The first | Table | Data | |-----------|-----------|-------------|---------|--------| | column F1 | header F1 | cell F1 | | | | Origin | 94.71 | 86.07 | 82.97 | | | Random1 | 94.57 | 85.93 | 82.83 | | | Team | Random2 | 94.76 | 86.01 | 83.00 | | Random3 | 94.66 | 85.91 | 82.84 | | | STeam | 0.08 | 0.07 | 0.09 | | | Origin | 92.16 | 87.82 | 81.96 | | | Random1 | 92.23 | 87.66 | 81.79 | | | Player | Random2 | 92.33 | 87.69 | 81.84 | | Random3 | 92.13 | 87.85 | 81.99 | | | SPlayer | 0.09 | 0.07 | 0.10 | | ders of table body rows have a negligible effect on the model performance. In other words, the generation of table body rows is order-insensitive. ## 4 Our Model In this section, we describe our model in detail. As shown in Figure 2, our model is composed of three modules: Text Encoder, *Table Header Generator* and *Table Body Generator*. Then, we give a detailed description of the model training. ## 4.1 Text Encoder Our text encoder is used to encode input documents. It is identical to the BART (Lewis et al., 2020) encoder, consisting of Le Transformer encoder layers. The input document is first tokenized into X=x1, x2*, ..., x*|X| using a byte-level Byte-Pair Encoding (Wang et al., 2020) tokenizer. Then, text encoder iteratively updates the hidden states in the following way: $$\mathbf{A}_{e}^{(l)}=\mathrm{MultiHead}(\mathbf{H}_{e}^{(l)},\mathbf{H}_{e}^{(l)},\mathbf{H}_{e}^{(l)}),\tag{1}$$ $$\mathbf{H}_{e}^{(l+1)}=\mathrm{FFN}(\mathbf{A}_{e}^{(l)}),\tag{2}$$ where H (l) e ∈R|X|×dis the hidden states at the l-th layer, and d is the dimension of embeddings and hidden states. MultiHead(·) is a multi-head attention function and FFN(·) refers to a feed-forward network. We initialize H (0) e as the sum of Word(X) and Pos(X), where Word(·) is a word embedding 5360 ![3_image_0.png](3_image_0.png) Row Embeddings function and Pos(·) is a learnable position embedding function. Note that we omit the descriptions of layer normalization and residual connection in each sublayer. Please refer to (Vaswani et al., 2017; Lewis et al., 2020) for more details. The above process iterates Le times. Finally, we obtain the hidden states H (Le) e of the input document, which will provide useful information for both table header generator and table body generator via attention mechanisms. ## 4.2 Table Header Generator As mentioned previously, our model is designed to generate a table as a table header and a set of table body rows. To this end, we follow previous studies (Zhang et al., 2018; Su et al., 2019) to decomposes the table generation into two steps. We first propose table header generator, which is the same as the BART decoder and generates the table header Y0=y 0 1 , y0 2 , ..., y0 |Y0| as a token sequence. Given the encoder hidden states H (Le) e , our generator produces the table header in an autoregressive manner: $$\mathbf{A}_{d0}^{(l)}=\text{MultiHead}(\mathbf{H}_{d0}^{(l)},\mathbf{H}_{d0}^{(l)},\mathbf{H}_{d0}^{(l)}),\tag{3}$$ $$\overline{\mathbf{A}}_{d0}^{(l)}=\text{MultiHead}(\mathbf{A}_{d0}^{(l)},\mathbf{H}_{e}^{(Le)},\mathbf{H}_{e}^{(Le)}),$$ (4) $$\mathbf{H}_{d0}^{(l+1)}=\text{FFN}(\overline{\mathbf{A}}_{d0}^{(l)}),\tag{5}$$ where H (l) d0 ∈ R|Y0|×dis the hidden states at the l-th layer. 
In the first layer, we initialize the j-th decoder input as the sum of Word(y 0 j−1 ), Pos(y 0 j−1 ), Row(y 0 j−1 ) and Col(y 0 j−1 ), where Word(·) and Pos(·) share parameters with text encoder, row embeddings Row(·) and column embeddings Col(·) will be described in the next subsection. Finally, after iterating the above process for Ld times, we obtain the last-layer hidden states H (Ld) d0 = {H (Ld) d0,j }1≤j≤|Y0|, where H (Ld) d0,j is used to output the j-th target token: $$y_{j}^{0}=\mathrm{argmax}({\bf W}_{o}{\bf H}_{d0,j}^{(L_{d})}),\qquad\qquad(6)$$ where Wo ∈ R|V|×dis a learnable parameter matrix, and V is the target vocabulary. ## 4.3 Table Body Generator To generate a set of table body rows in parallel, we propose a novel semi-autoregression table body generator. It is also stacked with Ld layers, each of which consists of self-attention, cross-attention and feed-forward network sublayers. Particularly, it shares parameters with table header generator, so that it can directly exploit the hidden states of table header generator via selfattention. This generator produces M table body rows {Ym}1≤m≤M in parallel, under the semantic guidence of H (Le) e and {H (l) d0}1≤l≤Ld . Particularly, we use a special ⟨∅⟩ token to represent "no corresponding row". Formally, the m-th table body row, Ym = y m 1 , ym 2 , ..., ym |Ym| is generated in an auto-regressive way: $$\overline{\mathbf{H}}_{d m}^{(l)}=[\mathbf{H}_{d0}^{(l)};\mathbf{H}_{d m}^{(l)}],\tag{7}$$ $$\mathbf{A}_{d m}^{(l)}=\mathrm{MultiHead}(\mathbf{H}_{d m}^{(l)},\overline{\mathbf{H}}_{d m}^{(l)},\overline{\mathbf{H}}_{d m}^{(l)}),$$ (8) $$\overline{\mathbf{A}}_{d m}^{(l)}=\mathrm{MultiHead}(\mathbf{A}_{d m}^{(l)},\mathbf{H}_{e}^{(L_{e})},\mathbf{H}_{e}^{(L_{e})}),$$ (9) $$\mathbf{H}_{d m}^{(l+1)}=\mathrm{FFN}(\overline{\mathbf{A}}_{d m}^{(l)}),\tag{10}$$ where H (l) dm ∈ R|Ym|×dis the hidden states for generating Ym. Note that our generator exploits both hidden states of table header and previous tokens in the same row to produce a table body row. Taking the sum of word embeddings and positional embeddings as inputs, the vanilla Transformer decoder can only generate a sequence but not a set. To generate a set of table body rows in parallel, we introduce M additional learnable embeddings called *Row Embeddings* (See the green squares in Figure 2) into inputs, guiding table body generator to produce different rows. Here, M is a predefined parameter that is usually larger than the maximum number of rows in training data. Furthermore, to enhance the semantic consistency between cells in the same column, we add another learnable embeddings named *Column Embeddings* (See the blue squares in Figure 2) into inputs. Column embeddings are similar to positional embeddings but are defined at the cell level. By doing so, tokens of cells in the same column are equipped with identical column embeddings. Formally, with row and column embeddings, the initial hidden state of our generator becomes $$\begin{array}{c}{{\mathrm{H}_{d m,k}^{(0)}=\mathrm{Word}(y_{k-1}^{m})+\mathrm{Pos}(y_{k-1}^{m})}}\\ {{\qquad+\mathrm{Row}(y_{k-1}^{m})+\mathrm{Col}(y_{k-1}^{m}),}}\end{array}$$ $$(11)$$ where y m k−1 is the (k−1)-th output token at the mth table body row, and Row(·) and Col(·) are row and column embedding functions, respectively. Through Ld times of hidden state updates, we obtain the last-layer hidden states {H (Ld) dm }1≤m≤M. 
Finally, based on H (Ld) dm,k, we obtain the token with maximum probability as the output: $$y_{k}^{m}=\mathrm{argmax}(\mathbf{W}_{o}\mathbf{H}_{d m,k}^{(L_{d})}).\qquad(12)$$ In order to maintain an equal number of cells in every row, table body generator keeps generating a row until the number of ⟨s⟩ matches that in the header. ## 4.4 Training Training Loss As mentioned above, we decompose the generation of a table into two steps. Correspondingly, we define the training loss as: $${\mathcal{L}}=\lambda{\mathcal{L}}_{h}+(1-\lambda){\mathcal{L}}_{b},$$ $$(13)$$ where λ is a hyper-parameter to balance the effect of *the table header generation loss* Lh and the table body generation loss Lb. As for Lh, we follow common practice to define Lh as a cross-entropy loss between the predictive distributions of the generated table header and the target one: $${\mathcal L}_{h}=-\sum_{j=1}^{|\mathrm{Y}^{0}|}\mathrm{log}\hat{p}_{j}^{0}(y_{j}^{0}),\qquad\qquad(14)$$ where pˆ 0 j (·) is the predictive probability of the j-th token in the table header Y0 using teacher forcing. Target Assignments Based on the First Cells We also define Lb as a cross-entropy loss between the predictive distributions of the generated table body rows and targets. However, there is no correspondence between each generated table body row and target during training, and hence we can not directly calculate Lb. To deal with this issue, we learn from the recent studies on set generation (Carion et al., 2020; Ye et al., 2021; Xie et al., 2022) and propose to efficiently determine target row assignments according to the matching results between the first cells of generated table body rows and those of targets. Here, we use the first cell to represent the whole row, because it is usually unique and often contains the important information of a table such as a name and a primary key. Concretely, we first use our model to generate the first cells of all table body rows. During this process, we obtain generation probability distributions {P m}1≤m≤M, where P m = {p m k}1≤k≤|Pm| and p m kis the predictive distribution at the k-th timestep for the m-th table body row. Particularly, we pad the set of target table body rows to size M with ⟨∅⟩. Afterwards, we determine the target assignments via the bipartite matching between the generated table body rows and targets: $$=\operatorname{argmin}_{f\in\mathrm{F}(M)}\sum_{m=1}^{M}{\mathcal{C}}(\mathrm{Y}^{f(m)},\mathbf{P}^{m}),\quad\quad(15)$$ where F(M) denotes the set of all M! one-to-one mapping functions and f(·) is a function aligning 5362 ![5_image_0.png](5_image_0.png) the m-th generated table body row to the f(m)- th target one. The optimal matching can be efficiently determined with the Hungarian algorithm (Kuhn, 1955). More specifically, the matching cost C(·) takes into account the token level probability, which is defined as follows: $$\mathcal{C}(\mathrm{Y}^{f(m)},\mathbf{P}^{m})=-\sum_{k=1}^{N}\mathbbm{1}_{\{\mathrm{Y}\neq\varnothing\}}p_{k}^{m}(y_{k}^{f(m)}),\tag{16}$$ where $N$ is the length of the first cell in $\mathrm{Y}^{f(m)}$ and $f(m)$ p m k (y f(m) k) is the predictive probability of the k-th target token y f(m) kof the m-th table body row. We ignore the score from matching predictions with ⟨∅⟩, so as to ensure that each generated row can be aligned with a non-empty target as possible. For example, in Figure 3, our model generates the first cells of six table body rows, where the first one is ⟨∅⟩ token and the others are player names. 
Then we assign each generated table body row to a target one according to the above-mentioned bipartite matching. In this example, we can find an optimal matching, with a mapping function ˆf satisfying that ˆf(1) = 5, ˆf(2) = 4, ..., ˆf(6) = 1. Note that there are two "*Stephen Curry*" occurring in row 2 and row 4, but are aligned to different targets due to the one-to-one matching. In this way, we can guarantee that supervision signal for generating each table body row is unique, alleviating the generation of duplicate rows. Finally, through the above target row assignments, we can calculate Lb as follows: $$\mathcal{L}_{b}=-\sum_{m=1}^{M}\sum_{k=1}^{|\mathrm{Y}^{\hat{f}(m)}|}\log\hat{p}_{k}^{m}(y_{k}^{\hat{f}(m)}),\tag{17}$$ where pˆ m k (·) is the predictive distribution of the m-th generated table body row at the timestep k, and y fˆ(m) kis the k-th token in the assigned target row. Particularly, inspired by the recent studies (Carion et al., 2020; Ye et al., 2021; Xie et al., 2022), when y fˆ(m) k = ⟨∅⟩, we multiply its token-level loss with a predefined factor to down-weight its effect, so as to reduce the negative effect of excessive ⟨∅⟩ tokens. ## 5 Experiments 5.1 Setup Datasets Following the previous work (Wu et al., 2022), we conduct experiment on four commonlyused datasets for table-to-text generation: Rotowire (Wiseman et al., 2017), E2E (Novikova et al., 2017), WikiTableText (Bao et al., 2018), and WikiBio (Lebret et al., 2016). As the main dataset of our experiments, Rotowire has two types of tables named Team and Player. In Rotowire, each instance has multiple columns while the other three datasets have only two columns. We use the processed datasets from (Wu et al., 2022). The dataset statistics are listed in Appendix A. Implementation Details We initialize our model with the pre-trained BART-base (Lewis et al., 2020), which consists of 6 encoder layers and 6 decoder layers. The number of multi-head attention is 12, the dimension of embedding and hidden state is 768, and the dimension of feed-forward network is 3,072. We reuse the vocabulary from the pre-trained BART-base model, whose size is 51,200. We use the Adam (Kingma and Ba, 2015) optimization algorithm with a fixed maximum number of tokens as 4,096. For different datasets, we set different numbers of row embeddings according to the maximum row numbers in training sets. We train a separate model on each dataset and select the model with the lowest validation loss. Hyperparameter settings are shown in Appendix B. Baselines We compare our model with the following baselines mentioned in (Wu et al., 2022): - **Sent-level RE** This model uses an existing method of relation extraction (RE) (Zhong and Chen, 2021) to extract information based on predefined schemas. It takes the first column and data cells as entities and the types of table header cells as relations. 
- **Doc-level RE** It applies the same RE method, | Team Player | |---------------| Subset Model The first column F1 Table header F1 Data cell F1 Error Exact Chrf BERT Exact Chrf BERT Exact Chrf BERT rate Sent-level RE 85.28 87.12 93.65 85.54 87.99 87.53 77.17 79.10 87.48 0.00 Doc-level RE 84.90 86.73 93.44 85.46 88.09 87.99 75.66 77.89 87.82 0.00 Seq2Seq 94.71 94.93 97.35 **86.07** 89.18 88.90 82.97 84.43 90.62 0.49 Seq2Seq-c 94.97 95.20 97.51 86.02 89.24 89.05 83.36 84.76 90.80 0.00 Seq2Seq&set 96.80‡97.10‡**98.45**‡86.00 89.48 93.11‡84.33‡85.68‡**91.30**‡0.00 Sent-level RE 89.05 93.00 90.98 86.36 89.38 93.07 79.59 83.42 85.35 0.00 Doc-level RE 89.26 93.28 91.19 87.35 90.22 97.30 80.76 84.64 86.50 0.00 Seq2Seq 92.16 93.89 93.60 87.82 91.28 94.44 81.96 84.19 88.66 7.40 Seq2Seq-c 92.31 94.00 93.71 87.78 91.26 94.41 82.53 84.74 88.97 0.00 Seq2Seq&set 92.83†94.48†96.43‡88.02 91.60†95.08†83.51‡85.75‡**90.93**‡0.00 except that it predicts the relations between entities within multiple sentences. - NER It is a BERT-based (Devlin et al., 2019) entity extraction method that considers data cells in each table as entities and its first column cells as entity types. - **Seq2Seq** (Wu et al., 2022) It is a Transformer based seq2seq model that models the generation of a table as a sequence. - **Seq2Seq-c** (Wu et al., 2022) This model is a Seq2Seq variant, where the cell number of each table body row is limited to the same as that of table header. Evaluation We use the same evaluation script from (Wu et al., 2022). We adopt the F1 score as the evaluation measure, which is calculated in the following way: the precision and recall are first computed to get table-specific F1 scores, which are then averaged to obtain the final score. Here, precision is defined as the percentage of correctly predicted cells among the generated cells, and recall is defined as the percentage of correctly predicted cells among target cells. Particularly, the F1 score is calculated in three ways: *exact match* that matches two cells exactly, *chrf score* that calculates character-level n-gram similarity, and *BERTscore* that calculates the similarity between BERT embeddings of two cells. For Rotowire, we report the F1 scores of the first column, table header and data cells, which refer to row header F1, *column header F1* and non-header cells F1 in (Wu et al., 2022). For the other three datasets, there are only fixed two columns, so the F1 score of table header is not calculated. For data cells, we use not only the content but also the table header/the first column cells to ensure that the cell is on the right column/row. Note that these metrics are insensitive to the orders of rows and columns. Besides, we calculate the error rate to represent the percentage of erroneous format tables. ## 5.2 Main Results Table 2 reports the results on the Team and Player subsets of Rotowire. We observe that our model consistently outperforms all baselines in terms of three kinds of F1 scores. Particularly, in terms of data cell F1, which is the most difficult of the three kinds of F1 scores, ours achieves significant improvements. Besides, note that both Seq2Seq-c and our model enforce the number of cells in each table body row to be the same as that of table header, so their error rates are 0. Table 4 shows the results on E2E, WikiTableText and WikiBio. Likewise, our model outperforms almost all baselines. We also provide a case in Appendix C to visually show the effectiveness of our model. 
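The exact-match variant of the cell F1 described in the Evaluation paragraph can be sketched as follows. This is a simplified illustration, not the official script of Wu et al. (2022) (which also computes chrf- and BERTScore-based matches and averages table-specific F1 scores over the test set); each data cell is keyed by its first-column value and header value so that it only counts as correct when it appears in the right row and column.

```python
def exact_cell_f1(pred_cells, gold_cells):
    """Exact-match F1 over data cells for one table; each cell is a
    (first_column_value, header_value, content) triple."""
    pred, gold = set(pred_cells), set(gold_cells)
    if not pred or not gold:
        return 0.0
    correct = len(pred & gold)
    if correct == 0:
        return 0.0
    precision = correct / len(pred)
    recall = correct / len(gold)
    return 2 * precision * recall / (precision + recall)

gold = {("Ian Clark", "Points", "36"), ("Ian Clark", "Total rebounds", "5")}
pred = {("Ian Clark", "Points", "36"), ("Ian Clark", "Total rebounds", "4")}
print(exact_cell_f1(pred, gold))  # 0.5
```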
## 5.3 Inference Efficiency | Model | # Sentences per second (speedup) Team Player E2E | | | |-------------|----------------------------------------------------|--------------|--------------| | Seq2Seq | 1.24 (1.00×) | 0.32 (1.00×) | 1.66 (1.00×) | | Seq2Seq-c | 1.22 (0.98×) | 0.30 (0.94×) | 1.62 (0.98×) | | Seq2Seq&set | 1.84 (1.48×) | 1.09 (3.41×) | 5.96 (3.59×) | We compare the inference efficiency of different models. From Table 3, we observe that ours is significantly more efficient than baselines, due to its advantage in the parallel generation of table body rows. | Dataset | Model | The first column F1 | Data cell F1 | Error | | | | | |---------------|-----------|-----------------------|----------------|---------|--------|--------|-------|------| | Exact | Chrf | BERT | Exact | Chrf | BERT | rate | | | | NER | 91.23 | 92.40 | 95.34 | 90.80 | 90.97 | 92.20 | 0.00 | | | Seq2Seq | 99.62 | 99.69 | 99.88 | 97.87 | 97.99 | 98.56 | 0.00 | | | E2E | Seq2Seq-c | 99.63 | 99.69 | 99.88 | 97.88 | 98.00 | 98.57 | 0.00 | | Seq2Seq&set | 99.62 | 99.69 | 99.83 | 98.65‡ | 98.70‡ | 99.08‡ | 0.00 | | | NER | 59.72 | 70.98 | 94.36 | 52.23 | 59.62 | 73.40 | 0.00 | | | Seq2Seq | 78.15 | 84.00 | 95.60 | 59.26 | 69.12 | 80.69 | 0.41 | | | WikiTableText | Seq2Seq-c | 78.16 | 83.96 | 95.68 | 59.14 | 68.95 | 80.74 | 0.00 | | Seq2Seq&set | 78.67‡ | 84.21‡ | 95.88 | 59.94‡ | 69.59‡ | 81.67‡ | 0.00 | | | NER | 63.99 | 71.19 | 81.03 | 56.51 | 62.52 | 61.95 | 0.00 | | | Seq2Seq | 80.53 | 84.98 | 92.61 | 68.98 | 77.16 | 76.54 | 0.00 | | | WikiBio | Seq2Seq-c | 80.52 | 84.96 | 92.60 | 69.02 | 77.16 | 76.56 | 0.00 | | Seq2Seq&set | 81.03† | 85.44† | 93.02† | 69.51‡ | 77.53‡ | 77.13‡ | 0.00 | | ![7_image_0.png](7_image_0.png) To investigate the effect of speedup on different types of tables, we carry out experiments on the Rotowire Player dataset. We define row-to-column ratio as the row number divided by the column number and measure the speedup with different row-to-column ratios. The results depicted in Figure 4 demonstrate that our model exhibits a linear improvement as the row-to-column ratio increases. ## 5.4 Error Propagation Analysis As analyzed in Introduction, our model is able to better alleviate the error propagation issue than previous models. To verify this, we conduct experiments on the longest dataset Rotowire Player, and then compare the performance of our model and Seq2Seq-c. From Figure 5, we observe that ours always outperforms the baseline model. Particularly, with the number of table tokens increasing, our model exhibits more significant performance advantages over Seq2Seq-c. ![7_image_1.png](7_image_1.png) | Subset | Model | The first | Table | Data | |-----------------------------|-----------------|-------------|---------|--------| | column F1 header F1 cell F1 | | | | | | Seq2Seq&set | 96.80 | 86.00 | 84.33 | | | w/o row embed. | 66.91 | 85.73 | 57.25 | | | w/o col. embed. | 96.75 | 86.04 | 83.32 | | | w/o tgt. assign. | 92.87 | 85.87 | 79.43 | | | w/o header gen. | 95.34 | 85.23 | 59.02 | | | Team | Seq2Seq&set | 92.83 | 88.02 | 83.51 | | w/o row embed. | 31.13 | 87.83 | 26.20 | | | Player | w/o col. embed. | 90.06 | 87.47 | 80.18 | | w/o tgt. assign. | 67.71 | 87.99 | 60.22 | | | w/o header gen. | 92.45 | 86.83 | 58.39 | | ## 5.5 Ablation Study We conduct an ablation study on Rotowire to verify the effectiveness of different components of our model. The results are shown in Table 5, involving four variants: w/o Row Embeddings. We remove row embeddings in this variant. 
From lines 3 and 8, we observe that this variant is completely collapsed. This is reasonable that without row embeddings, the row-specific inputs of table body generator become exactly the same, resulting in the generator failing to generate distinct rows. w/o Column Embeddings. We remove column embeddings in this variant. From lines 4 and 9, we observe that the data cell F1 decreases a lot. Thus, we also confirm that column embeddings are indeed useful in enhancing the semantic consistency between cells in the same column. The other two scores changed very little, which we believe is due to the fact that table headers and the first columns have no need to refer to the other rows. w/o Target Assignments. In this variant, we discard the target assignments based on the first cells during training, which makes our model learn to generate table body rows in the original order of targets. As shown in lines 5 and 10, our model exhibits a significant performance drop. w/o Table Header Generator. In this variant, we simultaneously generate table header and table body rows in parallel. Consequently, the variant can not leverage the information of generated table header during the process of generating table body rows, and thus exhibits worse performance than the original model. This result proves that it is reasonable to distinguish the generation of table header and table body rows. ## 6 Conclusion In this paper, we propose a Seq2Seq&set model for text-to-table generation, which first outputs a table header, and then table body rows. Most importantly, unlike the previous study (Wu et al., 2022), we model the generation of table body rows as a set generation task, which is able to alleviate not only the wrong bias caused by the predefined order during training, but also the problem of error propagation during inference. Experimental results show that our model gains significant performance improvements over the existing SOTA. ## Limitations Our model is currently suitable for generating ordinary tables with attribute names and records, but it may struggle with more complex table formats that involve merged cells. To improve the flexibility of our model, we plan to investigate more versatile forms of table representation. Another limitation of our model is that our model training involves longer training time, compared with seq2seq baselines. This may be due to the inherent instability of target assignments. In the future, we will explore refining the model training by reducing the target assignment instability. The existing datasets for this task is relatively simple, and in the future we will conduct experiments on more complex datasets that require reasoning, such as WebNLG (Gardent et al., 2017). ## Ethics Statement This paper proposes a Seq2Seq&set model for textto-table generation. We take ethical considerations seriously and ensure that the research and methods used in this study are conducted in an ethical and responsible manner. The datasets used in this paper are publicly available and have been widely adopted by researchers for testing the performance of text-to-table generation. This study does not involve any data collection or release, and thus there exist no privacy issues. We also take steps to ensure that the findings and conclusions of this study are reported accurately and objectively. ## Acknowledgement The project was supported by National Natural Science Foundation of China (No. 62276219), Natural Science Foundation of Fujian Province of China (No. 
2020J06001), Youth Innovation Fund of Xiamen (No. 3502Z20206059). We also thank the reviewers for their insightful comments. ## References Junwei Bao, Duyu Tang, Nan Duan, Zhao Yan, Yuanhua Lv, Ming Zhou, and Tiejun Zhao. 2018. Table-totext: Describing table region with natural language. In *Proc. of AAAI*. Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. 2020. End-to-end object detection with transformers. In *Proc. of ECCV*. Lingzhen Chen and Alessandro Moschitti. 2018. Learning to progressively recognize new named entities with sequence to sequence models. In *Proc. of COLING*. Wenhu Chen, Jianshu Chen, Yu Su, Zhiyu Chen, and William Yang Wang. 2020. Logical natural language generation from open-domain tables. In *Proc. of* ACL. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proc. of NAACL-HLT*. Bradley Efron and Robert J Tibshirani. 1994. *An introduction to the bootstrap*. CRC press. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The WebNLG challenge: Generating text from RDF data. In *Proc.* of ICNLG. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *Proc. of* ICCV. Harold W Kuhn. 1955. The hungarian method for the assignment problem. *Naval research logistics quarterly*, 2(1-2):83–97. Rémi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In *Proc. of* EMNLP. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proc. of ACL*. Sha Li, Heng Ji, and Jiawei Han. 2021. Document-level event argument extraction by conditional generation. In *Proc. of NAACL-HLT*. Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022a. Fantastically ordered prompts and where to find them: Overcoming fewshot prompt order sensitivity. In *Proc. of ACL*. Yaojie Lu, Hongyu Lin, Jin Xu, Xianpei Han, Jialong Tang, Annan Li, Le Sun, Meng Liao, and Shaoyi Chen. 2021. Text2Event: Controllable sequence-tostructure generation for end-to-end event extraction. In *Proc. of ACL*. Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, and Hua Wu. 2022b. Unified structure generation for universal information extraction. In *Proc. of ACL*. Tapas Nayak and Hwee Tou Ng. 2020. Effective modeling of encoder-decoder architecture for joint entity and relation extraction. In *Proc. of AAAI*. Jekaterina Novikova, Ondˇrej Dušek, and Verena Rieser. 2017. The E2E dataset: New challenges for end-toend generation. In *Proc. of SIGDIAL*. Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, RISHITA ANUBHAI, Cicero Nogueira dos Santos, Bing Xiang, and Stefano Soatto. 2021. Structured prediction as translation between augmented natural languages. In *Proc. of* ICLR. Ankur Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan Das. 2020. ToTTo: A controlled table-to-text generation dataset. In *Proc. of EMNLP*. Sunita Sarawagi et al. 2008. Information extraction. Foundations and Trends® *in Databases*, 1(3):261– 377. Jinsong Su, Xiangwen Zhang, Qian Lin, Yue Qin, Junfeng Yao, and Yang Liu. 2019. 
Exploiting reverse target-side contexts for neural machine translation via asynchronous bidirectional decoding. Artificial Intelligence, 277:103168. Zeqi Tan, Yongliang Shen, Shuai Zhang, Weiming Lu, and Yueting Zhuang. 2021. A sequence-to-set network for nested named entity recognition. In Proc. of IJCAI. Craig Thomson, Ehud Reiter, and Somayajulu Sripada. 2020. SportSett:basketball - a robust and maintainable data-set for natural language generation. In Proc. of the Workshop on IntelLanG. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Proc. of NeurIPS*. Changhan Wang, Kyunghyun Cho, and Jiatao Gu. 2020. Neural machine translation with byte-level subwords. In *Proc. of AAAI*. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In *Proc. of EMNLP*. Xueqing Wu, Jiacheng Zhang, and Hang Li. 2022. Textto-table: A new way of information extraction. In Proc. of ACL. Binbin Xie, Xiangpeng Wei, Baosong Yang, Huan Lin, Jun Xie, Xiaoli Wang, Min Zhang, and Jinsong Su. 2022. WR-One2Set: Towards well-calibrated keyphrase generation. In *Proc. of EMNLP*. Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various NER subtasks. In *Proc. of* ACL. Jiacheng Ye, Tao Gui, Yichao Luo, Yige Xu, and Qi Zhang. 2021. One2Set: Generating diverse keyphrases as a set. In *Proc. of ACL*. Xiangrong Zeng, Daojian Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2018. Extracting relational facts by an end-to-end neural model with copy mechanism. In Proc. of ACL. Xiangwen Zhang, Jinsong Su, Yue Qin, Yang Liu, Rongrong Ji, and Hongji Wang. 2018. Asynchronous bidirectional decoding for neural machine translation. In *Proc. of AAAI*. Zexuan Zhong and Danqi Chen. 2021. A frustratingly easy approach for entity and relation extraction. In Proc. of NAACL-HLT. ## A Dataset Statistics Table 6 shows the statistics of the datasets we used. We list the numbers of instances in training, validation, and test sets and the average number of BPE tokens per instance. We also give the average numbers of rows and columns per instance. ## B Hyper-Parameter Settings Table 7 shows the hyper-parameter settings in our experiments. We set the hyper-parameters by referring to the existing work and choosing values that result in the best performance (measured in data cell F1) on the validation sets. ## C Case Study Figure 6 shows a case comparison between Seq2Seq-c and our Seq2Seq&set. Although the first three table body rows generated by Seq2Seq-c are almost correct, the others are duplicated, which also frequently occurs in other text generation tasks. In contrast, our model can handle this case correctly because ours generates table body rows in parallel and thus is not affected by other rows. 
| Ground-truth table Assists Field goals attempted | Field goals made | Points | Total rebounds | | |------------------------------------------------------------------------------------------|--------------------|----------------|------------------|----| | Matt Barnes | 14 | 9 | | | | Zaza Pachulia | 11 | 12 | | | | Ian Clark | 21 | 15 | 36 | 5 | | Kyle Anderson | 6 | 13 | 8 | | | Patty Mills | 4 | 21 | 2 | | | Pau Gasol | 3 | 10 | 7 | | | Davis Bertans | 13 | | | | | The table generated by Seq2Seq-c Assists Field goals attempted Field goals made | Points | Total rebounds | | | | Matt Barnes | 5 | 14 | 9 | | | Zaza Pachulia | 11 | 12 | | | | Ian Clark | 21 | 15 | 36 | 5 | | Matt Barnes | 5 | 14 | 9 | | | Zaza Pachulia | 11 | 12 | | | | Ian Clark | 21 | 15 | 36 | 5 | | The table generated by Seq2Seq&set Assists Field goals attempted Field goals made Points | Total rebounds | | | | | Davis Bertans | 13 | | | | | Matt Barnes | 5 | 14 | 9 | | | Ian Clark | 21 | 15 | 36 | 5 | | Zaza Pachulia | 11 | 12 | | | | Kyle Anderson | 6 | 13 | 8 | | | Patty Mills | 4 | 21 | 2 | | | Pau Gasol | 3 | 10 | 7 | | Figure 6: A case study of Seq2Seq-c and our Seq2Seq&set model. Incorretly-generated texts are marked in red. Dataset Train Valid Test Avg. # of tokens Avg. # of rows Avg. # of columns Rotowire-Team 3.4k 727 728 351.05 2.71 4.84 Rotowire-Player 3.4k 727 728 351.05 7.26 8.75 E2E 42.1k 4.7k 4.7k 24.90 4.58 2.00 WikiTableText 10.0k 1.3k 2.0k 19.59 4.26 2.00 WikiBio 582.7k 72.8k 72.7k 122.30 4.20 2.00 Table 6: Statistics of Rotowire, E2E, WikiTableText and WikiBio datasets, including the number of instances in training, validation and test sets, the average number of BPE tokens per instance, and the average number of rows and columns per instance. Table 7: The hyper-parameter settings in our experiments. | Dataset | M | λ | ⟨∅⟩ scale | batch size | lr | warmup ratio | |-----------|-----|-----|-------------|--------------|------|----------------| ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 7 ✓ A2. Did you discuss any potential risks of your work? 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 5 ✓ B1. Did you cite the creators of artifacts you used? 5 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 5 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 5 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 8 ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 5 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? 
Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 10 ## C ✓ **Did You Run Computational Experiments?** 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 5 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
imperial-kochmar-2023-automatic
Automatic Readability Assessment for Closely Related Languages
https://aclanthology.org/2023.findings-acl.331
In recent years, the main focus of research on automatic readability assessment (ARA) has shifted towards using expensive deep learning-based methods with the primary goal of increasing models' accuracy. This, however, is rarely applicable for low-resource languages where traditional handcrafted features are still widely used due to the lack of existing NLP tools to extract deeper linguistic representations. In this work, we take a step back from the technical component and focus on how linguistic aspects such as mutual intelligibility or degree of language relatedness can improve ARA in a low-resource setting. We collect short stories written in three languages in the Philippines - Tagalog, Bikol, and Cebuano - to train readability assessment models and explore the interaction of data and features in various cross-lingual setups. Our results show that the inclusion of CrossNGO, a novel specialized feature exploiting n-gram overlap applied to languages with high mutual intelligibility, significantly improves the performance of ARA models compared to the use of off-the-shelf large multilingual language models alone. Consequently, when both linguistic representations are combined, we achieve state-of-the-art results for Tagalog and Cebuano, and baseline scores for ARA in Bikol.
# Automatic Readability Assessment For Closely Related Languages Joseph Marvin ImperialΩ,Λ **Ekaterina Kochmar**Υ ΩNational University, Philippines ΛUniversity of Bath, UK ΥMBZUAI, UAE [email protected] [email protected] ## Abstract In recent years, the main focus of research on automatic readability assessment (ARA) has shifted towards using expensive deep learningbased methods with the primary goal of increasing models' accuracy. This, however, is rarely applicable for low-resource languages where traditional handcrafted features are still widely used due to the lack of existing NLP tools to extract deeper linguistic representations. In this work, we take a step back from the technical component and focus on how linguistic aspects such as mutual intelligibility or *degree* of language relatedness can improve ARA in a low-resource setting. We collect short stories written in three languages in the Philippines - Tagalog, Bikol, and Cebuano - to train readability assessment models and explore the interaction of data and features in various crosslingual setups. Our results show that the inclusion of CROSSNGO, a novel specialized feature exploiting n-gram overlap applied to languages with high mutual intelligibility, significantly improves the performance of ARA models compared to the use of off-the-shelf large multilingual language models alone. Consequently, when both linguistic representations are combined, we achieve state-of-the-art results for Tagalog and Cebuano, and baseline scores for ARA in Bikol. We release our data and code at github.com/imperialite/ara-close-lang ## 1 Introduction Automatic readability assessment (ARA) is the task that aims to approximate the difficulty level of a piece of literary material using computer-aided tools. The need for such application arises from challenges related to the misalignment of difficulty labels when humans with various domain expertise provide annotations, as well as to the difficulty of manual extraction of complex text-based features (Deutsch et al., 2020). At the same time, readability assessment tools often use different definitions of complexity levels based on (a) age level (Vajjala and Meurers, 2012; Xia et al., 2016), (b) grade level (Imperial and Ong, 2020, 2021a), or on established frameworks such as (c) the Common European Framework of Reference for Languages (CEFR)1(François and Fairon, 2012; Pilán et al., 2016; Xia et al., 2016; Reynolds, 2016; Vajjala and Rama, 2018). In recent years, deep learning methods and large language models (LLMs) have gained popularity in the research community. Often studies using these methodologies focus primarily on improving the performance across various metrics. This is particularly manifest in ARA research in languages with a high number of accessible and publicly-available readability corpora such as English (Heilman et al., 2008; Flor et al., 2013; Vajjala and Luciˇ c´, 2018) and German (Hancke et al., 2012; Weiss et al., 2021; Weiss and Meurers, 2022) to name a few. At the same time, existing studies focusing on low-resource languages such as Cebuano (Imperial et al., 2022) and Bengala (Islam et al., 2012; Islam and Rahman, 2014) are still at the stage of primarily using traditional features such as word and sentence lengths to train predictive models. 
We identify two problems that are related to the use of complex neural-based approaches: the success of such models depends on (a) whether there is enough available data to train a model using a customized deep neural network, and (b) in the case of LLMs, whether there exists an available off-the-shelf pre-trained model for a low-resource language of interest. Imperial et al. (2022) have recently shown that merely integrating extracted embeddings from a multilingual BERT model as features for Cebuano, a low-resource Philippine language, *does not outperform* models trained with orthographic features such as syllable patterns customized for the language. These challenges provide motivation for researchers to further explore methods that do not rely on the availability of large amounts of data or complex pre-trained models and investigate simpler, more interpretable models instead of black box architectures. In this paper, we take a step back and focus on the data available for low-resource Philippine languages and the features extracted from them rather than on the algorithmic aspects. Specifically, we explore a scenario where small readability corpora are available for languages that are *closely related* or belong to one major language family tree. To the best of our knowledge, incorporating the degree of language closeness or relatedness has not been explored before in any cross-lingual ARA setup. In this study, we make the following contributions: 1. We conduct an extensive pioneer study on readability assessment in a cross-lingual setting using three closely related Philippine languages: Tagalog, Bikolano, and Cebuano. 2. We extract various feature sets ranging from linguistically motivated to neural embeddings, and empirically evaluate how they affect the performance of readability models in a singular, pairwise, and full cross-lingual setup. 3. We introduce cross-lingual Character N-gram Overlap (CROSSNGO), a novel feature applicable to readability assessment in closely related languages. 4. We also introduce and release a new readability corpus for Bikolano, one of the major languages in the Philippines. 5. Finally, we set a baseline for ARA in Bikol and report state-of-the-art results for Tagalog and Cebuano. ## 2 Background 2.1 The Philippine Linguistic Profile The Philippines is a linguistically diverse country in Southeast Asia (SEA) with over 180 languages spoken by over 100 million people. Languages in the Philippines can be best described as morphologically rich due to their free-word order structures and high number of possible inflections, full and partial duplications, and compound words (Go and Nocon, 2017). In addition, following lexicostatistical studies, languages are divided into two subgroups, *northern* and *central*, wherein the major ![1_image_0.png](1_image_0.png) languages Ilokano, Pangasinan, and Kapampangan belong to the northern subgroup, and Tagalog, Bikol, Hiligaynon, and Cebuano are allocated to the central subgroup (Walton, 1979; Constantino, 1998). Figure 1 illustrates the central subgroup of the Philippine language family tree. In this study, our readability experiments focus on three major Philippine languages, Tagalog, *Cebuano*, and *Bikol*, which we refer to further in the paper with their corresponding ISO-639-2 language codes as TGL, CEB, and BCL, respectively. 
## 2.2 Mutual Intelligibility

Preliminary linguistic profiling studies of the main Philippine languages such as by McFarland (2004) show that Tagalog, Bikol, and Cebuano are more closely related to one another than any languages in the northern family tree. A language's closeness or its *degree of relatedness* to another language from the same family (sub)tree is commonly referred to as *mutual intelligibility* (Bloomfield, 1926). Such similarities can be seen across multiple aspects, including, for example (a) syllable patterns where all three languages have similar three case-marking particles - ang (En: the), ng (En: of), and sa (En: at) for Bikol and Tagalog, and ug instead of sa for Cebuano; and (b) shared words, e.g. *mata* (En: eye) and *tubig* (En: *water*). For languages belonging to one greater subgroup in the case of Central Philippine for Tagalog, Bikol, and Cebuano, showing stronger quantitative evidence of mutual intelligibility may provide additional proof that these languages are indeed, at some level, closely related to each other. Thus, to contribute towards further understanding of mutual intelligibility in the Philippines language space, we apply two linguistic similarity-based measures using character n-gram overlap and genetic distance which we discuss in the sections below.

|     | TGL   | BCL   | CEB   | ENG   |
|-----|-------|-------|-------|-------|
| TGL | 1.000 | 0.810 | 0.812 | 0.270 |
| BCL | 0.810 | 1.000 | 0.789 | 0.263 |
| CEB | 0.812 | 0.789 | 1.000 | 0.213 |
| ENG | 0.270 | 0.263 | 0.213 | 1.000 |

(a) Bigram Character Overlap

|     | TGL   | BCL   | CEB   | ENG   |
|-----|-------|-------|-------|-------|
| TGL | 1.000 | 0.588 | 0.628 | 0.121 |
| BCL | 0.588 | 1.000 | 0.533 | 0.144 |
| CEB | 0.628 | 0.533 | 1.000 | 0.090 |
| ENG | 0.121 | 0.144 | 0.090 | 1.000 |

(b) Trigram Character Overlap

Character N-Gram Overlap. For our first measure, we use the overlap in character bigrams and trigrams for every pair from the selected set of languages. To do this, we simply extract and rank the top occurring character bigrams and trigrams for a given language and calculate the Rank-Biased Overlap (RBO)2 (Webber et al., 2010). RBO provides a measure of similarity between two lists while preserving the ranking. We also add English (ENG) as an unrelated control language not belonging to the Philippine family tree for comparison. We use the CommonCore readability dataset (Flor et al., 2013) for English as it also has three readability levels, and the level distribution is the most similar to the dataset of the three Philippine languages. Further information on the datasets in Tagalog, Bikol, and Cebuano can be found in Section 3. For all languages, we extract the top 25% of the most frequently occurring bigrams and trigrams for analysis. The top 40 most frequent bigrams and trigrams can be found in the Appendix.

2https://github.com/changyaochen/rbo

Table 1 presents character overlap for bigrams and trigrams in a pairwise manner. These results show that all three Philippine languages have character overlap greater than 75% for bigrams among themselves while overlap with English is below 27%. This pattern is observed again in trigrams with the overlap levels of 53.3% to 62.8% between Tagalog, Bikol, and Cebuano and those below 15% for English. These ranges of mutual intelligibility values for bigram and trigram overlap serve as an estimate of the degree of relatedness between the three Philippine languages, with the values for English serving as a baseline for an unrelated language.
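A minimal sketch of how such overlap scores can be computed is shown below, assuming lists of plain-text stories per language. The helper names, toy strings, and the truncated form of RBO (a weighted average of overlap proportions at increasing depths, rather than the extrapolated variant of Webber et al. (2010) or the referenced package) are illustrative choices, not the exact implementation behind Table 1.

```python
from collections import Counter

def top_char_ngrams(texts, n=2, top_frac=0.25):
    """Rank character n-grams by frequency and keep the top fraction."""
    counts = Counter()
    for text in texts:
        text = text.lower()
        counts.update(text[i:i + n] for i in range(len(text) - n + 1))
    ranked = [gram for gram, _ in counts.most_common()]
    return ranked[: max(1, int(len(ranked) * top_frac))]

def rbo(list_a, list_b, p=0.9):
    """Truncated Rank-Biased Overlap: (1 - p) * sum_d p^(d-1) * A_d,
    where A_d is the proportion of shared items within the top d ranks."""
    depth = min(len(list_a), len(list_b))
    seen_a, seen_b = set(), set()
    score, weight = 0.0, 1.0 - p
    for d in range(1, depth + 1):
        seen_a.add(list_a[d - 1])
        seen_b.add(list_b[d - 1])
        score += weight * (len(seen_a & seen_b) / d)
        weight *= p
    return score

# Illustrative usage with placeholder story strings per language:
tgl_texts = ["ang mga bata ay masaya", "ang aso at ang pusa"]
bcl_texts = ["placeholder bikol story one", "placeholder bikol story two"]
print(rbo(top_char_ngrams(tgl_texts, n=2), top_char_ngrams(bcl_texts, n=2)))
```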
Genetic Distance. As a secondary measure of mutual intelligibility, we calculate the genetic distance score (Beau and Crabbé, 2022) for each pair of languages studied in this work. Similar to the character n-gram overlap analysis, we add English for comparison purposes. Genetic distance (Beaufils and Tomin, 2020) is an automatic measure for quantifying the distance between two languages without the need for human judgments. This metric requires a list of words and their equivalent translations for any two languages of interest and calculates the number of exact consonant matches using the following formula:

$$\text{GeneticDistance}=100-\left(\frac{match(l_1,l_2)}{n}\right)\qquad(1)$$

where l1 and l2 are a pair of languages, n is the total number of words for analysis (usually 100), and match(·) is a function for extracting the consonant patterns for each word from the list as described in Beaufils and Tomin (2020). The metric is measured as a distance; thus, the values closer to 100 denote higher dissimilarity or non-relatedness. Table 2 shows the calculated genetic distance scores for each pair of languages including English. The mapping provided in the table is the prescribed guide from Beaufils and Tomin (2020).

|     | TGL    | BCL    | CEB    | ENG    |
|-----|--------|--------|--------|--------|
| TGL | 0.000  | 37.083 | 24.846 | 95.690 |
| BCL | 37.083 | 0.000  | 31.933 | 70.735 |
| CEB | 24.846 | 31.933 | 0.000  | 90.970 |
| ENG | 95.690 | 70.735 | 90.970 | 0.000  |

| Range              | Meaning                         |
|--------------------|---------------------------------|
| Between 1 and 30   | Highly related languages        |
| Between 30 and 50  | Related languages               |
| Between 50 and 70  | Remotely related languages      |
| Between 70 and 78  | Very remotely related languages |
| Between 78 and 100 | No recognizable relationship    |

Judging by these results, the Philippine languages have genetic distance scores within the **related** and **highly related languages** range with the Tagalog–Cebuano pair showing the closest language distance of 24.846. Meanwhile, genetic distance scores between all considered Philippine languages and English fall within the **very remotely related** to **no recognizable relationship** categories, with the Tagalog–English pair showing the highest distance from each other. Similar to the character n-gram overlap, these results strengthen our initial observation and provide empirical evidence for mutual intelligibility between Tagalog, Bikol, and Cebuano languages which, beyond this study, may also be used in future linguistic research.

## 3 Readability Corpora In Philippine Languages

We have compiled open source readability datasets for Tagalog, Cebuano, and Bikol from online library websites and repositories. Each data instance in this study is a fictional short story. Table 3 shows the statistical breakdown and additional information on the levels in each readability dataset across different languages.

Tagalog and Cebuano. Datasets in these languages have already been used in previous research, including Imperial et al. (2019); Imperial and Ong (2020); Imperial (2021); Imperial and Ong (2021a); Imperial et al. (2022). We use the same datasets as in previous research and incorporate them into this study for comparison. For Tagalog, we have assembled 265 instances of children's fictional stories from Adarna House3 and the Department of Education (DepED)4. For Cebuano, we use the dataset collected by Imperial et al. (2022) from Let's Read Asia5 and Bloom Library6, which were funded by the Summer Institute of Linguistics (SIL International) and BookLabs to make literary materials in multiple languages available to the public. Bikol.
There are no pre-compiled datasets available for readability assessment in Bikol yet. For this, we collected all available Bikol short stories from Let's Read Asia and Bloom Library totaling 150 instances split into 68, 27, and 55 for levels 1 to 3, respectively. All collected data for this study follows the standard leveling scheme for early-grade learners or the first three grades from the K-12 Basic Curriculum in the Philippines.7 Each instance has been annotated by experts with a level from 1 to 3 as seen in Table 3. We use these annotations as target labels in our experiments. Finally, all datasets used in this study can be manually downloaded from their respective websites (see footnotes for links) under the Creative Commons BY 4.0 license.

| Source                            | Language  | Level | Doc Count | Sent Count | Vocab |
|-----------------------------------|-----------|-------|-----------|------------|-------|
| Adarna and DepED                  | TGL (265) | L1    | 72        | 2774       | 4027  |
|                                   |           | L2    | 96        | 4520       | 7285  |
|                                   |           | L3    | 97        | 10957      | 12130 |
| Let's Read Asia and Bloom Library | BCL (150) | L1    | 68        | 1578       | 2674  |
|                                   |           | L2    | 27        | 1144       | 2009  |
|                                   |           | L3    | 55        | 3347       | 5509  |
| Let's Read Asia and Bloom Library | CEB (349) | L1    | 167       | 1173       | 2184  |
|                                   |           | L2    | 100       | 2803       | 4003  |
|                                   |           | L3    | 82        | 3794       | 6115  |

## 4 Experimental Setup

4.1 Ml Setup

In this study, our primary focus is on the depth of analysis of the traditional and neural features used in a cross-lingual setting applied to closely related languages. Thus, we use a vanilla Random Forest model which has been previously shown to be the best-performing monolingual-trained model for ARA in Tagalog and Cebuano (Imperial and Ong, 2021a; Imperial et al., 2022). We leave the technical breadth of exploring other supervised algorithms to future work. We use a stratified k-fold approach with k=5 to have well-represented samples per class for a small-dataset scenario used in this study. We report accuracy as the main evaluation metric across all experiments for the ease of performance comparison with previous work (see Section 5). We use WEKA 3.8 (Witten et al., 1999)8 for all our modeling and evaluation and set hyperparameters of the Random Forest algorithm to their default values as listed in the Appendix.

## 4.2 Linguistic Features

We extract and consider a wide variety of features inspired by: (a) handcrafted predictors from previous work, (b) representations from a multilingual Transformer-based model (mBERT), and (c) CROSSNGO, a novel feature applicable to readability assessment in closely related languages. We discuss each feature group below.

## Traditional Handcrafted Features (Trad).

We integrate available traditional surface-based and syllable pattern-based features in this study as predictors of text complexity. These features have been widely used in previous research on ARA in Tagalog and Cebuano (Imperial and Ong, 2020; Imperial et al., 2022). For Bikol, this is the first-ever study to develop a readability assessment model. In the case of low resource languages similar to those used in this study, these predictors are still the go-to features in ARA and have been empirically proven effective for Tagalog and Cebuano (Imperial and Ong, 2021b). We have extracted a total of 18 traditional features for each language, including: 1. The total number of words, phrases, and sentences (3). 2. Average word length, sentence length, and the number of syllables per word (3). 3. The total number of polysyllable words of more than 5 syllables (1). 4.
Density of consonant clusters or frequency of consonants without intervening vowels in a word (e.g. Tagalog: *sastre*, En: *dressmaker*) (1). 5. Densities of syllable patterns using the following templates {v, cv, vc, cvc, vcc, ccv, cvcc, ccvc, ccvcc, ccvccc}, where v and c are vowels and consonants respectively (10). Multilingual Neural Embeddings (mBERT). In addition to the surface-based features, we explore contextual representations from a multilingual Transformer-based large language model via mBERT (Devlin et al., 2019). Previous research on probing BERT has shown convincing evidence that various types of linguistic information (e.g. semantic and syntactic knowledge) are distributed within its twelve layers (Tenney et al., 2019; Rogers et al., 2020). Applying this to ARA, Imperial (2021) showed that BERT embeddings could act as a *substitute* feature set for lower-resource languages such as Filipino, for which NLP tools like POS taggers are lacking. For this study, we specifically chose mBERT as this particular model has been trained using Wikipedia data in 104 different languages including Tagalog and Cebuano. Bikol is not included in any available off-the-shelf Transformer-based language models due to extremely limited online resources not large enough for training. Nonetheless, we still used the representations provided by mBERT noting its high intelligibility with Tagalog and Cebuano. Feature-wise, we use the meanpooled representations of the entire twelve layers of mBERT via the sentence-transformers library (Reimers and Gurevych, 2019). Each instance in our readability data has an mBERT embedding representation of 768 dimensions. Cross-lingual Character N-Gram Overlap (CROSS**NGO).** N-gram overlap has been used previously in various NLP tasks applied to Philippine language data such as language identification (Oco et al., 2013a; Cruz et al., 2016), spell checking and correction (Cheng et al., 2007; Octaviano and Borra, 2017; Go et al., 2017), and clustering (Oco et al., 2013b). Drawing inspiration from this fact and from the quantitative evidence of mutual intelligibility between Philippine languages presented in Section 2, we posit that a new feature designed specifically for closely related language data might improve the performance of the readability assessment models. Thus, we introduce CROSSNGO, which quantifies linguistic similarity using character overlap from a curated list of high-frequency n-grams within languages of high mutual intelligibility. We propose the following formula for calculating this metric: $$\mathrm{CrossNGO}_{L,n}={\frac{m(L)\bigcap m(d)}{\mathrm{count}(m(d))}}\qquad(2)$$ where n ∈ {2, 3} denotes bigrams and trigrams, and m(·) is a function that extracts unique n-grams from a document instance d and compares them to a list of top n-grams from a specific language L. For each instance in a dataset, a vector containing three new features will be added representing the overlap between the text and the top n-grams from each of the three languages. We apply this calculation to both bigrams and trigrams using the n-gram lists for Tagalog, Bikol, and Cebuano obtained from the preliminary experiments, which results in a total of 6 new features. While we presented two quantitative methods of mutual intelligibility in Section 2, only CROSSNGO is applied as a metric and a feature for this study. 
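A rough sketch of how Eq. (2) could be realized in code is given below, assuming the per-language top-n-gram lists from Section 2 are already available (the short lists used here are taken from the Appendix tables; the function names, data structures, and normalization by the document's unique n-gram count follow the formula but are illustrative, not the authors' released implementation).

```python
def char_ngrams(text, n):
    """Unique character n-grams of a document, i.e. m(d) in Eq. (2)."""
    text = text.lower()
    return {text[i:i + n] for i in range(max(0, len(text) - n + 1))}

def crossngo_features(document, top_ngrams, orders=(2, 3)):
    """CrossNGO_{L,n} = |m(L) ∩ m(d)| / |m(d)| for each language L and order n."""
    features = {}
    for n in orders:
        doc_grams = char_ngrams(document, n)
        for lang, per_order in top_ngrams.items():
            overlap = len(set(per_order[n]) & doc_grams)
            features[f"crossngo_{lang}_{n}"] = overlap / max(1, len(doc_grams))
    return features

# Illustrative usage with the most frequent n-grams reported in the Appendix:
top_ngrams = {
    "tgl": {2: ["ng", "an", "na"], 3: ["ang", "ala", "ing"]},
    "bcl": {2: ["an", "na", "ng"], 3: ["ang", "nag", "kan"]},
    "ceb": {2: ["an", "ng", "sa"], 3: ["ang", "nga", "iya"]},
}
print(crossngo_features("ang batang nagbabasa", top_ngrams))  # 6 features per document
```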
Staying faithful to the work of Beaufils and Tomin (2020), we did not use Genetic Distance to generate another set of features as it was originally developed as a language-to-language metric. Thus, we use it only as additional secondary evidence of language similarity. At the same time, we note that the proposed CROSSNGO bears certain conceptual similarities to Genetic Distance as it measures the frequency of n-gram overlap with other languages. We perform an ablation study and demonstrate the contribution of individual feature sets in Section 5. ## 5 Results And Discussion Table 4 shows the accuracy values obtained when training Random Forest models with various combinations of feature groups for each language of interest. The experiments were divided into three setups: (a) *singular cross-lingual* (l1→l2), (b) *pairwise cross-lingual* ([l1 + l2]→l3), and (c) *full crosslingual* ([l1 + l2 + l3]→l1), each corresponding to a separate subsection of Table 4. We use the term cross-lingual in this context when a model is trained with a readability corpus from a chosen language ln or a combination of languages and evaluated with a test set from another language lm as is demonstrated in Table 4. Similar to our preliminary experiments (Section 2), we include English using the CommonCore dataset as counter-evidence for comparison with closely related languages. ## 5.1 Low-Resource Languages Benefit From Specialized Cross-Lingual Features For the singular cross-lingual experiments, the effectiveness of exploiting the bigram and trigram overlap via CROSSNGO is demonstrated by high scores for Bikol and Cebuano (75.862 and 78.270) and comparable performance for Tagalog (50.100). Moreover, only for this setup, there is an observed trend where traditional features combined with CROSSNGO outperform mBERT embeddings or the combination of all features for the respective language pair l1→l2. For Tagalog, this results in 50.100 vs. 26.921 and 23.077; for Bikol - 75.862 vs. 68.965 and 69.000; for Cebuano - 78.270 vs. 71.015 and 73.913. In terms of cross-linguality, in the case of Tagalog, using a model trained with Bikol data proves to be more effective than training with the original Tagalog data with approximately 5.8-point difference in accuracy. However, we still recommend the Tagalog model using all features with 50.000 accuracy since the 0.1 difference is not a significant improvement. Consequently, this trend is not observed in the Bikol and Cebuano experiments where the best-performing models of readability assessment are trained on the data from the same language l1→l1. To further confirm if the addition of the CROSSNGO feature statistically improves models' performance as compared to the representations from mBERT for low-resource languages, we aggregate the scores from the TRAD+CROSSNGO group and compare them with the scores obtained when we use mBERT embeddings only, conducting a t-test. We did not include the scores using the combination of all types of features as it would confound the significance test. We achieve statistical significance at α = 0.01 level (p = 0.006) which shows that using traditional handcrafted features extended with CROSSNGO significantly improves ARA models for low-resource languages, *provided* the availability of data in a closely related language in the case of non-availability of multilingual LLMs (e.g., lack of mBERT model in Bikol). 
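The significance check described above can be mirrored with a few lines of SciPy; since the exact aggregation and test variant are not fully specified in the text, the independent two-sample t-test and the three scores per group below (the diagonal l1→l1 entries of Table 4) are only an illustrative subset, not a reproduction of the reported p = 0.006.

```python
from scipy import stats

# Accuracy of the singular cross-lingual models (l1 -> l1) from Table 4:
trad_crossngo = [44.231, 75.862, 78.270]   # TRAD + CrossNGO on TGL, BCL, CEB
mbert_only = [46.100, 68.965, 71.015]      # mBERT embeddings on TGL, BCL, CEB

t_stat, p_value = stats.ttest_ind(trad_crossngo, mbert_only)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}, significant at alpha=0.01: {p_value < 0.01}")
```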
## 5.2 Inclusion Of A Closely Related Language In Data Produces More Confident Predictions

For pairwise cross-lingual experiments, we investigate the effect of adding a closely related language on a model's performance using confusion matrices.

| Model   | TGL    |                 |              |        | BCL    |                 |              |        | CEB    |                 |              |        |
|---------|--------|-----------------|--------------|--------|--------|-----------------|--------------|--------|--------|-----------------|--------------|--------|
|         | TRAD   | TRAD + CrossNGO | mBERT Embdng | ALL    | TRAD   | TRAD + CrossNGO | mBERT Embdng | ALL    | TRAD   | TRAD + CrossNGO | mBERT Embdng | ALL    |
| TGL     | 43.153 | 44.231          | 46.100       | 50.000 | 55.172 | 41.379          | 20.689       | 24.137 | 53.623 | 57.971          | 47.826       | 50.725 |
| BCL     | 50.000 | 50.100          | 26.921       | 23.077 | 74.620 | 75.862          | 68.965       | 69.000 | 63.768 | 62.320          | 60.869       | 66.667 |
| CEB     | 32.692 | 38.462          | 34.615       | 42.308 | 51.720 | 65.517          | 48.276       | 44.823 | 74.058 | 78.270          | 71.015       | 73.913 |
| ENG*    | 26.923 | 44.230          | 28.846       | 26.923 | 48.275 | 37.681          | 48.250       | 48.275 | 46.375 | 62.018          | 43.478       | 43.376 |
| TGL+BCL | 51.101 | 51.923          | 40.384       | 57.692 | 72.441 | 69.965          | 69.000       | 68.966 | 56.521 | 60.869          | 62.318       | 69.565 |
| BCL+CEB | 48.077 | 50.000          | 42.307       | 48.076 | 68.956 | 72.414          | 75.611       | 75.862 | 74.400 | 75.362          | 75.362       | 79.710 |
| CEB+TGL | 44.230 | 36.538          | 48.076       | 48.100 | 52.720 | 55.172          | 41.379       | 34.483 | 77.711 | 76.811          | 73.913       | 74.464 |
| ALL     | 50.000 | 52.910          | 46.153       | 32.692 | 72.413 | 79.113          | 65.517       | 79.328 | 77.710 | 78.000          | 78.261       | 75.630 |

As the middle section of Table 4 demonstrates, there are three possible pairwise combinations of Tagalog, Bikol, and Cebuano tested on each individual language. As there can be numerous ways to analyze the table, we highlight the results of the cross-lingual models with the top-performing pair and their utilized feature groups and compare them to their equivalent models in the singular cross-lingual experiment. Figure 2 illustrates this method of comparison for each language. In the case of the Tagalog–Tagalog pair, most misclassifications occur between grades 1 and 2 in both training and test data using all features. This, in turn, is alleviated by incorporating the Bikol dataset in the training data, which reduces the level of confusion by approximately 7%. The inclusion of Bikol also improves classification between grades 2 and 3 by three instances. In the case of the Bikol test data, the same finding is observed for the combined Bikol and Cebuano model using all features, where confusion in classifying grades 1 and 3 is reduced by two instances. Lastly, for Cebuano, the top-performing model in the pairwise cross-lingual setup includes Bikol data and uses all features. For this model, misclassifications in predicting grade 1 against the other two levels are reduced, and performance for predicting grade 3 is improved. We further corroborate our observations that pairwise cross-lingual models outperform singular cross-lingual models by aggregating the scores from the two setups and running a t-test. Further to the results reported in the previous section, we observe statistically significant difference at the α = 0.01 level (p = 0.003) when pairwise cross-lingual models are compared to singular cross-lingual models. Overall, our findings provide solid empirical evidence that including a closely related language in the training data for a low-resource language significantly improves performance.
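For readers who want to redo the confusion-matrix comparison sketched above, a minimal scikit-learn snippet is shown below; the label vectors are invented stand-ins for one test fold and carry no values from the paper.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Invented gold labels and predictions for one Tagalog test fold (grades 1-3).
y_true = np.array([1, 1, 2, 2, 2, 3, 3, 1, 2, 3])
pred_singular = np.array([2, 1, 1, 2, 1, 3, 2, 1, 1, 3])   # trained on TGL only
pred_pairwise = np.array([1, 1, 2, 2, 1, 3, 3, 1, 2, 3])   # trained on TGL + BCL

for name, pred in [("singular", pred_singular), ("pairwise", pred_pairwise)]:
    cm = confusion_matrix(y_true, pred, labels=[1, 2, 3])
    print(name, "confusions:", cm.sum() - np.trace(cm))  # off-diagonal mass
```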
## 5.3 Combining Specialized Cross-Lingual Features With Multilingual Neural Embeddings Achieves Sota Results

While the previous sections highlight the significant increase in performance when using traditional features with CROSSNGO as compared to mBERT embeddings only, we now discuss results and contributions when both linguistic representations are combined. As is demonstrated in Table 4, the scores obtained using the combined features applied to Tagalog and Cebuano achieve state-of-the-art results for ARA in these languages. For Tagalog, our model's accuracy of 57.692 outperforms the SVM with 57.10 accuracy and the Random Forest model with 46.70 presented in Imperial (2021). For Cebuano, our model achieves 79.710 beating the Random Forest model presented in Imperial et al. (2022) with a score of 57.485 with both models utilizing the same Cebuano dataset. Lastly, as there are no automated readability assessment models yet for Bikol, we report a baseline accuracy of 79.328, which is achieved using a model with a combination of traditional features (extended with CROSSNGO) and mBERT embeddings extracted from data in all three Philippine languages.

## 5.4 Conventional Fine-Tuning Of Mbert Underperforms For Low Resource Cross-Lingual Ara

While the main focus of our work is on using traditional machine learning models with Random Forest, we explore if the standard approach for fine-tuning LLMs such as mBERT can produce comparable performance.

![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) ![7_image_2.png](7_image_2.png) ![7_image_3.png](7_image_3.png)

| Model   | TGL   | BCL       | CEB       |
|---------|-------|-----------|-----------|
| TGL     | 0.420 | 0.500     | 0.333     |
| BCL     | 0.420 | 0.633     | 0.575     |
| CEB     | 0.520 | 0.500     | **0.697** |
| TGL+BCL | 0.440 | 0.566     | 0.469     |
| BCL+CEB | 0.400 | **0.637** | 0.666     |
| CEB+TGL | 0.480 | 0.500     | 0.590     |
| *ALL    | 0.460 | 0.633     | 0.636     |

We use the same uncased mBERT model as presented in Section 4. Table 5 shows the performance of singular, pairwise, and full cross-lingual setups formatted similarly to Table 4. These results confirm the findings of Ibañez et al. (2022), who have applied a similar setup to monolingual Tagalog ARA using a Tagalog BERT model. Judging by their results, the conventional fine-tuning approach proved to be inferior to the traditional way of extracting linguistic features from text and training a machine learning model like SVM or Random Forest. For this study, the highest-performing setups for Tagalog and Cebuano use Cebuano data only, and that for Bikol uses the combined Cebuano + Bikol datasets. None of the fine-tuned models outperform those presented in Table 4 using combinations of traditional features and CROSSNGO. While previous work in cross-lingual ARA by Lee and Vajjala (2022) and Madrazo Azpiazu and Pera (2020) achieved relatively high performance with non-closely related languages using LLMs, we obtain less promising results which we can attribute to: (a) the use of datasets of substantially smaller sizes (a total of 13,786 documents used in Azpiazu and Pera (2019) and 17,518 in Lee and Vajjala (2022) vs. only 764 in our study), and (b) lack of diverse data sources since only Wikipedia dumps were used for Tagalog and Cebuano for training the mBERT model.
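A generic sketch of the conventional fine-tuning baseline discussed here is shown below, using the Hugging Face sequence-classification head on uncased mBERT; the toy texts, the learning rate, and the number of steps are illustrative assumptions and do not reproduce the classifier-head setup reported in Table 7 of the Appendix.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Toy stand-ins for leveled short stories (labels are grade levels 1-3).
texts = ["Si Ana ay may pusa.", "Ang mga bata ay naglalaro sa parke tuwing hapon."]
levels = torch.tensor([1, 2]) - 1          # map grades 1-3 to class ids 0-2

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-uncased", num_labels=3)

batch = tokenizer(texts, truncation=True, max_length=300,
                  padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):                         # a few illustrative gradient steps
    outputs = model(**batch, labels=levels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```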
We implemented three cross-lingual setups to closely study the effects of interaction between the three languages and proposed a new feature utilizing n-gram overlap, CROSSNGO, which is specially developed for cross-lingual ARA using closely related languages. Our results show that: (a) using CROSSNGO combined with handcrafted features achieves significantly higher performance than using mBERT embeddings, (b) the inclusion of another closely related Philippine language reduces model confusion, and (c) using the conventional fine-tuning for LLMs like mBERT in this setup still does not outperform models with traditional features. Consequently, we come to the conclusion that using languages with high intelligibility is more suited for cross-lingual ARA. This is demonstrated in experiments with English added as an example of a non-related language, in which we do not achieve a substantial increase in performances for Tagalog, Cebuano, and Bikol. Our results agree with the findings of previous studies in cross-lingual ARA such as those of Madrazo Azpiazu and Pera (2020) using English, Spanish, Basque, Italian, French, Catalan, and Weiss et al. (2021) using English and German, that also showed that the inclusion of additional language data can improve ARA results on other languages. However, our work is primarily motivated by the degree of language relatedness: we show that better results can be achieved for ARA in low-resource languages if we use closely related languages rather than any language, including nonrelated ones like English. Our study also provides an encouragement for researchers to consider approaches grounded in linguistic theories which can potentially be used to improve the performance in NLP tasks rather than always resorting to models that are expensive to train and hard to interpret. ## 7 Limitations We discuss some limitations of our current work which can be further explored in the future. On Data Format. We specifically use fictional short stories as our primary data for the study since we require gold standard labels for this document classification task. Moreover, fictional short stories are easier to find as they often come with a specified grade level compared to other types of literary texts such as magazines or web articles written in any of the three Philippine languages. We do not claim that our models are able to generalize on these other types of literary materials or on other types of closely related language pairs unless a full study is conducted which is outside the scope of this work. On Handcrafted Features. We were only able to use traditional handcrafted features covering countbased predictors such as sentence or word count and syllable pattern-based features for training the Random Forest models. We did not extract other feature sets one may find in the previous work on English such as lexical density or discourse-based features since such features require NLP tools that are able to extract POS, named entities, relations, and discourse patterns that do not yet exist for all three Philippine languages used in this study. The work of Imperial and Ong (2021b) covered a small set of lexical features such as *type–token ratio* and compound word density for readability assessment in Tagalog. Still, we cannot use this approach since all languages would need to have the same number of features as is a standard practice in model training. On Model Training. 
Our choice of the Random Forest algorithm for training the ARA models is based on the substantial amount of previous work supporting the application of this method to low-resource ARA, e.g., to Tagalog and Cebuano in a monolingual setup (Imperial and Ong, 2020, 2021a; Imperial, 2021; Imperial et al., 2022), where it achieved better results than other algorithms such as SVM or Logistic Regression. One can consider these algorithms for comparison but the analysis of each ARA model trained with various algorithms to the same level of depth and focus that we have given to the Random Forest classifier in the present study would require a considerable amount of time as well as a higher page limit. On Current Measures of Mutual Intelligibility. The majority of existing literature in linguistics, specifically on the topic of mutual intelligibility in Philippine languages, discusses examples in the context of speech communication. As such, one might claim that Cebuano and Tagalog are not mutually intelligible by giving an example where a Tagalog speaker may not fully comprehend (or only recognize a few common words) another speaker if they are talking in Cebuano. While this is certainly true, in this study, we specifically focus on the mutual intelligibility of languages at a word and character level via written texts such as children's fiction books. From this, we see a substantial degree of *closeness* between Tagalog, Cebuano, and Bikol compared to English. Thus, based on our results, we posit that mutual intelligibility may be used as an additional feature (see CROSSNGO in Section 4) for text-based tasks such as readability assessment. We leave the exploration of our proposed novel feature in the speech communication ## 8 Ethical Considerations We foresee no ethical issues related to the study. ## Acknowledgements We thank the anonymous reviewers and area chairs for their constructive and helpful feedback. We also thank the communities and organizations behind the creation of open-source datasets in Philippine languages used in this research: DepED, Adarna House, Bloom Library, Let's Read Asia, SIL, and BookLabs. JMI is supported by the UKRI CDT in Accountable, Responsible, and Transparent AI of the University of Bath and by the Study Grant Program of the National University Philippines. ## References Ion Madrazo Azpiazu and Maria Soledad Pera. 2019. Multiattentive recurrent neural network architecture for multilingual readability assessment. Transactions of the Association for Computational Linguistics, 7:421–436. Nathanaël Beau and Benoit Crabbé. 2022. The impact of lexical and grammatical processing on generating code from natural language. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 2204–2214, Dublin, Ireland. Association for Computational Linguistics. Vincent Beaufils and Johannes Tomin. 2020. Stochastic approach to worldwide language classification: the signals and the noise towards long-range exploration. SocArXiv. Leonard Bloomfield. 1926. A set of postulates for the science of language. *Language*, 2(3):153–164. Charibeth Cheng, Cedric Paul Alberto, Ian Anthony Chan, and Vazir Joshua Querol. 2007. SpellChef: spelling checker and corrector for Filipino. Journal of Research in Science, Computing and Engineering, 4(3):75–82. Ernesto A Constantino. 1998. Current topics in Philippine linguistics. In Revised version of the paper read at the meeting of the Linguistic Society of Japan held in Yamaguchi University, Yamaguchi, Japan, on 31 0ctober. 
Angelica Dela Cruz, Nathaniel Oco, Leif Romeritch Syliongka, and Rachel Edita Roxas. 2016. Phoneme inventory, trigrams and geographic location as features for clustering different philippine languages. In 2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA), pages 137–140. IEEE. Tovly Deutsch, Masoud Jasbi, and Stuart Shieber. 2020. Linguistic features for readability assessment. In *Proceedings of the Fifteenth Workshop on Innovative Use* of NLP for Building Educational Applications, pages 1–17, Seattle, WA, USA → Online. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Michael Flor, Beata Beigman Klebanov, and Kathleen M. Sheehan. 2013. Lexical tightness and text complexity. In *Proceedings of the Workshop on Natural Language Processing for Improving Textual Accessibility*, pages 29–38, Atlanta, Georgia. Association for Computational Linguistics. Thomas François and Cédrick Fairon. 2012. An "AI readability" formula for French as a foreign language. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 466–477. Matthew Phillip Go and Nicco Nocon. 2017. Using Stanford part-of-speech tagger for the morphologically-rich Filipino language. In Proceedings of the 31st Pacific Asia Conference on Language, Information and Computation, pages 81–88. The National University (Phillippines). Matthew Phillip Go, Nicco Nocon, and Allan Borra. 2017. Gramatika: A grammar checker for the lowresourced Filipino language. In TENCON 2017-2017 IEEE Region 10 Conference, pages 471–475. IEEE. Julia Hancke, Sowmya Vajjala, and Detmar Meurers. 2012. Readability classification for German using lexical, syntactic, and morphological features. In Proceedings of COLING 2012, pages 1063–1080, Mumbai, India. The COLING 2012 Organizing Committee. Michael Heilman, Kevyn Collins-Thompson, and Maxine Eskenazi. 2008. An analysis of statistical models and features for reading difficulty prediction. In *Proceedings of the Third Workshop on Innovative Use* of NLP for Building Educational Applications, pages 71–79, Columbus, Ohio. Association for Computational Linguistics. Michael Ibañez, Lloyd Lois Antonie Reyes, Ranz Sapinit, Mohammed Ahmed Hussien, and Joseph Marvin Imperial. 2022. On Applicability of Neural Language Models for Readability Assessment in Filipino. In International Conference on Artificial Intelligence in Education, pages 573–576. Springer. Joseph Marvin Imperial. 2021. BERT embeddings for automatic readability assessment. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 611–618, Held Online. INCOMA Ltd. Joseph Marvin Imperial and Ethel Ong. 2020. Exploring hybrid linguistic feature sets to measure Filipino text readability. In *2020 International Conference on* Asian Language Processing (IALP), pages 175–180. IEEE. Joseph Marvin Imperial and Ethel Ong. 2021a. 
Diverse linguistic features for assessing reading difficulty of educational Filipino texts. *arXiv preprint* arXiv:2108.00241. Joseph Marvin Imperial and Ethel Ong. 2021b. Under the microscope: Interpreting readability assessment models for Filipino. In *Proceedings of the 35th Pacific Asia Conference on Language, Information and* Computation, pages 1–10, Shanghai, China. Association for Computational Lingustics. Joseph Marvin Imperial, Lloyd Lois Antonie Reyes, Michael Antonio Ibanez, Ranz Sapinit, and Mohammed Hussien. 2022. A baseline readability model for Cebuano. In Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022), pages 27–32, Seattle, Washington. Association for Computational Linguistics. Joseph Marvin Imperial, Rachel Edita Roxas, Erica Mae Campos, Jemelee Oandasan, Reyniel Caraballo, Ferry Winsley Sabdani, and Ani Rosa Almaroi. 2019. Developing a machine learning-based grade level classifier for Filipino children's literature. In 2019 International Conference on Asian Language Processing (IALP), pages 413–418. IEEE. Zahurul Islam, Alexander Mehler, and Rashedur Rahman. 2012. Text readability classification of textbooks of a low-resource language. In Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation, pages 545–553. Zahurul Islam and Rashedur Rahman. 2014. Readability of Bangla news articles for children. In *Proceedings of the 28th Pacific Asia Conference on Language,* Information and Computing, pages 309–317. Justin Lee and Sowmya Vajjala. 2022. A neural pairwise ranking model for readability assessment. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3802–3813, Dublin, Ireland. Association for Computational Linguistics. Ion Madrazo Azpiazu and Maria Soledad Pera. 2020. Is cross-lingual readability assessment possible? *Journal of the Association for Information Science and* Technology, 71(6):644–656. Curtis D McFarland. 2004. The Philippine language situation. *World Englishes*, 23(1):59–75. Nathaniel Oco, Joel Ilao, Rachel Edita Roxas, and Leif Romeritch Syliongka. 2013a. Measuring language similarity using trigrams: Limitations of language identification. In 2013 International Conference on Recent Trends in Information Technology (ICRTIT), pages 478–481. IEEE. Nathaniel Oco, Leif Romeritch Syliongka, Rachel Edita Roxas, and Joel Ilao. 2013b. Dice's coefficient on trigram profiles as metric for language similarity. In 2013 International Conference Oriental COCOSDA held jointly with 2013 Conference on Asian Spoken Language Research and Evaluation (O-COCOSDA/CASLRE), pages 1–4. IEEE. Manolito Octaviano and Allan Borra. 2017. A spell checker for a low-resourced and morphologically rich language. In *TENCON 2017-2017 IEEE Region 10* Conference, pages 1853–1856. IEEE. Ildikó Pilán, Sowmya Vajjala, and Elena Volodina. 2016. A readable read: Automatic assessment of language learning materials based on linguistic complexity. arXiv preprint arXiv:1603.08868. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Robert Reynolds. 2016. 
Insights from Russian second language readability classification: complexitydependent training requirements, and feature evaluation of multiple categories. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications, pages 289–300, San Diego, CA. Association for Computational Linguistics. Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. *Transactions of the Association* for Computational Linguistics, 8:842–866. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593– 4601, Florence, Italy. Association for Computational Linguistics. Sowmya Vajjala and Ivana Luciˇ c. 2018. ´ OneStopEnglish corpus: A new corpus for automatic readability assessment and text simplification. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 297–304, New Orleans, Louisiana. Association for Computational Linguistics. Sowmya Vajjala and Detmar Meurers. 2012. On improving the accuracy of readability classification using insights from second language acquisition. In Proceedings of the seventh workshop on building educational applications using NLP, pages 163–173. Sowmya Vajjala and Taraka Rama. 2018. Experiments with universal CEFR classification. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 147–153, New Orleans, Louisiana. Association for Computational Linguistics. Charles Walton. 1979. A Philippine language tree. *Anthropological linguistics*, 21(2):70–98. William Webber, Alistair Moffat, and Justin Zobel. 2010. A similarity measure for indefinite rankings. ACM Transactions on Information Systems (TOIS), 28(4):1– 38. Zarah Weiss, Xiaobin Chen, and Detmar Meurers. 2021. Using broad linguistic complexity modeling for crosslingual readability assessment. In *Proceedings of* the 10th Workshop on NLP for Computer Assisted Language Learning, pages 38–54, Online. LiU Electronic Press. Zarah Weiss and Detmar Meurers. 2022. Assessing sentence readability for German language learners with broad linguistic modeling or readability formulas: When do linguistic insights make a difference? In *Proceedings of the 17th Workshop on Innovative* Use of NLP for Building Educational Applications (BEA 2022), pages 141–153, Seattle, Washington. Association for Computational Linguistics. Ian H Witten, Eibe Frank, Leonard E Trigg, Mark A Hall, Geoffrey Holmes, and Sally Jo Cunningham. 1999. Weka: Practical machine learning tools and techniques with Java implementations. Menglin Xia, Ekaterina Kochmar, and Ted Briscoe. 2016. Text readability assessment for second language learners. In *Proceedings of the 11th Workshop* on Innovative Use of NLP for Building Educational Applications, pages 12–22, San Diego, CA. Association for Computational Linguistics. ## A Appendix | Hyperparameter | Value | |------------------|---------------------------| | batchSize | 100 | | bagSizePercent | 100 | | maxDepth | unlimited | | numIterations | 100 | | numFeatures | int(log(#predictors) + 1) | | seed | 1 | Table 6: Hyperparameter settings for the Random Forest algorithm used for training the models in WEKA. These are default values and the 3.8.6 version of WEKA would have these already preset. 
| Hyperparameter | Value | |------------------|-------------------------| | max seq length | 300 | | batch size | 8 | | dropout | 0.01 | | optimizer | Adam | | activation | ReLu | | layer count | 1 (768 x 256) | | loss | Negative Log Likelihood | | learning rate | 0.002 | | epochs | 50 | Table 7: Hyperparameter settings for the mBERT model used for fine-tuning. Please refer to Ibañez et al. (2022) for more information on these values. | TGL | CEB | BCL | | | | |--------|-------|--------|-------|--------|-------| | bigram | count | bigram | count | bigram | count | | ng | 43215 | an | 15636 | an | 12562 | | an | 39268 | ng | 14451 | na | 7315 | | na | 22041 | sa | 8311 | ng | 6754 | | in | 18449 | na | 7167 | in | 6138 | | ma | 16501 | ga | 6714 | sa | 5753 | | sa | 16037 | ka | 5951 | ka | 5176 | | la | 15283 | la | 5638 | ag | 4558 | | ka | 14263 | ma | 4889 | ma | 4452 | | ag | 12386 | ni | 4701 | on | 3490 | | at | 12380 | ta | 4692 | ga | 3462 | | pa | 12171 | in | 4591 | pa | 3453 | | al | 11521 | pa | 4333 | ni | 3416 | | ga | 10818 | ag | 4247 | ak | 3291 | | ay | 10771 | on | 4113 | ar | 3012 | | ak | 10271 | ay | 3799 | si | 2957 | | ni | 9814 | si | 3636 | da | 2920 | | ta | 9738 | ya | 3603 | ya | 2886 | | si | 9126 | al | 3406 | ta | 2796 | | ya | 8724 | at | 3150 | la | 2676 | | on | 8288 | ba | 3099 | al | 2658 | | ba | 7402 | ak | 3062 | ba | 2613 | | it | 7288 | ha | 2729 | ra | 2518 | | am | 6667 | iy | 2634 | as | 2447 | | iy | 6339 | ug | 2531 | at | 2315 | | as | 6210 | il | 2511 | ay | 2187 | | ko | 5928 | un | 2502 | ab | 1893 | | ha | 5885 | gi | 2460 | ai | 1843 | | il | 5857 | li | 2413 | ko | 1840 | | ar | 5848 | am | 2327 | ha | 1763 | | li | 5696 | ah | 2251 | li | 1697 | | ap | 5190 | it | 2059 | ad | 1679 | | ab | 5000 | ad | 1834 | ro | 1574 | | ra | 4867 | as | 1801 | am | 1544 | | da | 4777 | da | 1793 | un | 1316 | | aw | 4598 | us | 1781 | ti | 1293 | | ti | 4577 | ko | 1771 | nd | 1202 | | wa | 4572 | to | 1770 | ap | 1172 | | ah | 4410 | aw | 1767 | mg | 1165 | | um | 4391 | ab | 1690 | ah | 1164 | | bi | 4382 | yo | 1667 | it | 1160 | | is | 4286 | ki | 1615 | bi | 1146 | | to | 4248 | hi | 1589 | ku | 1140 | | mi | 4179 | ap | 1516 | aw | 1139 | | un | 4168 | mg | 1504 | wa | 1086 | | TGL | CEB | BCL | | | | |---------|-------|---------|-------|---------|-------| | trigram | count | trigram | count | trigram | count | | ang | 22650 | ang | 7941 | ang | 3350 | | ala | 6120 | nga | 3283 | nag | 1721 | | ing | 5456 | iya | 2547 | kan | 1518 | | ong | 5036 | ing | 1697 | aka | 1507 | | iya | 4761 | ala | 1534 | ing | 1434 | | lan | 3880 | mga | 1479 | nin | 1389 | | ina | 3481 | ila | 1474 | ong | 1374 | | aka | 3266 | ana | 1395 | ara | 1210 | | nan | 3151 | lan | 1317 | mga | 1164 | | ama | 3021 | ong | 1315 | man | 1103 | | ara | 3007 | ata | 1306 | yan | 979 | | ata | 2976 | usa | 1286 | sin | 947 | | ila | 2965 | tan | 1276 | ala | 940 | | mga | 2867 | yan | 1172 | iya | 928 | | nag | 2797 | han | 1139 | asi | 897 | | niy | 2795 | ali | 1061 | sai | 853 | | pag | 2793 | nag | 1043 | aba | 835 | | yan | 2757 | pag | 982 | ina | 833 | | apa | 2716 | aka | 975 | aga | 824 | | aga | 2694 | ayo | 933 | ini | 816 | | ali | 2622 | aha | 931 | mag | 812 | | man | 2574 | nan | 928 | aro | 730 | | aha | 2450 | siy | 916 | ako | 730 | | uma | 2412 | ako | 868 | gan | 718 | | aki | 2376 | pan | 863 | par | 705 | | nga | 2281 | ama | 847 | nbs | 702 | | mag | 2269 | man | 831 | bsp | 702 | | aba | 2253 | ini | 830 | ata | 683 | | awa | 2249 | ita | 827 | nga | 
683 | | kan | 2219 | una | 811 | pag | 639 | | tin | 2208 | ina | 763 | ati | 605 | | asa | 2142 | aba | 758 | lan | 582 | | ako | 2130 | kin | 744 | ion | 576 | | hin | 2119 | nak | 727 | nda | 574 | | ito | 2033 | ung | 718 | lin | 569 | | aya | 2000 | kan | 716 | sak | 567 | | ana | 1993 | san | 700 | ano | 553 | | gan | 1973 | nah | 700 | ban | 547 | | ami | 1934 | ngo | 679 | ind | 538 | | san | 1913 | kat | 675 | ron | 530 | | nak | 1896 | gan | 665 | apa | 527 | | abi | 1878 | ula | 636 | ana | 526 | | tan | 1844 | ano | 626 | ili | 524 | | siy | 1835 | uot | 611 | ent | 508 | | ani | 1773 | ahi | 605 | ada | 502 | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** We Used Weka As Discussed In Section 4. ✓ B1. Did you cite the creators of artifacts you used? Section 4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 3 for the collected Philippine language data and Section 4 for WEKA B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The dataset is not scraped from social media sites. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 2-3 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4 ## C ✓ **Did You Run Computational Experiments?** Left Blank. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 ✓ C4. 
If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
zhou-etal-2023-towards-robust
Towards Robust Ranker for Text Retrieval
https://aclanthology.org/2023.findings-acl.332
A neural ranker plays an indispensable role in the de facto 'retrieval & rerank' pipeline, but its training still lags behind due to the weak negative mining during contrastive learning. Compared to retrievers boosted by self-adversarial (i.e., in-distribution) negative mining, the ranker's heavy structure suffers from query-document combinatorial explosions, so it can only resort to the negative sampled by the fast yet out-of-distribution retriever. Thereby, the moderate negatives compose ineffective contrastive learning samples, becoming the main barrier to learning a robust ranker. To alleviate this, we propose a multi-adversarial training strategy that leverages multiple retrievers as generators to challenge a ranker, where i) diverse hard negatives from a joint distribution are prone to fool the ranker for more effective adversarial learning and ii) involving extensive out-of-distribution label noises renders the ranker against each noise distribution, leading to more challenging and robust contrastive learning. To evaluate our robust ranker (dubbed R2anker), we conduct experiments in various settings on the passage retrieval benchmarks, including BM25-reranking, full-ranking, retriever distillation, etc. The empirical results verify the new state-of-the-art effectiveness of our model.
# Towards Robust Ranker For Text Retrieval Yucheng Zhou1∗, Tao Shen1, Xiubo Geng2, Chongyang Tao2**, Can Xu**2, Guodong Long1, Binxing Jiao2, **Daxin Jiang**2† 1Australian AI Institute, School of CS, FEIT, University of Technology Sydney 2Microsoft [email protected], {tao.shen, guodong.long}@uts.edu.au {xigeng,chongyang.tao,can.xu,binxjia,djiang}@microsoft.com ## Abstract A neural ranker plays an indispensable role in the de facto 'retrieval & rerank' pipeline, but its training still lags behind due to the weak negative mining during contrastive learning. Compared to retrievers boosted by selfadversarial (i.e., in-distribution) negative mining, the ranker's heavy structure suffers from query-document combinatorial explosions, so it can only resort to the negative sampled by the fast yet out-of-distribution retriever. Thereby, the moderate negatives compose ineffective contrastive learning samples, becoming the main barrier to learning a robust ranker. To alleviate this, we propose a multi-adversarial training strategy that leverages multiple retrievers as generators to challenge a ranker, where i) diverse hard negatives from a joint distribution are prone to fool the ranker for more effective adversarial learning and ii) involving extensive out-of-distribution label noises renders the ranker against each noise distribution, leading to more challenging and robust contrastive learning. To evaluate our robust ranker (dubbed R 2ANKER), we conduct experiments in various settings on the passage retrieval benchmarks, including BM25-reranking, full-ranking, retriever distillation, etc. The empirical results verify the new state-of-the-art effectiveness of our model. ## 1 Introduction Text retrieval plays a crucial role in many applications, such as web search (Brickley et al., 2019) and recommendation (Zhang et al., 2019). Given a text query, it aims to retrieve all relevant documents from a large-scale collection1(Qu et al., 2021; Gao and Callan, 2022). For a better efficiencyeffectiveness trade-off, the text retrieval *de facto* paradigm relies on a 'retrieval & rerank' pipeline ∗Work is done during internship at Microsoft. †Corresponding author. 1while each collection entry could be a sentence, passage, document, etc., we adopt *document* for a clear demonstration. ![0_image_0.png](0_image_0.png) (Guo et al., 2022). That is, 'retrieval' is to use a fast retriever to fetch a set of top document candidates given a query, while 'rerank' is to re-calculate the relevance of the query to each candidate by a heavy yet effective ranker for better results. Differing from most natural language understanding (NLU) tasks defined as categorical classification (Zhang et al., 2015), training retrieval models, including the retriever and the ranker, are usually formulated as a contrastive learning problem. However, there are merely positive querydocument pairs provided in most applications, regardless of negative samples. Hence, a critical prerequisite of the training is to sample negative documents from the collection for training queries. As random sampling is prone to mine trivial negatives and proven less effective in training (a.k.a. in-batch negatives), a primary sampling method was proposed to leverage BM25 (Karpukhin et al., 2020) to fetch relatively challenging negatives for more effective training. 
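For concreteness, this BM25-based negative mining step can be sketched as below. The `rank_bm25` package, whitespace tokenization, and the toy corpus are illustrative assumptions rather than details taken from the works cited above.

```python
from rank_bm25 import BM25Okapi  # assumed off-the-shelf BM25 implementation

# Toy collection and one annotated training query (placeholders, not real data).
collection = [
    "contrastive learning trains retrievers with positive and negative passages",
    "bm25 is a classic term-based ranking function for text retrieval",
    "dense retrievers encode queries and documents into vectors",
    "rerankers score each query-document pair with a cross-encoder",
]
query = "how are negatives mined for training a retriever"
positive_ids = {0}  # passage(s) annotated as relevant for this query

bm25 = BM25Okapi([doc.split() for doc in collection])
scores = bm25.get_scores(query.split())

# Rank the collection by BM25 score and keep the top non-positive passages
# as relatively challenging ("hard") negatives for contrastive training.
ranked = sorted(range(len(collection)), key=lambda i: scores[i], reverse=True)
hard_negatives = [i for i in ranked if i not in positive_ids][:2]
print(hard_negatives)
```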
In contrast to such an *out-of-distribution* negative sampling technique where the negatives mined by one retriever are used to train another, recent advanced negative mining methods resort to *in-distribution* sampling technique that leverages the retriever being trained to obtain the challenging-so-far negative documents from the collection. It has been proven in-distribution sampling is superior to outof-distribution one as the former offers more modelspecific contrastive learning samples towards the nuance among the positive and negatives for a given query (Qu et al., 2021; Ren et al., 2021b). Nonetheless, a necessary condition of such an in-distribution sampling technique is the efficiency of the target model in large-scale retrieval. Unfortunately, compared to the bi-encoder based retriever that satisfies the efficiency requirement of largescale retrieval, a cross-encoder based ranker suffers from combinatorial explosion brought by applying the heavy cross-encoder (e.g., the Transformer encoder) to every query-document concatenation. Thereby, the training of ranker can depend merely on out-of-distribution negative sampling technique - either by the BM25 retriever (Nogueira and Cho, 2019) or a trainable semantic retriever (Ren et al., 2021b) - leading to sub-optimal ranker due to a lack of adversarial training samples towards the expressively powerful cross-encoder. In this paper, we aim to train a robust ranker by mining more challenging negatives and thus more effective contrastive samples. To this end, we propose a simple yet effective multi-adversarial training framework towards a robust ranker (R 2ANKER), where multiple retrievers as generators are integrated to mine diverse hard negatives and challenge a single ranker as the discriminator. As such, R 2ANKER has certain merits regarding the robustness of its model training. First, intuitively, sampling negative over a joint distribution of various retrievers is more likely to offer more challenging hard negatives, which compensates the weakness of the previous single-retriever generator and makes the adversarial learning more robust. Second, as the false negatives are closely subject to the relevance distribution over the collection by a specific retriever, various negative generators achieved by different retrievers are prone to sample out-of-distribution or open-set label noise (Wei et al., 2021) to each other. In light of '*insufficient capacity*' assumption (Arpit et al., 2017), such open-set noise has been proven effective in improving robustness (Wei et al., 2021) when learning a ranker with open-set noises. In experimentswe adopt several passage benchmark datasets (Nguyen et al., 2016) to evaluate our proposed model in various settings. Specifically, our method achieves new state-of-the-art performance on BM25 reranking and full-ranking on passage retrieval. Meantime, to verify the expressive power of our ranker model, we conduct an experiment to distill our well-trained model to a retriever, which shows state-of-the-art first-stage in terms of passage retrieval performance. Moreover, our extensive analyses unveil the essence regarding negative distributions to reach a robust ranker and also compare with negatives sampled by redistribution. ## 2 Related Work Ranker for Information Retrieval. To achieve both efficiency and effectiveness, a de facto pipeline for large-scale retrieval is 'retrieval & rerank' (Guo et al., 2022). 
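A bare-bones sketch of that two-stage pipeline is given below. The dot-product retriever over random document vectors and the token-overlap "cross-encoder" are placeholders standing in for real models, not components of any system discussed here.

```python
import numpy as np

def retrieve(query_vec, doc_matrix, k=1000):
    """Stage 1: bi-encoder style retrieval via a lightweight dot-product metric."""
    scores = doc_matrix @ query_vec            # one score per document
    return np.argsort(-scores)[:k]             # indices of the top-k candidates

def rerank(query, candidates, docs, cross_encoder_score):
    """Stage 2: re-score each (query, candidate) pair with a heavier scorer."""
    rescored = [(doc_id, cross_encoder_score(query, docs[doc_id])) for doc_id in candidates]
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

# Toy stand-ins: random document embeddings and a dummy pairwise scorer.
rng = np.random.default_rng(0)
docs = [f"passage {i}" for i in range(10_000)]
doc_matrix = rng.normal(size=(len(docs), 768))
query_vec = rng.normal(size=768)
dummy_cross_encoder = lambda q, d: float(len(set(q.split()) & set(d.split())))

top_candidates = retrieve(query_vec, doc_matrix, k=50)
reranked = rerank("passage 7", top_candidates, docs, dummy_cross_encoder)
print(reranked[:3])
```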
The 'retrieval' is to use a bi-encoder based retriever to encode queries and documents into dense representations and fetch out candidate documents relevant to the query through a lightweight metric (Gao and Callan, 2021). The 'rerank' aims to conduct a more accurate ranking on pairs of query and candidate documents by a crossencoder based ranker (Ren et al., 2021b). Thereby, the ranker is a crucial part of the pipeline and directly affects the final performance of passage or document retrieval (Ren et al., 2021b; Zhou et al., 2022). In addition, rankers are currently widely used as a teacher in retriever training. The scores derived by the ranker are demonstrated that they can guide the retriever learning through knowledge distillation (Ren et al., 2021b; Zhang et al., 2022a). Moreover, rankers can be used to filter out topretrieved documents that are likely to be false negatives (Qu et al., 2021). Therefore, the ranker not only directly affects the final performance of information retrieval but also can improve the performance of the retriever through knowledge distillation and false negative filtering. In this work, we propose a simple yet effective multi-adversarial training framework toward a robust ranker. Ranker Training. Using negatives to train a ranker has proven effective in many works (Zhang et al., 2022a; Ren et al., 2021b). Since the ranker is based on the cross-encoder structure, a query and document need to be concatenated and passed to the ranker for relevance calculation. However, directly using the ranker to sample negatives on the collection suffers from combinatorial explosion. Therefore, many methods (Khattab and Zaharia, 2020; Qu et al., 2021) adopt the static hard negatives sampled from a retriever, which are fixed during ranker training. In addition, some methods (Zhang et al., 2022a; Ren et al., 2021b) introduce a joint training approach for dense passage retrieval and passage reranking, which dynamically update both the parameters of the ranker and the retriever. Nevertheless, these methods can depend merely on out-of-distribution negative sampling techniques leading to sub-optimal rankers due to a lack of adversarial training samples towards the expressively powerful cross-encoder. Therefore, we integrate multiple retrievers regarded as generators to mine diverse hard negatives and challenge a single ranker as the discriminator. Hard Negative Mining. Hard negative mining (Khattab and Zaharia, 2020; Zhang et al., 2022a; Qu et al., 2021) has been proven very effective in contrastive learning for text representation of retrievers. In contrast to random or in-batch negative sampling, it can find more challenging negatives for a pair of an anchor (i.e., query) and its positive example. They compose effective contrastive samples to help models learn against contextual nuance between the positive and negatives. At early stage, a large number of works employ the off-theshelf BM25 retriever to fetch negative from a large collection (Karpukhin et al., 2020), which greatly boosts the retrievers. Furthermore, recent works (Gao and Callan, 2021, 2022) leverage a retriever to sample retriever-specific hard negatives for each query, which are considered the most challenging negatives. In this study, we sample negatives over a joint distribution of various retrievers, which is likely to offer more challenging hard negatives. ## 3 R2**Anker: Robust Ranker** Task Formulation. 
Given a text query q, a ranker model, K(*q, d*), is responsible for calculating a relevance score between q and an arbitrary document d from a large-scale collection D (i.e., d ∈ D). It usually serves as a downstream module for an efficient retriever, R, to compose a 'retrieval & rerank' pipeline, where a lightweight retriever R (e.g., biencoder) is to retrieve top candidates and then a relatively heavy-structured K (says cross-encoder (Devlin et al., 2019)) to make the results better. ## 3.1 **Contrastive Learning For Retrieval Model** Formally, the ranker K(*q, d*) is usually built upon a deep Transformer encoder for dense interactions in a pair of query and document (so called crossencoder, or one-stream encoder), i.e., $$\begin{array}{l}{{\mathcal{K}(q,d):=s^{(\mathrm{ce})}=}}\\ {{\mathrm{~Transfm-Enc([CLS]_{}q[SEP]_{}d[SEP]_{};\theta^{(\mathrm{ce})}).}}}\end{array}$$ As each text query q must be concatenated with its every candidate document d to pass into the heavy Transformer encoder, it is impossible in terms of computation overheads to apply a ranker to largescale retrieval (i.e., millions to billions of candidates). In contrast, a retriever R(*q, d*) is usually defined as a bi-encoder (a.k.a. dual-encoder, twostream encoder, and Siamese encoder) to derive counterpart-agnostic representation vectors, i.e., $$\begin{array}{l}{{\mathcal{R}(q,d):=s^{\mathrm{(bi)}}=<\mathbf{u},\mathbf{v}>,\mathrm{~where,}}}\\ {{\mathbf{u}=\mathrm{Transfm-Enc}([\texttt{CLS}]q[\texttt{SEP}];\theta^{(b e)}),}}\\ {{\mathbf{v}=\mathrm{Transfm-Enc}([\texttt{CLS}]d[\texttt{SEP}];\theta^{(b e)}).}}\end{array}$$ Here, < ·, · > denotes a lightweight relevance metric, e.g., dot-product and cosine similarity. As such, all the documents in D can be independently embedded and used for large-scale retrieval via the fast relevance metric. Despite heterogeneous neural structures, training the retrieval models, i.e., the ranker in Eq.(1) and the retriever in Eq.(2), are both formulated as a contrastive learning problem. However, only positive document(s), d q +, is provided for each training query q ∈ Q(trn), regardless of its negative ones, i.e., N q = {d q −}, for contrastive learning. Note that if no confusion arises, we omit the subscript q indicating a specific q for clear writing. Therefore, to train a retrieval model, a prerequisite is determining a negative sampling strategy to make the training procedure more effective, i.e., $$\mathbb{N}=\{d|d\sim P(\mathbb{D}\backslash\{d_{+}\}|q;\theta^{(\mathrm{smp})})\},\qquad(3)$$ where P denote a probability distribution over D, which can be either non-parametric (i.e., θ (smp) = ∅) or parametric (i.e., θ (smp) ̸= ∅). Then, we take the ranker training for a demonstration: it calculates a probability distribution over {d+} ∪ N, i.e., $$P(\mathbf{d}|q,\{d_{+}\}\cup\mathbb{N};\theta^{(\mathrm{ce})})={\frac{\exp({\mathcal{K}}(q,d))}{\sum_{d^{\prime}\in\{d_{+}\}\cup\mathbb{N}}\exp({\mathcal{K}}(q,d^{\prime}))}}.\tag{4}$$ Lastly, the ranker is trained via a contrastive learning objective, whose training loss is defined as $$L{=}{-}{{\sum}_{q,d_{+}}}\log P(\mathrm{d}{=}d_{+}|q,\{d_{+}\}\cup\mathbb{N};\theta^{(\mathrm{ce})}).\quad(5)$$ ## 3.2 Multi-Adversarial Ranker Training A large amount of previous works (Qu et al., 2021; Ren et al., 2021b) have proven that the quality of negative mining strategy significantly affects the performance of contrastive learning. 
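For reference, the objective in Eqs. (4)-(5) amounts to a softmax over the positive and its sampled negatives followed by the negative log-likelihood of the positive, which the short PyTorch sketch below illustrates. The score tensor is synthetic, and the cross-encoder that would produce it is not shown.

```python
import torch
import torch.nn.functional as F

def ranker_contrastive_loss(scores: torch.Tensor) -> torch.Tensor:
    """scores: [batch, 1 + num_negatives] cross-encoder outputs K(q, d);
    column 0 holds the positive document, the rest hold sampled negatives."""
    targets = torch.zeros(scores.size(0), dtype=torch.long, device=scores.device)
    # cross_entropy = softmax over {d+} U N (Eq. 4) plus NLL of the positive (Eq. 5)
    return F.cross_entropy(scores, targets)

# Toy usage: a batch of 2 queries, each with 1 positive and 3 sampled negatives.
scores = torch.tensor([[2.3, 0.1, -0.4, 0.7],
                       [1.1, 0.9, 0.2, -1.0]])
print(ranker_contrastive_loss(scores))
```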
As exhaustive training (i.e., N = D\{d+}) is infeasible in practice, how to train the model effectively with limited computation resources remains an open question. Instead of random sampling, i.e., N (rdm) = {d|d ∼ Uniform(D\{d+}|q)}, a recent trend is to leverage a retrieval model, especially a retriever R(·, ·), to fetch the model-specific top-challenging negatives to train the retrieval model itself. This strategy is also known as self-adversarial training or hard negative mining (Zhang et al., 2022a; Qu et al., 2021). Formally, such a self-adversarial training technique to sample in-distribution negatives to train a retrieval model (i.e., θ (smp) in Eq.(3) equaling to θ (ce) of K) can be written as $$J=\max_{\theta^{(*)}}\mathbb{E}_{\mathbb{N}^{(*)}}=\{d|d\sim P(d|q,\mathbb{D}\setminus\{d_{+}\};\theta^{(*)})\}\big{[}$$ $$\log P(d=d_{+}|q,\{d_{+}\}\cup\mathbb{N}^{(*)};\theta^{(*)})\big{]},\tag{6}$$ where θ (*) parameterizes a retrieval model. Despite efficacy, this self-adversarial technique cannot be applied to our targeted ranker training as it depends on the retrieval model's capability of large-scale retrieval, i.e., feasibility of calculating P(d|q, D\{d+}; θ (*)) in Eq.(7) where D is huge. This is because K as a sampler over P(d|q, D\{d+}; θ (ce)) suffers from a combinatorial explosion problem brought from the cross-encoder, leading to intractable computation overheads. Practically, θ (smp) is must as efficient as possible to circumvent the problem, which could be a heuristic strategy (e.g., uniform sampling), lightweight termbased retriever (e.g., BM25), or later-interaction representation models (e.g., Siamese encoder). As a remedy, the ranker training can resort to adversarial training (Zhang et al., 2022a), where an efficient retriever is used to sample top-hard outof-distribution negatives for challenging the ranker. This can be formally written as $$J^{\mathcal{R}^{*},\mathcal{K}^{*}}=\min_{\theta^{\rm(be)}}\max_{\theta^{\rm(ce)}}$$ $$\mathbb{E}_{\mathbb{N}^{\rm(be)}=\{d|d\sim P(d|q,\mathbb{D}\backslash\{d_{+}\};\theta^{\rm(be)})\}}\big{[}$$ $$\log P(d=d_{+}|q,\{d_{+}\}\cup\mathbb{N}^{\rm(be)};\theta^{\rm(ce)})\big{]},\tag{7}$$ where the θ (be)-parameterized R can be either a frozen and well-trained (Qu et al., 2021) or a jointly optimized (Zhang et al., 2022a) retriever. Although learning from (adversarial) hard negatives has been proven effective to obtain a highperforming ranker (Ren et al., 2021b; Zhang et al., 2022a), a single retriever R, even well-trained with various advanced techniques (Qu et al., 2021; Lu et al., 2022), is hard to provide hard enough negatives to challenge the ranker R for robust training. Hence, we propose a multi-adversarial training strategy for ranker, where multiple heterogeneous retrievers are integrated to jointly sample negatives and challenge the only ranker for effective learning. 
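A minimal sketch of such a joint negative generator is given below: each retriever contributes its capped top-N candidates, the lists are pooled without deduplication, and negatives are drawn uniformly from the pool. The cap of 200 and the 40 negatives per query follow the descriptions in Appendices B and C, while the helper names and toy doc-id lists are our own placeholders.

```python
import random

def sample_joint_negatives(candidate_lists, positives, num_negatives=40, cap=200, seed=42):
    """candidate_lists: one ranked doc-id list per retriever (BM25, dense, lexical, ...).
    positives: doc ids annotated as relevant for the query.
    Returns `num_negatives` ids drawn uniformly from the pooled top-`cap` candidates."""
    rng = random.Random(seed)
    pool = []
    for ranked_ids in candidate_lists:
        # Cap each retriever at its top-N, drop annotated positives, and pool
        # WITHOUT deduplication so frequently retrieved documents keep higher mass.
        pool.extend(d for d in ranked_ids[:cap] if d not in positives)
    return rng.sample(pool, k=min(num_negatives, len(pool)))

# Toy usage with three hypothetical retrievers' ranked outputs.
bm25_top = [f"d{i}" for i in range(300)]
dense_top = [f"d{i}" for i in range(100, 400)]
lexical_top = [f"d{i}" for i in range(50, 350)]
negs = sample_joint_negatives([bm25_top, dense_top, lexical_top], positives={"d0", "d101"})
print(len(negs), negs[:5])
```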
As such, this ranker learning strategy is defined as $$J^{\{\mathcal{R}^{*}_{j}\}^{M}_{j=1},\mathcal{K}^{*}}=\min_{\Theta^{(\text{mul})}=\{\theta^{(\text{bc})}\}^{M}_{j=1}}\max_{\theta^{(\text{cc})}}$$ $$\mathbb{E}_{\mathbb{N}^{(\text{mul})}=\{d|d\sim P(d|q,\mathbb{D}\setminus\{d_{+}\};\Theta^{(\text{mul})})\}}\big{[}$$ $$\log P(\text{d}=d_{+}|q,\{d_{+}\}\cup\mathbb{N}^{(\text{mul})};\theta^{(\text{cc})})\big{]},\tag{8}$$ where $\theta^{(\text{bc})}_{j}$ parameterizes a retriever $\mathcal{R}_{j}$ (if appli jparameterizes a retriever Rj (if applicable) and N (mul) is a set of negatives sampled by $$P({\bf d}|q,\mathbb{D}\backslash\{d_{+}\};\Theta^{(\mathrm{null})})=$$ $$\prod_{\theta_{j}^{(\mathrm{be})}\in\Theta^{(\mathrm{null})}}P({\bf d}|q,\mathbb{D}\backslash\{d_{+}\};\theta_{j}^{(\mathrm{be})}).\tag{9}$$ Remark. Using heterogeneous retrievers can provide more diverse hard negatives - seen as different negatives distributions - prone to be more challenging for ranker training - leading to robust ranker. Here, the heterogeneity can assure by matching paradigm (term-based matching (Yang et al., 2017) v.s. semantic match (Gao and Callan, 2021)), representing paradigm (dense-vector embedding (Gao and Callan, 2022) v.s. lexicon-weighting embedding (Formal et al., 2021)), etc. By Zhang et al. (2022b), dense and lexical negatives are proven to provide more diverse views cf. BM25 negatives. As verified in experiments, the joint negative distribution is closer to the negative distribution under θ (ce), i.e., in-distribution P(d|q, D\{d+}; θ (ce)). ## 3.3 Robustness By Open-Set Noise Due to limited crowd-sourcing resources, it is impossible to exhaustively annotate the relevance of every query ∀q ∈ Q(trn) to every document ∀d q + ∈ D. As such, sampling negatives by a strong retriever usually introduces a label-noise problem. Fortunately, from the view of noise-labeling problem, we could employ the concept of 'open-set noise' to verify robustness of our ranker. As proven by a recent "*insufficient capacity*" (Arpit et al., 2017) assumption, learning a variety of out-ofdistribution (OOD) or open-set noise can improve robustness against inherent label noises that are subject to one dataset or one distribution. Therefore, as multiple heterogeneous retrievers are involved in our multi-adversarial training framework to provide mutually-OOD hard negatives, the ranker trained on these samples can be robust to the noises from every single retriever, resulting in a superior ranker after the training. Please see §A for more details. ## 3.4 Framework Grounding Considering either diversity of hard negatives or distributions of open-set noises, the choice of heterogeneous retrievers in our multi-adversarial ranker training framework is vitally important. Based on the taxonomy of modern retrievers along with two axes, i.e., matching and representing paradigms, we opt in three representative retrievers: i) **BM25** retriever: A simple term-based BM25 retrieval model built on the whole collection; ii) Den retriever: A dense-vector semantic retriever, coCondenser (Gao and Callan, 2022); and iii) Lex retriever: A lexicon-weighing semantic retriever, SPLADE (Formal et al., 2021). Besides, we detail retriever training and *negatives sampling* in §B. ## 4 Experiments We evaluate our ranker on following 3 retrieval tasks, and refer to §C for setups of *ranker training*, retriever distillation, and *retriever pre-training*. BM25 Reranking Task. 
BM25 reranking uses a ranker to re-rank the top 1000 passages by BM25 for each query. Here, we adopt the popular passage retrieval datasets, MS-Marco (Nguyen et al., 2016) and TREC Deep Learning 2019 (Craswell et al., 2020), as well as their official BM25 retrieval candidates. Following previous work, we report the performance in MRR@10 on MS-Marco and NDCG@10 on TREC Deep Learning 2019. Full Ranking Task. Full ranking leverages a ranker to rank the top-1000 passages retrieved from the full collection by a specific retriever. We adopt the MS-Marco dataset on which we will specify the retriever and report the performance in MRR@10. Large-scale Retrieval Task. Large-scale retrieval uses a retriever to fetch top-relevant passages from a collection. Here, a ranker only plays a teaching role for knowledge distillation into a biencoder based retriever. The retriever is then used to perform this task on MS-Marco dataset, where MRR@10 and R@50 are used as metrics. ## 4.1 Main Results BM25-Reranking on MS-Marco Dev. The results of BM25-reranking on MS-Marco Dev are listed in Table 1. Based on BM25-retrieved top-1000 candidates (w/ an evaluation metric of 85.7% Recall@1000), it is shown that the proposed R 2ANKER outperforms the best previously reported result, 40.1% MRR@10, from RocketQAv2, and delivers state-of-the-art BM25 reranking performance with 41.1 % MRR@10. BM25-Reranking on TREC Deep Learning 2019. Furthermore, we also compare our method with a strong baseline (RocketQAv2) on TREC Deep Learning 2019 dataset for BM25-reranking. As shown in Table 2, it is observed that our method significantly outperforms RocketQAv2 by absolute 2.6% on NDCG@10. It further demonstrates the effectiveness of our approach. Full Ranking. To comprehensively verify the effectiveness of R2ANKER, we compare our method with other strong baselines with different retrievers other than BM25. As shown in the second part of Table 1, our method outperforms other models by about 1.4% ∼ 2.4% on MRR@10. Moreover, we can observe that, our R2ANKER still performs better than the ranker in RocketQA and RocketQAv2 even though ours is associated with the weakest retriever (i.e., coCondenser† whose R@50=83.5%), demonstrating superiority of our R2ANKER. Large-scale Retrieval. Recently, using a ranker for knowledge distillation into a retriever becomes a prevalent technique for large-scale retriever (Ren et al., 2021b; Zhang et al., 2022a). This distillation process can also be employed to evaluate a ranker based on whether the ranker can provide valid and effective relevance scores for retriever training. As shown in Table 3, we integrated our R2ANKER into the training pipeline of two popular dense-vector retrievers, i.e., coCondenser and SimLM. It is observed that compared to the methods (i.e., coCondenser and AR2) with coCondenser as initialization, the retriever distilled from our R2ANKER significantly upgrades: i) compared to coCondenser w/o distillation, our distillation improves MRR@10 by 1.6%; and ii) compared to AR2 co-training even with a retrieverspecific ranker, our distilled retriever can offer a 0.5% MRR@10 lift. On the other hand, we also integrated our ranker into a state-of-the-art bottleneck pre-training framework, ED-MLM (Wang et al., 2022), which demonstrates our method is capable of surpassing the previous carefully-designed retrieval methods. 
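The distillation signal behind these first-stage results can be sketched as follows. This is a minimal illustration assuming the retriever's and the frozen ranker's scores over each query's candidate list are already available; the tensor shapes are chosen for the toy example rather than taken from the paper's configuration.

```python
import torch
import torch.nn.functional as F

def distillation_kl_loss(retriever_scores, ranker_scores):
    """Both tensors are [batch, num_candidates] relevance scores for the same
    query-candidate lists; the ranker (teacher) is frozen, so it is detached."""
    student_logp = F.log_softmax(retriever_scores, dim=-1)
    teacher_p = F.softmax(ranker_scores.detach(), dim=-1)
    # KL(teacher || student), averaged over the batch of queries.
    return F.kl_div(student_logp, teacher_p, reduction="batchmean")

# Toy usage: 2 queries, each with 1 positive and 10 negative candidates.
retriever_scores = torch.randn(2, 11)
ranker_scores = torch.randn(2, 11)
print(distillation_kl_loss(retriever_scores, ranker_scores))
```

Replacing the retriever's contrastive loss with a KL term of this kind mirrors the distillation setup described later in Appendix C.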
Moreover, it is noteworthy that | Ranker | Retriever | #Cand&Recall | MRR@10 | |-----------------------------------------|-------------------------------------|----------------|----------| | BM25-Reranking BM25 (Yang et al., 2017) | BM25 | 1000 (85.7) | 18.7 | | BERTbase (Qiao et al., 2019) | BM25 | 1000 (85.7) | 33.7 | | ColBERT (Khattab and Zaharia, 2020) | BM25 | 1000 (85.7) | 34.9 | | SAN + BERTbase (Liu et al., 2018) | BM25 | 1000 (85.7) | 37.0 | | RocketQA (Qu et al., 2021) | BM25 | 1000 (85.7) | 37.0 | | Multi-stage (Nogueira et al., 2019) | BM25 | 1000 (85.7) | 39.0 | | CAKD (Hofstätter et al., 2020) | BM25 | 1000 (85.7) | 39.0 | | RocketQAv2 (Ren et al., 2021b) | BM25 | 1000 (85.7) | 40.1 | | R 2ANKER (Ours) | BM25 | 1000 (85.7) | 41.1 | | Full-Ranking RocketQA (Qu et al., 2021) | RocketQA (Qu et al., 2021) | 50 (85.5) | 40.9 | | RocketQAv2 (Ren et al., 2021b) | RocketQA (Qu et al., 2021) | 50 (85.5) | 41.8 | | RocketQAv2 (Ren et al., 2021b) | RocketQAv2 (Ren et al., 2021b) | 50 (86.2) | 41.9 | | R 2ANKER (Ours) | coCondenser† (Gao and Callan, 2022) | 50 (83.5) | 42.7 | | R 2ANKER (Ours) | coCondenser§ (Gao and Callan, 2022) | 50 (86.4) | 43.3 | | R 2ANKER (Ours) | SPLADE† (Formal et al., 2021) | 50 (84.3) | 43.0 | | R 2ANKER (Ours) | SPLADE§ (Formal et al., 2021) | 50 (86.1) | 43.3 | Table 1: BM25-reranking and full-ranking results on MS-Marco Dev. '\#Cand&Recall' denotes the number of retriever-provided top candidates for reranking, as well as the retriever's top-N recall metric (%). † denotes the retriever trained on BM25 negatives, whereas § denotes the retriever trained on hard negatives sampled by its corresponding † retriever. | Method | NDCG@10 | |--------------------------------|-----------| | RocketQAv2 (Ren et al., 2021b) | 71.4 | | R 2ANKER (Ours) | 73.0 | Table 2: BM25-reranking results on TREC 2019. our R2ANKER is not retriever specific and does not depend on retriever-ranker co-training, making it flexible and applicable enough to any retriever training pipeline for superior performance. ## 4.2 Various Negative Generators First of all, we need to detail more about the retrievers, i.e., coCondenser (Den) and SPLADE (Lex), involved in this work. In particular, training both retrievers following the pipeline Gao and Callan (2022), where the model is first trained over BM25 negatives (i.e., Den-BN & Lex-BN) and then continually trained over self-adversarial hard negatives (i.e., Den-HN & Lex-HN). In contrast to the mere use of Den-HN & Lex-HN in the above main results, we also involve the first-stage retrievers for extensive analyses. In the remainder, D1, D2, L1, and L2 denote Den-BN, Den-HN, Lex-BN, and Lex-HN retrievers, respectively. Ranker-Retrievers Combinations. In Table 4, we first split the results into two parts: given various retrievers (i.e., the columns), we compare the re-ranking performance with different model structures - the bi-encoder based retriever and crossencoder based ranker. It is obvious that any trained ranker beats all the retrievers by a large margin, which explains why the hard negatives generated by one single retriever cannot effectively fool a ranker. Meanwhile, we find that in contrast to stacking more retrievers, the best strategy to sample negatives and train a ranker is combining the best from every world (i.e., BM25+D2+L2), which achieves the best re-ranking results upon various retrievers. Adversary w/ Stronger Retrievers. 
To check whether using more sophisticated retrievers as generators could improve the multi-adversarial process and boost the ranker's performance, we introduce two latest retrievers (before ranker-distillation), i.e., ED-MLM (Wang et al., 2022) for dense-vector retrieval (+1.3% cf. D2) and LexMAE (Shen et al., 2022) for lexicon-weighting retrieval (+2.0% cf. L2), for negative mining. But, as listed in Table 5, the two trained rankers achieve very competitive results, demonstrating that our method is not sensitive to retrievers' performance but their types. ## 4.3 Training-Test Distribution Shift A key evaluation metric for rankers is BM25 reranking, where BM25 retrieval is used to provide model-agnostic top-1000 negatives for reranking. Due to generality of BM25, it is more likely its top-1000 results include the hard negatives (even if few) that are also considered hard for an arbitrary retriever (Karpukhin et al., 2020; Chen et al., 2021). Therefore, we would like to figure out the correlation between divergence of training-test dis- | Method | Pre-train | Teacher | Specific? | Co-train? | MRR@10 | R@50 | |-------------------------------------|-----------------|--------------|-------------|-------------|----------|--------| | BM25 (Yang et al., 2017) | - | - | - | 18.7 | 59.2 | | | ANCE (Xiong et al., 2021) | RoBERTabase | - | - | 33.0 | - | | | ColBERT (Khattab and Zaharia, 2020) | BERTbase | - | - | 36.0 | 82.9 | | | RocketQA (Qu et al., 2021) | ERNIEbase | ERNIEbase | ✓ | ✓ | 37.0 | 85.5 | | COIL (Gao et al., 2021) | BERTbase | - | - | 35.5 | - | | | ME-BERT (Luan et al., 2021) | BERTlarge | - | - | 33.8 | - | | | PAIR (Ren et al., 2021a) | ERNIEbase | - | - | 37.9 | 86.4 | | | DPR-PAQ (Oguz et al., 2021) | BERTbase | - | - | 31.4 | - | | | Condenser (Gao and Callan, 2021) | Condenserbase | - | - | 36.6 | - | | | coCondenser (Gao and Callan, 2022) | coCondenserbase | - | - | 38.2 | - | | | RocketQAv2 (Ren et al., 2021b) | RocketQAbase | ERNIEbase | ✓ | ✓ | 38.8 | 86.2 | | AR2 (Zhang et al., 2022a) | coCondenserbase | ERNIElarge | ✓ | ✓ | 39.5 | 87.8 | | ERNIE-Search (Lu et al., 2022) | ERNIEbase | ERNIElarge | ✓ | ✓ | 40.1 | 87.7 | | SimLM (Wang et al., 2022) | SimLMbase | ELECTRAbase | ✓ | 41.1 | 87.8 | | | Ours | coCondenserbase | R 2ANKERbase | 40.0 | 87.6 | | | | Ours | ED-MLMbase | R 2ANKERbase | 41.4 | 88.6 | | | Table 3: First stage retrieval performance on MS-Marco Dev, where a non-empty 'Teacher' column denotes that the retriever is trained with a ranker. The 'Co-Train' denotes the ranker is also updated during training, otherwise frozen. The 'Specific' denotes whether the ranker is (co-)trained for a specific retriever, and note that all the rankers in our competitors are also fully trained. | Retriever | BM25 | D1 | D2 | L1 | L2 | Negative | MRR@10 | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|-------------------------|------|------|------|------------|----------| | Retriever as reranker. BM25 | 21.23 | 22.50 21.90 21.61 21.58 | | | | | | | Den-BN (abbr. 
D1) | 35.51 | 36.15 36.14 36.28 36.24 | | | | | | | Den-HN (abbr. D2) | 36.76 | 38.12 38.12 38.14 38.14 | | | | | | | Lex-BN (abbr. L1) | 35.31 | 36.17 36.20 36.11 36.13 | | | | | | | Lex-HN (abbr. L2) | 36.86 | 38.24 38.26 38.18 38.18 | | | | | | | Our ranker trained with retriever(s) as reranker. 2ANKER R - BM25 39.82 41.38 41.44 41.39 41.41 - D1 40.50 42.81 42.82 42.78 42.82 - D2 40.47 42.62 42.67 42.60 42.62 - L1 39.81 41.56 41.56 41.92 41.77 - L2 40.78 41.51 41.60 42.92 42.88 - D1,D2 40.71 42.96 43.00 42.93 42.94 - L1,L2 40.44 42.19 42.03 42.93 42.81 - D1,L1 40.70 42.98 42.92 42.98 42.98 - D2,L2 40.82 42.88 42.92 42.89 42.92 - BM25,D2,L2 41.12 43.24 43.26 43.28 43.29 - D1,L1,D2,L2 41.00 43.21 43.22 43.21 43.24 - BM25,D1,L1,D2,L2 40.78 42.99 42.95 42.97 42.99 | BM25,D2,L2 | 41.1 | | | | | | | BM25,D',L' | 41.0 | | | | | | | | Table 5: BM25 reranking results with a stronger negative generator. D' & L' denotes ED-MLM & LexMAE, respectively. bm25,D2,L2 D1,L1,D2,L2 D2,L2 D1,L1 L1,L2 41.0 40.8 40.6 40.4 40.2 40.0 39.8 0.0 0.2 0.4 0.6 0.8 1.0 KL divergence L2 D1,D2 bm25,D1,L1,D2,L2 MRR@10 D2 D1 bm25 L1 Figure 2: BM25-reranking performance by various rankers that were trained on negatives sampled from retrievers' (joint) distributions. 'KL divergence' denotes the difference between | | | | | | | | Table 4: Full-ranking results in terms of MRR@10 by different retriever-ranker combinations. tributions and performance of final trained rankers, where the training distribution is generated by an adversarial generator upon one or more retrievers. Correlation of BM25 test w/ Adversarial Train. Straightforwardly, the divergence can be easily calculated by applying a discrete KL divergence between the top retrieved results from a (joint) retriever and the BM25. As shown in Figure 2, the ranker achieves roughly better performance when the distribution of its training data is closer to that of test data, except for the BM25. This is possibly because i) the consistency of training-test data Figure 2: BM25-reranking performance by various rankers that were trained on negatives sampled from retrievers' (joint) distributions. 'KL divergence' denotes the difference between the retrievers' (joint) distribution and BM25 retriever's, i.e., KL(P(·|q; Θ(be))| BM25(·|q; D)), which is used to measure negatives' distribution. For example, the point 'bm25,D2,L2' denotes that i) the KL between its joint retriever's distribution and BM25 retriever's distribution is round 0.4, and ii) a ranker trained on that joint negative distribution can achieve 41.1 MRR@10 on BM25 reranking. distribution avoids the distribution shift problem and achieves greater performance, and ii) though training a ranker on BM25 negatives seems perfect in distribution matching, the negatives are not challenging enough for effective training (Zhan et al., 2021; Ren et al., 2021b; Zhang et al., 2022a). BM25-Constrained Negative Mining. To avoid the trivial (non-challenging) negatives from only BM25 and further investigate the impact of BM25- Table 6: BM25-dependent negative sampling for ranker training, where the last denotes BM25-constrained sampling. 
sourced negatives for ranker training, we present a BM25-constrained negative mining strategy. Here, a negative document, sampled from a generator comprised of D2 and L2, will be kept only if it also appears in the BM25 top-1000 results. As such, the resulting training samples for the ranker would not be as trivial as those based on top BM25 negatives, where the trained ranker is prone to distinguish hard negatives in BM25 reranking. However, the training and dev results shown in Table 6 demonstrate that i) consistent with the above paragraph, the ranker trained on BM25 is likely to be underfitting and ii) compared to a joint of diverse retrievers as the negative generator (i.e., BM25,D2,L2), focusing only on test-related BM25-constrained negatives leads to a more severe over-fitting problem. This is likely because BM25 top-1000 reranking is general enough to evaluate a ranker trained on any negative distribution, and its key is to involve diverse, challenging negatives to make the ranker robust. But open questions remain about what kinds of negative sampling distribution matter and whether sampling in-distribution of a ranker leads to better performance, which we answer in the next subsection.

Table 7: Comparison of BM25 reranking with a ranker trained with re-distributed hard negatives (i.e., the 3rd one).

## 4.4 Ranker-Aware Sampling Distribution

As noted in the Introduction, the in-distribution negative sampling strategy has been proven effective in retrieval model training but is non-applicable to ranker training because of the combinatorial explosion. Therefore, we propose to approximately analyze the impact of in-distribution sampling for the ranker.

Correlation of Ranker w/ Negative Generator. As shown in Figure 3, we propose an approximate calculation method, which leverages the general BM25 results as a medium, to compare the distributions between a negative generator and a trained ranker. It is observed that there is a clear negative correlation between the distance of the two distributions and the performance of the corresponding trained ranker. This demonstrates that a generator could achieve more robust performance roughly when its distribution is more similar to that of a ranker.

Ranker-redistributed Negative Sampling. Although sampling negatives from the in-distribution of a ranker is practically impossible due to the combinatorial explosion, we present a re-distribution strategy to simulate the in-distribution. Specifically, we first leverage all the retrievers to retrieve top-1000 negatives from the collection individually and then combine them into a negative pool for a query. Next, we apply a BM25-trained ranker to the pool for self-adversarial sampling. Last, we employ the sampled negatives to train a new ranker and report its result in Table 7. As we can see, the re-distribution result is the worst among the three models despite being promising in terms of the negatives' difficulty.
This is because the top candidates by the strong ranker are full of false negatives (this explains why some previous works use a ranker to de-noise (Qu et al., 2021; Formal et al., 2021)), verifying the ineffectiveness of in-distribution negatives for ranker training. In contrast, our multiadversarial training strategy by a joint generator can provide mutually out-of-distribution negatives and thus benefits from open-domain noises for robust training (Wei et al., 2021). | Negatives to | Train | Dev | ∆ | |----------------|---------|--------|------| | Train Ranker | MRR@10 | MRR@10 | | | BM25 | 46.9 | 39.8 | -7.1 | | BM25,D2,L2 | 48.7 | 41.1 | -7.6 | | BM25 ∩ (D2,L2) | 48.7 | 40.6 | -8.1 | | Negative | MRR@10 | |-----------------------------------------|----------| | BM25,D2,L2 | 41.1 | | BM25,D1,D2,L1,L2 | 40.8 | | Re-dist(BM25 ∪ D1 ∪ D2 ∪ L1 ∪ L2; θ ce) | 39.6 | ![8_image_0.png](8_image_0.png) ![8_image_1.png](8_image_1.png) ![8_image_2.png](8_image_2.png) ## 4.5 Case Study To investigate how to learn an effective ranker, we show the results of rankers trained from three retrievers, i.e., BM25, D2, and L2. In contrast to welltrained D2 and L2, we can see the BM25 negatives cannot challenge a high-capable ranker built upon pre-trained language models with cross-encoder structure, leading to sub-optimal results (see the BM25 point in Figure 4(left)). However, a strong well-trained retriever as the negative sampler is more likely to introduce label noises, a.k.a. false negative labels in the retrieval field (Qu et al., 2021). Please refer to Figure 4(right) for several examples. Therefore, we leverage multiple retrievers as the generator to learn an effective ranker, where extensive out-of-distribution label noises from retrievers render the ranker against each noise distribution. ## 5 Conclusion In this work, we propose a multi-adversarial strategy for robust ranker training where a negative generator built upon a joint of diverse retrievers is proposed to sample challenging hard negatives for effective adversarial learning. Empirically, our proposed ranker, R2ANKER, achieves state-of-theart performance in both BM25-reranking and fullranking tasks on benchmark datasets. And empowered by our R2ANKER as a teacher for distillation, previous basic retrievers can now deliver state-ofthe-art results in large-scale retrieval tasks. Moreover, our insightful analysis also reveals the minor impact of training-test distribution shift in BM25 reranking due to the generality of BM25 retriever. Meantime, we also find that there is a negative correlation between the ranker-generator distance and performance of the ranker, and re-distribution towards a ranker is toxic due to false negative labels. ## Limitations The limitations of our R2ANKER includes i) *Performance Bottleneck*: As verified in our experiments, the performance of multi-adversarial ranker training depends more on types of the comprising retrievers than their performance. Since the number of the types is very limited, there is a performance bottleneck of our method. and ii) *Compromised* Adversary: Due to computation overheads, the adversarial process is compromised in our training framework in terms of real-time retriever updating. This would negatively affect the performance of the framework. ## References Devansh Arpit, Stanislaw Jastrzebski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron C. Courville, Yoshua Bengio, and Simon Lacoste-Julien. 2017. 
A closer look at memorization in deep networks. In *Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney,* NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 233–242. PMLR. Dan Brickley, Matthew Burgess, and Natasha F. Noy. 2019. Google dataset search: Building a search engine for datasets in an open web ecosystem. In The World Wide Web Conference, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pages 1365–1375. ACM. Xilun Chen, Kushal Lakhotia, Barlas Oguz, Anchit Gupta, Patrick S. H. Lewis, Stan Peshterliev, Yashar Mehdad, Sonal Gupta, and Wen-tau Yih. 2021. Salient phrase aware dense retrieval: Can a dense retriever imitate a sparse one? *CoRR*, abs/2110.06918. Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M. Voorhees. 2020. Overview of the TREC 2019 deep learning track. *CoRR*, abs/2003.07820. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. 2021. SPLADE v2: Sparse lexical and expansion model for information retrieval. *CoRR*, abs/2109.10086. Luyu Gao and Jamie Callan. 2021. Condenser: a pretraining architecture for dense retrieval. In *Proceedings of the 2021 Conference on Empirical Methods* in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 711 November, 2021, pages 981–993. Association for Computational Linguistics. Luyu Gao and Jamie Callan. 2022. Unsupervised corpus aware language model pre-training for dense passage retrieval. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 2843–2853. Association for Computational Linguistics. Luyu Gao, Zhuyun Dai, and Jamie Callan. 2021. COIL: revisit exact lexical match in information retrieval with contextualized inverted list. In *Proceedings of* the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 3030–3042. Association for Computational Linguistics. Jiafeng Guo, Yinqiong Cai, Yixing Fan, Fei Sun, Ruqing Zhang, and Xueqi Cheng. 2022. Semantic models for the first-stage retrieval: A comprehensive review. ACM Trans. Inf. Syst., 40(4):66:1–66:42. Sebastian Hofstätter, Sophia Althammer, Michael Schröder, Mete Sertkan, and Allan Hanbury. 2020. Improving efficient neural ranking models with cross-architecture knowledge distillation. *CoRR*, abs/2010.02666. Yen-Chang Hsu, Zhaoyang Lv, Joel Schlosser, Phillip Odom, and Zsolt Kira. 2019. Multi-class classification without multi-class labels. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. 
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In *Proceedings of* the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6769–6781. Association for Computational Linguistics. Omar Khattab and Matei Zaharia. 2020. Colbert: Efficient and effective passage search via contextualized late interaction over BERT. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 39–48. ACM. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *3rd International Conference on Learning Representations,* ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Xiaodong Liu, Kevin Duh, and Jianfeng Gao. 2018. Stochastic answer networks for natural language inference. *CoRR*, abs/1804.07888. Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, Shikun Feng, Yu Sun, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, and Haifeng Wang. 2022. Ernie-search: Bridging cross-encoder with dual-encoder via self on-the-fly distillation for dense passage retrieval. *CoRR*, abs/2205.09153. Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2021. Sparse, dense, and attentional representations for text retrieval. Trans. Assoc. Comput. Linguistics, 9:329–345. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches 2016 co-located with the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, December 9, 2016, volume 1773 of *CEUR* Workshop Proceedings. CEUR-WS.org. Rodrigo Frassetto Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. *CoRR*, abs/1901.04085. Rodrigo Frassetto Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. 2019. Multi-stage document ranking with BERT. *CoRR*, abs/1910.14424. Barlas Oguz, Kushal Lakhotia, Anchit Gupta, Patrick S. H. Lewis, Vladimir Karpukhin, Aleksandra Piktus, Xilun Chen, Sebastian Riedel, Wen-tau Yih, Sonal Gupta, and Yashar Mehdad. 2021. Domainmatched pre-training tasks for dense retrieval. *CoRR*, abs/2107.13602. Yifan Qiao, Chenyan Xiong, Zhenghao Liu, and Zhiyuan Liu. 2019. Understanding the behaviors of BERT in ranking. *CoRR*, abs/1904.07531. Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. Rocketqa: An optimized training approach to dense passage retrieval for opendomain question answering. In *Proceedings of the* 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 5835–5847. Association for Computational Linguistics. Ruiyang Ren, Shangwen Lv, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, and Ji-Rong Wen. 2021a. PAIR: leveraging passage-centric similarity relation for improving dense passage retrieval. In *Findings of the Association for Computational Linguistics: ACL/IJCNLP* 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of *Findings of ACL*, pages 2173– 2183. Association for Computational Linguistics. 
Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, and Ji-Rong Wen. 2021b. Rocketqav2: A joint training method for dense passage retrieval and passage re-ranking. In *Proceedings of the 2021 Conference on Empirical* Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 2825–2835. Association for Computational Linguistics. Tao Shen, Xiubo Geng, Chongyang Tao, Can Xu, Xiaolong Huang, Binxing Jiao, Linjun Yang, and Daxin Jiang. 2022. Lexmae: Lexicon-bottlenecked pretraining for large-scale retrieval. *CoRR*, abs/2208.14754. Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: enhanced representation through knowledge integration. *CoRR*, abs/1904.09223. Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. 2022. Simlm: Pre-training with representation bottleneck for dense passage retrieval. *CoRR*, abs/2207.02578. Hongxin Wei, Lue Tao, Renchunzi Xie, and Bo An. 2021. Open-set label noise can improve robustness against inherent label noise. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 7978–7992. Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In *9th International Conference on Learning* Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Peilin Yang, Hui Fang, and Jimmy Lin. 2017. Anserini: Enabling the use of lucene for information retrieval research. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, Shinjuku, Tokyo, Japan, August 7-11, 2017, pages 1253–1256. ACM. Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, and Shaoping Ma. 2021. Optimizing dense retrieval model training with hard negatives. In SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, pages 1503–1512. ACM. Hang Zhang, Yeyun Gong, Yelong Shen, Jiancheng Lv, Nan Duan, and Weizhu Chen. 2022a. Adversarial retriever-ranker for dense text retrieval. Kai Zhang, Chongyang Tao, Tao Shen, Can Xu, Xiubo Geng, Binxing Jiao, and Daxin Jiang. 2022b. LED: lexicon-enlightened dense retriever for large-scale retrieval. *CoRR*, abs/2208.13661. Shuai Zhang, Lina Yao, Aixin Sun, and Yi Tay. 2019. Deep learning based recommender system: A survey and new perspectives. *ACM Comput. Surv.*, 52(1):5:1–5:38. Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 649–657. Yucheng Zhou, Tao Shen, Xiubo Geng, Chongyang Tao, Guodong Long, Can Xu, and Daxin Jiang. 2022. Fine-grained distillation for long document retrieval. CoRR, abs/2212.10423. 
## A From The View Of Open-Set Noise In this section, we introduce noise-labeling problem caused by false-negative samples, and then elaborate on why our proposed multi-adversarial learning strategy can mitigate the problem by openset noise (Wei et al., 2021) and improve robustness. Due to limited crowd-sourcing resources, it is impossible to comprehensively annotate the relevance of every query ∀q ∈ Q(trn) to every document ∀d q + ∈ D. In general, the annotating process can be roughly described as i) using the best on-hand retriever (e.g., a commercial search engine) to fetch top document candidates for a query q, and then ii) distinguish positive document(s), d+, associated to q from the very top candidates. Therefore, constrained by the retriever in the annotation process, there exists positive documents for q not included in the top candidates, which are regarded as negative by mistake - false negative label - degrading standard ranker training. As such, sampling hard negatives by a strong retriever usually introduces the label-noise problem. Prior works focus on 'co-teaching' or/and 'boosting' strategies (Qu et al., 2021; Zhang et al., 2022a), but they assume a ranker is robust enough for anti-noise while only denoise for more fragile retrievers by the ranker. Taking a step further, we could also formulate the search problem (both retrieval and rerank) as a many-class many-label classification problem, where the number of classes equals to |D|, i.e., the number of documents in D. And |D| is usually very large, ranging from millions to billions. Thus, the current solutions of the search problem are analogous to *label semantic matching* paradigm for many-class classification problems (Hsu et al., 2019). As such, the mis-labeled class caused by a single θ (be) j-parameterized retriever Rj will be subject to the following distribution: $$y^{\prime}\sim P^{\mathrm{(FN)}}({\sf d}|q,{\mathbb{D}}\backslash\{d_{+}\};\theta_{j}^{\mathrm{(be)}}),\tag{10}$$ where P (FN)(·|·; θ $\mathrm{FN})\left(\cdot|\cdot;\theta_j^{\mathrm{(be)}}\right)\,$ denotes a ## J) Denotes An Inherent Label Noise Distribution By The Retriever Θ (Be) J. Fortunately, From The View Of Noise-Labeling Problem In Many-Class Many-Label Classification, We Could Employ The Concept Of 'Open-Set Noise' To Verify Robustness Of Our Ranker. As Proven By A Recent "*Insufficient Capacity*" (Arpit Et Al., 2017) Assumption2, Learning A Variety Of Out-Of-Distribution (Ood) Or Open-Set Noise Can Improve Robustness Against Inherent Label Noises That Are Subject To One Dataset Or One Distribution. Therefore, As Multiple Heterogeneous Retrievers Are Involved In Our Multi-Adversarial Training Framework To Provide Mutually-Ood Hard Negatives, The Ranker Trained On These Samples Can Be Robust To The Noises From Every Single Retriever, Resulting In A Superior Ranker After The Training. B Framework Grounding Detials Retriever Training. In line with (Clark et al., 2020) and (Ren et al., 2021b), we do not seek for 2By Wei et al. (2021), "increasing the number of examples while keeping representation capacity fixed would increase the time needed to memorize the data set. Hence, the larger the size of auxiliary dataset is, the more time it needs to memorize the open-set noises in the auxiliary dataset as well as the inherent noises in the training set, relative to clean data." 
## B Framework Grounding Details

Retriever Training. In line with Clark et al. (2020) and Ren et al. (2021b), we do not seek to update the generators (i.e., the retrievers in our method) w.r.t. the performance of the discriminator (i.e., the ranker), for two considerations. On the one hand, we want to avoid the heavy computational overhead of training the retrievers jointly and updating the large-scale index synchronously. On the other hand, due to the intrinsic discrepancy in model structure, the generators can hardly fool the discriminator, making the adversarial process less effective. As verified by Zhang et al. (2022a), cooperative learning (training the retriever towards the reranker, regularized by a Kullback–Leibler divergence) is also necessary for competitive performance.

Sampling Negatives. The strategy used to sample from the negative distribution in Eq. (3) plays an important role in our method. Instead of directly sampling from the softmax distribution over $\mathbb{D}\backslash\{d_{+}\}$, which leans heavily toward the very top candidates, we follow the previously common practice of capping the top-N candidates (say, N=200 in our experiments) and then sampling uniformly to ensure diversity. As for sampling from the joint negative distribution in Eq. (9), we combine the capped top-N candidates from the multiple generators (retrievers) without deduplication to better simulate the joint distribution.

## C Training Setup

Ranker Training. We use the ERNIE-2.0-en-base model (Sun et al., 2019) as the initialization of our ranker. To provide more diverse hard negatives for robust ranker training, we sample them from multiple retrievers: we use three kinds of retrievers, namely BM25 (Yang et al., 2017) for term-based retrieval, coCondenser (Gao and Callan, 2022) for dense-vector retrieval, and SPLADE (Formal et al., 2021) for lexicon-weighting retrieval. During ranker training, we sample 40 hard negatives for each query. The maximum number of training epochs, the batch size, and the learning rate are set to 2, 12, and 1 × 10−5, respectively. The maximum sequence length is set to 128 and the random seed is fixed to 42. For optimization, we use the Adam optimizer (Kingma and Ba, 2015) with a linear warmup. The warmup proportion is 0.1, and the weight decay is 0.1. All experiments are conducted on an A100 GPU.

Retriever Distillation. To distill our trained ranker into a retriever for first-stage retrieval, we adopt the two-stage coCondenser retriever (Gao and Callan, 2022) and apply our ranker scores to the second stage of coCondenser fine-tuning. Specifically, instead of mere contrastive learning, we leverage the training data of Ren et al. (2021b) and replace the contrastive learning loss in coCondenser with a simple KL divergence loss. Its learning rate, batch size, and number of epochs are set to 5 × 10−5, 16 × (1 positive and 10 negatives), and 4, respectively.

Retriever Pre-training. To make the distilled results more competitive, we also adopt the recent bottleneck pre-training technique ED-MLM (Wang et al., 2022). Starting from a BERT-base initialization (Devlin et al., 2019) and data from the MS-Marco collection, the learning rate is set to 1 × 10−4, the batch size to 2048, the number of training epochs to 20, and the maximum sequence length to 144, with the random seed set to 42. The other parameters strictly follow Wang et al. (2022). Such a corpus-aware pre-training procedure takes about 13 hours on eight A100 GPUs.
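To ground the retriever-distillation step above, the following is a small PyTorch sketch of replacing a contrastive objective with a KL-divergence loss between ranker (teacher) and retriever (student) score distributions over each query's candidate list. The tensor shapes, the "1 positive + 10 negatives" layout, and the function name are our own assumptions for illustration, not the exact released code.

```python
import torch
import torch.nn.functional as F

def kd_kl_loss(student_scores: torch.Tensor, teacher_scores: torch.Tensor) -> torch.Tensor:
    """KL-divergence distillation loss for retriever fine-tuning.

    Both tensors are [batch, 1 + num_negatives]: one positive followed by sampled
    negatives per query. The ranker's softmax distribution acts as a soft target.
    """
    log_student = F.log_softmax(student_scores, dim=-1)
    teacher_probs = F.softmax(teacher_scores.detach(), dim=-1)  # teacher is frozen
    return F.kl_div(log_student, teacher_probs, reduction="batchmean")

# Toy check with a batch of 16 queries, each with 1 positive and 10 negatives.
student = torch.randn(16, 11, requires_grad=True)
teacher = torch.randn(16, 11)
loss = kd_kl_loss(student, teacher)
loss.backward()
```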
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work? Limitations section
✗ A2. Did you discuss any potential risks of your work? The topic of the paper deals only with text retrieval
✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Introduction section
✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B ✗ **Did you use or create scientific artifacts?** Left blank.
✓ B1. Did you cite the creators of artifacts you used? 4 Experiments section
✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? MS-Marco and TREC Deep Learning 2019 are open-source datasets
✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Our use of MS-Marco and TREC Deep Learning 2019 was consistent with their intended use.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank.
✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4 Experiments section

## C ✓ **Did you run computational experiments?** 4 Experiments section
✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix D Training Setup
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix D Training Setup
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 Experiments section
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4 Experiments section

D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
hosseini-caragea-2023-semi
Semi-Supervised Domain Adaptation for Emotion-Related Tasks
https://aclanthology.org/2023.findings-acl.333
Semi-supervised domain adaptation (SSDA) adapts a model trained on a label-rich source domain to a new but related domain with only a few labeled target examples. It is shown that, in an SSDA setting, a simple combination of domain adaptation (DA) with semi-supervised learning (SSL) techniques often fails to effectively utilize the target supervision and cannot address distribution shifts across different domains due to the training data bias toward the source-labeled samples. In this paper, inspired by the co-learning of multiple classifiers for computer vision tasks, we propose to decompose the SSDA framework for emotion-related tasks into two subcomponents of unsupervised domain adaptation (UDA) from the source to the target domain and semi-supervised learning (SSL) in the target domain, where the two models iteratively teach each other by interchanging their high-confidence predictions. We further propose a novel data cartography-based regularization technique for pseudo-label denoising that employs training dynamics to further hone our models' performance. We publicly release our code.
# Semi-Supervised Domain Adaptation For Emotion-Related Tasks

Mahshid Hosseini Cornelia Caragea Computer Science University of Illinois at Chicago [email protected] [email protected]

## Abstract

Semi-supervised domain adaptation (SSDA) adapts a model trained on a label-rich source domain to a new but related domain with only a few labeled target examples. It is shown that, in an SSDA setting, a simple combination of domain adaptation (DA) with semi-supervised learning (SSL) techniques often fails to effectively utilize the target supervision and cannot address distribution shifts across different domains due to the training data bias toward the source-labeled samples. In this paper, inspired by the co-learning of multiple classifiers for computer vision tasks, we propose to decompose the SSDA framework for emotion-related tasks into two subcomponents of unsupervised domain adaptation (UDA) from the source to the target domain and semi-supervised learning (SSL) in the target domain, where the two models iteratively teach each other by interchanging their high-confidence predictions. We further propose a novel data cartography-based regularization technique for pseudo-label denoising that employs training dynamics to further hone our models' performance. We release our code.1

## 1 Introduction

Large pre-trained language models (Devlin et al., 2019; Liu et al., 2019) have significantly improved performance on many natural language processing (NLP) tasks with the help of large quantities of labeled training data and have become the de facto models for NLP applications. However, obtaining vast troves of annotated data for training in many real-world scenarios is costly and challenging. For example, reliable large annotated emotion or empathy data might not exist in a computer-assisted therapy session (Hosseini and Caragea, 2021a,b, 2023). On top of that, it is shown that a shift in data distribution can substantially affect the performance of such text classification models (Ngo et al., 2022; Blitzer et al., 2006); a model trained on a source domain does not perform well on a dataset from another domain. This deficiency is due to the domain shift across the datasets (Tzeng et al., 2017), which is a problem that is commonly encountered in NLP.

1https://github.com/Mahhos/CotrainingTrainingDynamics

To relieve the unsupervised domain adaptation (UDA) bottleneck for textual tasks, recent works attempt to align distributions between source and target domains by extracting domain-invariant representations (Ganin et al., 2016). However, despite the recent progress, UDA methods are still impractical, as they may yield new domain-sensitive particularities for large-scale language models. Recent works showed that the presence of a few labeled samples from the target domain in a semi-supervised domain adaptation (SSDA) setup can positively impact and significantly boost the performance of neural models (Qin et al., 2020; Kim and Kim, 2020; Saito et al., 2019). There exists prior work on supervised domain adaptation (Daumé III, 2007) and multi-task learning (Sun et al., 2011) for natural language processing (NLP) tasks. However, despite the importance of semi-supervised domain adaptation, only a few studies in NLP have focused on this problem (Daumé III et al., 2010; Cheng and Pan, 2014). Daumé III et al.
(2010) expanded an existing fully supervised domain adaptation technique (Daumé III, 2007) to semi-supervised domain adaptation settings using co-regularization, which originated in the context of multi-view learning (Rosenberg and Bartlett, 2007; Sindhwani and Rosenberg, 2008). Cheng and Pan (2014) also framed the semi-supervised domain adaptation problem as learning with a transformation function and a prediction model under manifold constraints. Given a large amount of labeled data in the source domain and only a few labeled target examples with an inherent distributional difference, it is shown that in an SSDA setting a single classifier is likely to be dominated by the source domain (Yang et al., 2021). In this paper, we extend the co-training strategy (Blum and Mitchell, 1998), a semi-supervised learning approach for multi-view data, to a single-view setting for NLP tasks to effectively use unlabeled target data. Co-training trains two classifiers, one from each view, and employs the most confident predictions on the unlabeled data for the two classifiers to teach each other. Inspired by co-training, we propose to decompose the SSDA framework and learn two distinct classifiers that teach each other so that both classifiers can excel in the target domain. Particularly, we employ an unsupervised domain adaptation setup where we leverage the labeled source data and the unlabeled target data to learn one classifier. Furthermore, we employ a semi-supervised learning setup where we learn another classifier using the labeled target data together with the unlabeled target data. We further propose a novel data cartography-based regularization technique for pseudo-label denoising that employs training dynamics (Swayamdipta et al., 2020) to further hone our models' performance. Our preliminary results on three emotion-related NLP tasks show that transferring knowledge between the two classifiers and incorporating training dynamics to help denoise the generated pseudo-labels can effectively improve performance on the target domain.

## 2 Approach

## 2.1 Co-Training with Task Decomposition

Co-training (Blum and Mitchell, 1998) is a well-known semi-supervised learning (SSL) technique that employs two different views of an example (e.g., audio and video) and learns two predictive models that are trained separately on each view. Assuming that each view is sufficient for correct classification, co-training confers on the models the capability to teach each other by adding the highly confident predictions of one model (on new unlabeled examples) to the training set of the other. In this way, co-training helps boost a learning algorithm's performance when only a small set of labeled examples is available. Recently, Chen et al. (2011) and Qiao et al. (2018) proposed techniques to perform co-training using single-view data, but they still need additional tasks or objective functions to apply co-training. Along these lines, Yang et al. (2021) proposed to use only single-view data (i.e., only images) to perform co-training in a semi-supervised domain adaptation setup for computer vision tasks, where they leverage the supervision of the labeled data from the source and target domains and combine them with the unlabeled samples from the target domain. Here we propose our co-training with task decomposition and cartography-based mixup approach, which greatly benefits text-based emotion-related classification tasks in the few-shot setting.
Task decomposition in a co-training paradigm was initially introduced for computer vision tasks by Yang et al. (2021) to enhance the performance of image classification models. As such, our work is greatly inspired by Yang et al. (2021).

Task Setup. Given the labeled data from the source domain $\mathcal{D}_{S}=\{(s_{i},y_{i})\}_{i=1}^{N_{S}}$ and the target domain $\mathcal{D}_{T}=\{(t_{i},y_{i})\}_{i=1}^{N_{T}}$ such that $|\mathcal{D}_{S}|\gg|\mathcal{D}_{T}|$ and $|\mathcal{D}_{T}|=K\times|y|$ in the few-shot setting (where $K$ is the number of labeled examples per class and $|y|$ is the number of classes), and the unlabeled data from the target domain $\mathcal{D}_{U}=\{(u_{i})\}_{i=1}^{N_{U}}$, we first construct two sub-tasks in the semi-supervised domain adaptation setup: one using $\mathcal{D}_{S}$ and $\mathcal{D}_{U}$ to train an *unsupervised domain adaptation* (UDA) model $\theta^{uda}$, and one using $\mathcal{D}_{T}$ and $\mathcal{D}_{U}$ to train a *semi-supervised learning* (SSL) model $\theta^{ssl}$. We conduct our tasks using mini-batch SGD to update the models' weights in our experiments. In each iteration, we make predictions on $U=\{u_{b}\}_{b=1}^{B}$ (which is sampled from our unlabeled set $\mathcal{D}_{U}$ with mini-batch size $B$) using our two models $\theta^{uda}$ and $\theta^{ssl}$, and generate pseudo-label sets $\mathcal{U}^{ssl}$ and $\mathcal{U}^{uda}$ that are filtered based on a threshold $\tau$ to update $\theta^{ssl}$ and $\theta^{uda}$:

$$\mathcal{U}^{ssl}=\{(u_{b},\,y^{\prime}_{b}=\arg\max_{c}p(c|u_{b};\theta^{uda}))\ \ \text{if}\ \max_{c}p(c|u_{b};\theta^{uda})>\tau\}$$

$$\mathcal{U}^{uda}=\{(u_{b},\,y^{\prime}_{b}=\arg\max_{c}p(c|u_{b};\theta^{ssl}))\ \ \text{if}\ \max_{c}p(c|u_{b};\theta^{ssl})>\tau\}$$

where $p(c|u_{b};\cdot)$ is the predicted probability of the unlabeled sample $u_{b}\in U$ for a class $c$. In essence, if the prediction confidence of one model (e.g., $\theta^{ssl}$) on $u_{b}$ is greater than our pseudo-label selection threshold $\tau$, the prediction is added to $\mathcal{U}^{uda}$ to train the other model $\theta^{uda}$. In other words, each model provides the other with confident pseudo-labels to learn from, and in turn learns from the confident pseudo-labels supplied by the other model.
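To make this exchange concrete, here is a minimal sketch of the confidence-thresholded pseudo-label filtering; the helper name, the toy batch, and the value of $\tau$ are illustrative assumptions rather than the authors' code.

```python
import torch

def confident_pseudo_labels(probs: torch.Tensor, unlabeled_batch, tau: float = 0.9):
    """Keep only predictions whose maximum class probability exceeds tau.

    probs: [B, C] class probabilities from one model (theta_uda or theta_ssl).
    unlabeled_batch: the B unlabeled target examples u_b of the mini-batch.
    Returns (example, pseudo_label) pairs that will update the *other* model.
    """
    confidence, labels = probs.max(dim=-1)
    keep_idx = torch.nonzero(confidence > tau).flatten().tolist()
    return [(unlabeled_batch[i], labels[i].item()) for i in keep_idx]

# One co-training iteration on a mini-batch U of unlabeled target examples:
# U_ssl comes from theta_uda's confident predictions and updates theta_ssl,
# while U_uda comes from theta_ssl's confident predictions and updates theta_uda.
U = ["u1", "u2", "u3", "u4"]
probs_uda = torch.softmax(torch.randn(4, 7), dim=-1)  # e.g., 7 emotion classes
probs_ssl = torch.softmax(torch.randn(4, 7), dim=-1)
U_ssl = confident_pseudo_labels(probs_uda, U, tau=0.9)
U_uda = confident_pseudo_labels(probs_ssl, U, tau=0.9)
```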
## 2.2 Data Cartography-Based Regularization for Pseudo-Label Denoising

It is inevitable to obtain noisy pseudo-labels from each model: first due to the domain shift (in our UDA component), and second because only a few labeled samples (i.e., K = [4, 8, 16] per class) are available from the target domain (in our SSL component), which in turn impacts the supervision process. To mitigate noise in the generated pseudo-labels and hone our models' performance, we propose a cartography-based mixup strategy that helps to effectively denoise an incorrect pseudo-label.

Mixup Training. Mixup augments the training data by linearly interpolating training samples and their corresponding labels, based on a simple rule proposed by Zhang et al. (2018):

$$(\tilde{x}_{ij},\,\tilde{y}_{ij}):=(\lambda x_{i}+(1-\lambda)x_{j},\ \lambda y_{i}+(1-\lambda)y_{j})\tag{1}$$

where $\lambda$ is a mixing ratio sampled from a $\mathrm{Beta}(\alpha,\alpha)$ distribution with a hyper-parameter $\alpha$, and $(x_{i},y_{i})$ and $(x_{j},y_{j})$ are two input examples that are randomly drawn from the training set.

Proposed Approach. Few-shot learning methods generally assume that the training sets *always* include accurately labeled samples. However, this assumption can sometimes be unrealistic. No matter how small, training sets can still contain mislabeled samples (Liang et al., 2022). In other words, it cannot be guaranteed that the few-shot training sets were carefully selected to represent their class. In fact, even carefully annotated and selected datasets often hold mislabeled samples (Northcutt et al., 2021; Yang et al., 2020) due to several reasons such as ambiguity, automated weakly supervised annotation, or human error.

Here, we propose to use a novel mixup data augmentation technique, informed by training dynamics, on the target training data and the pseudo-labels generated by the source domain, to further surpass the noisy-data bottleneck and improve target-domain performance in the few-shot setting. Our proposed mixup creates a vicinal distribution steered by data maps (Swayamdipta et al., 2020), as described below.

We first characterize each training sample of our few-shot target domain training set $\mathcal{D}_{T}$ into three groups, easy-to-learn, ambiguous, and hard-to-learn, based on how they contribute to the model's learning (i.e., training dynamics). We then sample examples with specific characteristics (emanating from the previous step) to interpolate with the pseudo-labels generated by the source domain, $\mathcal{U}^{ssl}$, during our cartography-based mixup process. In our experiments, we measure the statistics using a RoBERTa-base model. The training dynamics of a sample $(x_{i},y_{i})$ are measured as statistics called confidence and variability computed across the $E$ epochs (Swayamdipta et al., 2020). Confidence is computed as the mean model probability of the true label $y_{i}$ across epochs, $\hat{\mu}_{i}=\frac{1}{E}\sum_{e=1}^{E}p_{\theta_{e}}(y_{i}|x_{i})$, where $\theta$ indicates the model parameters and $p_{\theta_{e}}$ denotes the model's probability at the end of the $e$-th epoch. Variability is measured as the standard deviation of the ground-truth probabilities $p_{\theta_{e}}(y_{i}|x_{i})$ across different epochs, $\hat{\sigma}_{i}=\sqrt{\frac{\sum_{e=1}^{E}\left(p_{\theta_{e}}(y_{i}|x_{i})-\hat{\mu}_{i}\right)^{2}}{E}}$. Intuitively, samples to which the model confidently (i.e., high confidence) and consistently (i.e., low variability) assigns the true, same label correspond to easy-to-learn examples (for the model). On the other hand, samples with low confidence and low variability resemble hard-to-learn examples (for the model), which are usually referred to as mislabeled samples, and examples with high variability that the model is uncertain about during training are ambiguous (to the model). Using these statistics, we select the easy-to-learn samples (i.e., samples that the model *consistently* predicts *correctly* across epochs) to interpolate with the pseudo-labels generated by the source domain. By employing such particularities, our goal is to ensure that we effectively denoise an incorrect pseudo-label by mixing it with the most informative data samples (Swayamdipta et al., 2020), namely samples with high confidence and low variability which are detected to be *actually* correct. Our mixup approach combines samples at the level of the hidden-state representations generated by the task-specific layer on top of the pre-trained language model.
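The sketch below shows, under our own simplifying assumptions (per-epoch gold-label probabilities and hidden states are given as NumPy arrays; the confidence and variability cut-offs are illustrative), how easy-to-learn target samples could be selected from training dynamics and then mixed with a pseudo-labeled sample at the hidden-state level.

```python
import numpy as np

def training_dynamics(gold_probs_per_epoch: np.ndarray):
    """gold_probs_per_epoch: [E, N] true-label probability for each of N samples at each of E epochs."""
    confidence = gold_probs_per_epoch.mean(axis=0)   # mu_hat_i
    variability = gold_probs_per_epoch.std(axis=0)   # sigma_hat_i
    return confidence, variability

def select_easy_to_learn(confidence, variability, conf_min=0.75, var_max=0.15):
    # high confidence and low variability ~ samples the model consistently predicts correctly
    return np.where((confidence >= conf_min) & (variability <= var_max))[0]

def cartography_mixup(h_easy, y_easy, h_pseudo, y_pseudo, alpha=0.4, rng=None):
    """Interpolate hidden states and (one-hot) labels of an easy-to-learn and a pseudo-labeled sample."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * h_easy + (1 - lam) * h_pseudo, lam * y_easy + (1 - lam) * y_pseudo

# Toy usage: E=3 epochs, N=5 target samples, 768-d hidden states, 2 classes.
probs = np.random.rand(3, 5)
conf, var = training_dynamics(probs)
easy_ids = select_easy_to_learn(conf, var)
h_mix, y_mix = cartography_mixup(np.random.rand(768), np.array([1.0, 0.0]),
                                 np.random.rand(768), np.array([0.0, 1.0]))
```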
## 3 Experiments

## 3.1 Datasets

We perform evaluations on three text classification tasks: emotion detection, sentiment analysis, and empathy detection. We analyze tasks with challenging domain shifts where out-of-domain performance is considerably lower. Furthermore, it is shown that detecting empathy or emotions from text without visual or acoustic information is challenging due to the subjective nature of the annotations (Hosseini and Caragea, 2021b), which makes it difficult to accurately label and interpret the emotions or empathy expressed. Additionally, most datasets, particularly in empathy detection, are limited in size, with only a few exceptions. However, given the significance of these tasks and the profound impact emotions have on our behavior and daily lives, our objective is to enhance the performance of such tasks and achieve improved detection of emotion-related information from text. We explain our source and target domain datasets below.

Empathy Detection. NewsEmp is a dataset of empathic reactions to news stories, including binary empathy labels, released by Buechel et al. (2018), which we use as our source domain dataset. The TwittEmp dataset (Hosseini and Caragea, 2021a) contains perceived empathy annotated by empathy direction (seeking vs. providing) in the health domain, which we use as our target domain dataset.

Emotion Detection. GoEmotions is an emotion detection dataset from Reddit comments, where we use the six basic emotions (joy, anger, fear, sadness, disgust, and surprise) and neutral, as our source domain dataset. MELD (Poria et al., 2019) contains dialogues from the popular Friends TV series annotated with the same set of emotion labels, which we use as the target domain dataset.

Sentiment Analysis. Yelp (Zhang et al., 2015) is a dataset for binary sentiment classification, consisting of reviews from Yelp, which is used as our source domain. Our target domain dataset is IMDB movie reviews (Maas et al., 2011), containing sentences of movie reviews and their sentiment.

| Model | News to Health K=4 | K=8 | K=16 | Reddit to TV Series K=4 | K=8 | K=16 | Yelp to IMDB K=4 | K=8 | K=16 |
|---|---|---|---|---|---|---|---|---|---|
| Source-Only∗∗ | 66.63∗∗ | | | 24.51∗∗ | | | 52.12∗∗ | | |
| Target-Only | 42.39 | 44.66 | 46.00 | 16.69 | 18.09 | 19.96 | 48.98 | 50.69 | 51.55 |
| UDA∗∗ | 45.26∗∗ | | | 21.15∗∗ | | | 51.69∗∗ | | |
| SSL | 44.59 | 47.85 | 49.22 | 18.90 | 21.08 | 21.75 | 50.16 | 53.24 | 54.50 |
| SSL + MixText (Chen et al., 2020) | 46.67 | 50.19 | 53.36 | 19.82 | 22.38 | 23.16 | 51.10 | 55.38 | 55.78 |
| SSL + FliText (Liu et al., 2021) | 44.61 | 46.37 | 50.23 | 19.16 | 19.87 | 20.67 | 50.26 | 55.10 | 54.92 |
| Unsupervised Data Augmentation (Xie et al., 2020) | 47.05 | 50.10 | 55.64 | 20.92 | 23.75 | 23.67 | 52.19 | 56.07 | 55.89 |
| Co-training with TD | 64.85 | 67.02 | 71.00 | 22.10 | 24.45 | 25.87 | 52.26 | 57.47 | 59.53 |
| Co-training with TD + Mixup (Yang et al., 2021) | 67.65 | 70.56 | 73.63 | 21.46 | 22.71 | 24.53 | 53.06 | 57.76 | 58.10 |
| Co-training with TD + Ours | 69.33 | 72.66 | 77.33 | 25.83 | 27.54 | 28.85 | 55.67 | 61.44 | 63.66 |
| Supervised Learning-Source∗∗ | 68.27∗∗ | | | 68.55∗∗ | | | 95.78∗∗ | | |
| Supervised Learning-Target (full)∗∗ | 84.33∗∗ | | | 63.02∗∗ | | | 92.12∗∗ | | |

## 3.2 Baseline Methods

The details of the experiments are as follows.
We use the BERT-base model and K = [4, 8, 16] in all the experiments.4 We contrast our proposed approach on emotion, empathy, and sentiment classification tasks with the following baselines: (1) Source-Only, which uses the source domain for fine-tuning BERT (the training portion) and the target domain for the evaluation (the test portion), and is the same for all our three settings; (2) Target-Only, which uses the target domain for both training and evaluation of BERT with few-shot data; (3) UDA, unsupervised domain adaptation, where the source domain training set is used to train a model and make predictions on unlabeled data from the target domain; the generated pseudo-labels are then added to the source domain training set iteratively based on the selection threshold; (4) SSL, semi-supervised learning, where the target domain training set is used to train a model and make predictions on unlabeled data from the target domain; the generated pseudo-labels are then added to the target domain training set iteratively based on the selection threshold; (5) MixText (Chen et al., 2020), which guesses low-entropy labels for unlabeled target data and uses mixup to interpolate labeled and unlabeled samples; (6) FliText, which leverages convolution networks to achieve faster and lighter semi-supervised text classification; (7) Unsupervised Data Augmentation, which enhances training by augmenting unlabeled data and promoting consistency between augmented versions; (8) Co-training with task decomposition, where the SSDA is decomposed into the two components of UDA and SSL; (9) Co-training with task decomposition and mixup (Yang et al., 2021), where the pseudo-labels generated by the source domain classifier are interpolated with the target training data; and (10) Standard supervised learning with the full training and test sets of the domains separately (as an upper bound of performance).

4BERT-base yields the best results in our experiments, so we only report the results using this model.

| Domain | Sample | Label |
|---|---|---|
| Health | Yes. I lost my first wife to cancer at 31 and was wrecked with guilt that I didn't do enough to help her. After a while, I finally realized that we all have a time and when it's up, no one or nothing can change that. | not empathetic |
| TV Series | Will you marry me? | Fear |
| IMDB | This solid little horror film is actually one of Renny Harlin's best. The story is pretty routine stuff, but the atmosphere is what really makes it come alive; in fact, the ghost story is almost an afterthought. The real horror comes from the prison setting itself, and Renny H. spares no detail in showing us how bad the conditions are inside that crumbling, leaking, rat-infested old hellhole (with a sadistic warden, too!) Viggo Mortensen is excellent as usual in the lead role, supported by some very authentic-looking prisoners (there are no pretty boys in this cast.) Horror fans should check this one out. | Negative |
## 3.3 Results

Table 1 compares our proposed approach and baseline methods on our classification tasks for different few-shot settings. We report the average performance over 3 distinct randomly sampled training and development splits with three random seeds to provide a robust measure of our few-shot performance. We make a few remarks below.

As we can see from Table 1, using task decomposition and cartography-based mixup, our proposed method achieves higher accuracy than any baseline in all the few-shot settings. The results suggest that incorporating cartography-based mixup to effectively denoise the generated pseudo-labels (see Ours in the tables) results in consistent improvement over all the few-shot settings. For example, on empathy (i.e., News to Health) with K = 4, our proposed approach increased the performance by a factor of 1.63 compared to Target-Only and by 1.55 compared to SSL. Interestingly, we also observe that our proposed approach outperforms standard mixup (i.e., Co-training with TD + Mixup), which signifies the importance and effectiveness of our proposed strategy of using training dynamics to characterize data and identify the correctly-labeled samples (easy-to-learn samples) for denoising the generated pseudo-labels through the mixup process. In the standard mixup, the interpolation process occurs between the generated pseudo-labels and all examples in the target labeled set, where it is possible for some of these examples to have incorrect labels. Table 2 shows examples with erroneous labels from all of our few-shot target labeled sets. Erroneous labels can occur due to human errors in annotations, even with small datasets, when using crowdsourcing techniques or relying on human annotators. It is also apparent from Table 1 that standard supervised learning using the full training and test sets from the respective domains results in a further increase in performance.

## 4 Conclusion

In this work, we extend the co-training strategy, a semi-supervised learning approach for multi-view data, to a single-view setting for NLP tasks and propose to decompose the SSDA framework and learn two distinct classifiers (one in a semi-supervised setup and another one in a domain adaptation setup) that teach each other so that both classifiers can excel in the target domain. We further propose a novel data cartography-based regularization technique for pseudo-label denoising that employs training dynamics to further hone our models' performance. Our preliminary results show that denoising the pseudo-labels of unlabeled target data using high-quality labeled target data within a co-training framework yields improvements in performance over multiple baselines.

## 5 Limitations

One potential limitation of our method is that it induces an extra cost for estimating the training dynamics statistics of the data samples in order to characterize them (e.g., easy-to-learn or ambiguous) based on how they contribute to the model's learning. This may be more expensive for tasks and datasets with a large number of classes. In the future, we will focus on approaches to characterize the training examples on the fly.

## Acknowledgments

We acknowledge NSF for support from grants IIS-1912887, IIS-2107487, and ITE-2137846. We also thank our reviewers for their insightful feedback and comments.
## References John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 120–128, Sydney, Australia. Association for Computational Linguistics. Avrim Blum and Tom M. Mitchell. 1998. Combining labeled and unlabeled data with co-training. In *Proceedings of the Eleventh Annual Conference on Computational Learning Theory, COLT 1998, Madison,* Wisconsin, USA, July 24-26, 1998, pages 92–100. ACM. Sven Buechel, Anneke Buffone, Barry Slaff, Lyle H. Ungar, and João Sedoc. 2018. Modeling empathy and distress in reaction to news stories. *CoRR*, abs/1808.10399. Jiaao Chen, Zichao Yang, and Diyi Yang. 2020. Mixtext: Linguistically-informed interpolation of hidden space for semi-supervised text classification. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 2147–2157. Association for Computational Linguistics. Minmin Chen, Kilian Q. Weinberger, and Yixin Chen. 2011. Automatic feature decomposition for single view co-training. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011, pages 953–960. Omnipress. Li Cheng and Sinno Jialin Pan. 2014. Semi-supervised domain adaptation on manifolds. *IEEE Trans. Neural* Networks Learn. Syst., 25(12):2240–2249. Hal Daumé III. 2007. Frustratingly easy domain adaptation. In ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, June 23-30, 2007, Prague, Czech Republic. The Association for Computational Linguistics. Hal Daumé III, Abhishek Kumar, and Avishek Saha. 2010. Co-regularization based semi-supervised domain adaptation. In *Advances in Neural Information* Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010. Proceedings of a meeting held 6-9 December 2010, Vancouver, British Columbia, Canada, pages 478–486. Curran Associates, Inc. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers). Association for Computational Linguistics. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor S. Lempitsky. 2016. Domain-adversarial training of neural networks. J. Mach. Learn. Res., 17:59:1–59:35. Mahshid Hosseini and Cornelia Caragea. 2021a. Distilling knowledge for empathy detection. In Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November, 2021, pages 3713– 3724. Association for Computational Linguistics. Mahshid Hosseini and Cornelia Caragea. 2021b. It takes two to empathize: One to seek and one to provide. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 13018–13026. AAAI Press. Mahshid Hosseini and Cornelia Caragea. 2023. 
Feature normalization and cartography-based demonstrations for prompt-based fine-tuning on emotionrelated tasks. In Thirty-Seventh AAAI Conference on Artificial Intelligence, AAAI 2023, Washington DC, February 7-14. AAAI Press. Taekyung Kim and Changick Kim. 2020. Attract, perturb, and explore: Learning a feature alignment network for semi-supervised domain adaptation. In *European conference on computer vision*, pages 591– 607. Springer. Kevin J Liang, Samrudhdhi B Rangrej, Vladan Petrovic, and Tal Hassner. 2022. Few-shot learning with noisy labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9089–9098. Chen Liu, Mengchao Zhang, Zhibing Fu, Panpan Hou, and Yu Li. 2021. Flitext: A faster and lighter semisupervised text classification with convolution networks. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 2481– 2491. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In *Proceedings of the 49th Annual Meeting of the* Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics. Nghia Trung Ngo, Linh Ngo Van, and Thien Huu Nguyen. 2022. Unsupervised domain adaptation for text classification via meta self-paced learning. In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, pages 4741–4752. International Committee on Computational Linguistics. Curtis G. Northcutt, Anish Athalye, and Jonas Mueller. 2021. Pervasive label errors in test sets destabilize machine learning benchmarks. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual. Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2019. MELD: A multimodal multi-party dataset for emotion recognition in conversations. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 527–536. Association for Computational Linguistics. Siyuan Qiao, Wei Shen, Zhishuai Zhang, Bo Wang, and Alan L. Yuille. 2018. Deep co-training for semisupervised image recognition. In *Computer Vision* - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part XV, volume 11219 of Lecture Notes in Computer Science, pages 142–159. Springer. Can Qin, Lichen Wang, Qianqian Ma, Yu Yin, Huan Wang, and Yun Fu. 2020. Opposite structure learning for semi-supervised domain adaptation. *CoRR*, abs/2002.02545. David S. Rosenberg and Peter L. Bartlett. 2007. The rademacher complexity of co-regularized kernel classes. In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, AISTATS 2007, San Juan, Puerto Rico, March 21-24, 2007, volume 2 of *JMLR Proceedings*, pages 396– 403. JMLR.org. 
Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Trevor Darrell, and Kate Saenko. 2019. Semi-supervised domain adaptation via minimax entropy. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 8049–8057. IEEE. Vikas Sindhwani and David S. Rosenberg. 2008. An RKHS for multi-view learning and manifold coregularization. In Machine Learning, Proceedings of the Twenty-Fifth International Conference (ICML 2008), Helsinki, Finland, June 5-9, 2008, volume 307 of *ACM International Conference Proceeding Series*, pages 976–983. ACM. Qian Sun, Rita Chattopadhyay, Sethuraman Panchanathan, and Jieping Ye. 2011. A two-stage weighting framework for multi-source domain adaptation. In Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems 2011. Proceedings of a meeting held 12-14 December 2011, Granada, Spain, pages 505–513. Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dynamics. In *Proceedings of EMNLP 2020, Online, November* 16-20, 2020, pages 9275–9293. Association for Computational Linguistics. Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. 2017. Adversarial discriminative domain adaptation. In *2017 IEEE Conference on Computer* Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 2962–2971. IEEE Computer Society. Qizhe Xie, Zihang Dai, Eduard H. Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. In *Advances in Neural* Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Luyu Yang, Yan Wang, Mingfei Gao, Abhinav Shrivastava, Kilian Q. Weinberger, Wei-Lun Chao, and SerNam Lim. 2021. Deep co-training with task decomposition for semi-supervised domain adaptation. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, pages 8886–8896. IEEE. Yuewei Yang, Kevin J. Liang, and Lawrence Carin. 2020. Object detection as a positive-unlabeled problem. In 31st British Machine Vision Conference 2020, BMVC 2020, Virtual Event, UK, September 7-10, 2020. BMVA Press. Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empirical risk minimization. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 712, 2015, Montreal, Quebec, Canada. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 5 ✗ A2. Did you discuss any potential risks of your work? Our work does not have any potential risks ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 for introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. 
Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** 3 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 3 ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 2 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 3.3 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 3.2 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
welivita-pu-2023-boosting
Boosting Distress Support Dialogue Responses with Motivational Interviewing Strategy
https://aclanthology.org/2023.findings-acl.334
AI-driven chatbots have become an emerging solution to address psychological distress. Due to the lack of psychotherapeutic data, researchers use dialogues scraped from online peer support forums to train them. But since the responses in such platforms are not given by professionals, they contain both conforming and non-conforming responses. In this work, we attempt to recognize these conforming and non-conforming response types present in online distress-support dialogues using labels adapted from a well-established behavioral coding scheme named Motivational Interviewing Treatment Integrity (MITI) code and show how some response types could be rephrased into a more MI adherent form that can, in turn, enable chatbot responses to be more compliant with the MI strategy. As a proof of concept, we build several rephrasers by fine-tuning Blender and GPT3 to rephrase MI non-adherent Advise without permission responses into Advise with permission. We show how this can be achieved with the construction of pseudo-parallel corpora avoiding costs for human labor. Through automatic and human evaluation we show that in the presence of less training data, techniques such as prompting and data augmentation can be used to produce substantially good rephrasings that reflect the intended style and preserve the content of the original text.
# Boosting Distress Support Dialogue Responses With Motivational Interviewing Strategy

Anuradha Welivita and Pearl Pu School of Computer and Communication Sciences École Polytechnique Fédérale de Lausanne Switzerland {kalpani.welivita,pearl.pu}@epfl.ch

## Abstract

![0_Image_0.Png](0_Image_0.Png)

Figure 1: Example of detecting unfavourable and favourable response types in distress support dialogues and boosting the responses by omitting unfavourable responses or rephrasing them into more favourable ones.

AI-driven chatbots have become an emerging solution to address psychological distress. Due to the lack of psychotherapeutic data, researchers use dialogues scraped from online peer support forums to train them. But since the responses in such platforms are not given by professionals, they contain both conforming and non-conforming responses. In this work, we attempt to recognize these conforming and non-conforming response types present in online distress-support dialogues using labels adapted from a well-established behavioral coding scheme named the Motivational Interviewing Treatment Integrity (MITI) code, and show how some response types could be rephrased into a more MI-adherent form that can, in turn, enable chatbot responses to be more compliant with the MI strategy. As a proof of concept, we build several rephrasers by fine-tuning Blender and GPT3 to rephrase MI non-adherent *Advise without permission* responses into *Advise with permission*. We show how this can be achieved with the construction of pseudo-parallel corpora, avoiding costs for human labor. Through automatic and human evaluation, we show that in the presence of less training data, techniques such as prompting and data augmentation can be used to produce substantially good rephrasings that reflect the intended style and preserve the content of the original text.

## 1 Introduction

Demands of the modern world are increasingly responsible for causing severe psychological distress in people. The World Health Organization estimates that psychological distress affects 29% of people in their lifetime (Steel et al., 2014). The shortage of mental health workers and the stigma associated with mental health further demotivate people from actively seeking help. With the expansion of the internet, many people are seen resorting to peer support platforms such as Reddit and Talklife to vent their distress.1 The anonymity associated with these platforms makes it easier for people to discuss their concerns without being affected by the stigma. Distress consolation through AI-driven chatbots has also become an emerging solution (Fitzpatrick et al., 2017; Inkster et al., 2018; Mousavi et al., 2021). Due to the lack of availability of large-scale psychotherapeutic conversations, researchers are using data scraped from online peer support forums to train such chatbots (Alambo et al., 2019; Welivita and Pu, 2022). High levels of perceived empathy and information richness make them good candidates for training (Nambisan, 2011; De Choudhury and De, 2014; Sharma et al., 2020a,b). But since peers are not professionals, the responses contained in such forums can sometimes be unfavourable for addressing distress (e.g., confrontations, judgments, orders, etc.). So, using this data can have severe risks. One solution for this is identifying favourable and unfavourable response types that appear in distress support dialogues and developing automatic means that can propose omission or rephrasing of such unfavourable response types.

1www.reddit.com; www.talklife.com
Figure 1 shows an example. To analyze the types of responses in distress support dialogues, we use labels adapted from a wellestablished behavioral coding system named Motivational Interviewing Treatment Integrity (MITI) code (Moyers et al., 2014). It is used in psychology to evaluate how well a mental health provider responds. Specific response types from the MITI code have shown to increase the likelihood of positive health outcomes (Pérez-Rosas et al., 2018; Gaume et al., 2009). It defines favourable response types such as Questioning, *Reflecting*, and *Advising with permission* and unfavourable response types such as Advising without permission, *Confronting*, and *Self-Disclosing (extra-session)*. In our previous work, we developed a dataset called the MI dataset, to have a comparative understanding of the differences between online support provided by peers and trained counselors. For this, we hired professional counselors to annotate responses given by peers and counselors with labels derived from the MITI code. During analysis, we observed that peers' responses tend to be more supportive, and encouraging than counselors' (as observed by the increased percentage of *Support* and *Affirm* labels). But it was also observed that important therapeutic techniques, such as asking more *open questions* than *closed* ones, *reflections*, giving information, *advices with permission*, and emphasizing speaker's autonomy were lacking in peers' responses and hence require further boosting. One of the major observations was that among the advises given by the peers, 92.86% of them belonged to the category *Advise without permission*, which is MI non-adherent. This percentage was lower in counselor responses, but still accounted for 77.22% of the advises given by counselors. In this work, we aim to detect such *Advise without permission* responses among distress support dialogues and build a rephraser that can rephrase such responses into *Advise with permission*, which is more MI-adherent. First, we detect such responses through a classifier trained on an augmented version of the MI dataset. Next, as we do not have human written responses rephrasing *Advise without* permission responses into *Advise with permission*, we use automatic methods such as template-based replacement and retrieval to construct a pseudoparallel training corpus containing pairs of Advise without permission and *Advise with permission* sentences. Since rephrasing is a labor-intensive task compared to labeling and we require professionally trained counselors to do this in the distress consolation setting, using our already labeled dataset to construct a pseudo-parallel corpus saved us both time and cost. We apply the same methods on the augmented version of the MI dataset to form a much larger pseudo-parallel training corpus and use these corpora to fine-tune BlenderBot (Roller et al., 2021) and GPT3 (Brown et al., 2020). Some of the models we fine-tune incorporate different forms of prompting with the aim of obtaining a better outcome with less training examples. We evaluate the rephrasers using automatic and human evaluation. The results mainly show when the training dataset is small, prompting improves the performance of the rephrasers across style transfer and semantic similarity dimensions. They also suggest that when the training dataset is large (in our case through data augmentation), pseudo-parallel data generated through simpler methods such as template replacement produce better results. 
Our contributions are four-fold. 1) We develop an MI classifier that can predict 15 different favourable and unfavourable response types derived from the MITI code. 2) We propose a methodology to rephrase responses detected as *Advise without Permission* into more MI-adherent *Advise with Permission*. We show how this can be done in the absence of human-written rephrasings by developing pseudo-parallel corpora using different automatic methods. 3) We evaluate these rephrasers using automatic and human evaluation and show how prompting and data augmentation can improve the performance of the rephrasers when there is less training data. 4) Finally, we discuss how this method can be applied to boost chatbot responses, making them more compliant with the MI strategy. Our code and the datasets can be found at https://github.com/anuradha1992/Boosting-with-MI-Strategy

## 2 Related Work

Rephrasing responses recognized as *Advise without Permission* into *Advise with Permission* can be identified as a sub-task falling under the task of Text Style Transfer (TST), in which the goal is to automatically control the style attributes (e.g., sentiment, politeness, humor, etc.) of text while preserving the content (Jin et al., 2022). The field of TST involves traditional linguistic approaches as well as deep learning approaches. Traditional approaches to TST rely on term replacement and templates (Mairesse and Walker, 2011; Sheikha and Inkpen, 2011). With the success of deep learning, various neural methods have recently been proposed for TST. Given datasets in which there are direct mappings between the text of the source style and the text of the target style, which are referred to as parallel corpora, standard sequence-to-sequence models are often directly applied for TST (Rao and Tetreault, 2018; Shang et al., 2019; Xu et al., 2019). But parallel corpora are challenging to find because the development of such data often requires costly human labor. Thus, TST on non-parallel corpora has become an emerging area of research (Li et al., 2018; Jin et al., 2019; Liu et al., 2022). Parallel and non-parallel datasets have been proposed for common sub-tasks of TST such as sentiment (Shen et al., 2017), topic (Huang et al., 2020), formality (Rao and Tetreault, 2018), politeness (Madaan et al., 2020), and humor (Gan et al., 2017) transfer. But to the best of our knowledge, this is the first attempt at introducing a new sub-task and releasing a nonparallel corpus for style transfer between MI non-adherent *Advise without Permission* and MI-adherent *Advise with Permission* responses. This task is more challenging than the other sub-tasks because it requires the expertise of professional counselors to generate training data. In this work, we release a nonparallel corpus that can be utilized for this task, which is annotated by professional counselors. We also show how automatic methods can be applied to create pseudo-parallel corpora using this dataset, which can be used to train neural models for this task.

## 3 Datasets

For this work, we used dialogues curated from two online support platforms. The first one is CounselChat (counselchat.com), in which verified counselors respond to distress-related posts. The CounselChat dataset, available publicly2, contains 2,129 post-response pairs spanning 31 distress-related topics.
We also curated dialogues from a carefully selected set of 8 subreddits: *mentalhealthsupport*; *offmychest*; *sad*; *suicidewatch*; *anxietyhelp*; *depression*; *depressed*; and *depression_help*, which are popular among Reddit users for venting their distress. This dataset, which we call RED (Reddit Emotional Distress), contains 1,275,486 dyadic conversations with an average of 2.66 turns per dialogue.

²https://github.com/nbertagnolli/counsel-chat

In our previous work, we recruited professional counselors to annotate a subset of 1,000 dialogues each from the CounselChat and RED datasets with labels adapted from the MITI code 2.0 (Moyers et al., 2003) and 4.2.1 (Moyers et al., 2014). We call this the MI dataset. We used 15 labels for annotation, which are elaborated in the appendices. Among them, we are interested in the labels *Advise with Permission* and *Advise without Permission*, which are respectively considered MI-adherent and MI non-adherent response types. The MI dataset contains 16,811 annotated responses, out of which 2.87% (484) and 13.5% (2,285) are labeled as *Advise with Permission* and *Advise without Permission*, respectively.

To further augment the MI dataset, we used automatic labeling to extend the 15 labels to unlabeled dialogue responses from the CounselChat and RED datasets. We used two automatic methods for this purpose: 1) N-gram-based matching; and 2) similarity-based retrieval.

N-gram Based Matching: By tokenizing the responses in the MI dataset and computing frequencies, we discovered the most frequent N-grams (four-grams and five-grams) occurring for each of the 15 labels. Examples are shown in the appendices. Next, we searched for the presence of these indicative N-grams (first five-grams and then four-grams) among the individual sentences that appear in the dialogue responses of the unlabeled CounselChat and RED datasets. If an indicative N-gram was found in a sentence, we labeled that sentence with the label the N-gram is indicative of. Sentences with overlapping labels were discarded due to ambiguity. In this way, we were able to automatically label 1,918 and 340,361 sentences in the CounselChat and RED datasets, respectively.

Similarity Based Retrieval: For each unlabeled sentence among the responses in the CounselChat and RED datasets, we computed the cosine similarity with each of the labeled sentences in the MI dataset. Next, for each unlabeled sentence, we retrieved the labeled sentences whose cosine similarity is higher than a certain threshold (the thresholds differed across the 15 labels and were selected after manually inspecting randomly selected pairs of unlabeled and labeled sentences corresponding to different labels). We then used a majority voting scheme to select the label to associate with the unlabeled sentence. When we encountered ties, we computed the average similarity within each tied label's cluster of retrieved sentences and selected the label with the maximum average similarity. Using this method, we were able to automatically annotate 2,881 and 1,196,012 sentences in the CounselChat and RED datasets, respectively.

Using the union and the intersection of the labels retrieved from N-gram-based matching and similarity-based retrieval, and combining them with the gold labels from the MI dataset, we created two augmented-labeled MI datasets containing 1,378,469 and 84,052 labeled sentences, respectively. For simplicity, we will refer to them as the MI Augmented (Union) and MI Augmented (Intersection) datasets.
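To make the retrieval-based labeling step concrete, the following is a minimal sketch of the procedure. It is an illustration rather than our exact pipeline: the input lists and the per-label `thresholds` dictionary are placeholders, and the Sentence-BERT model name follows the one reported in Appendix A.4.

```python
# Sketch of similarity-based retrieval labeling with majority voting and
# average-similarity tie-breaking (Section 3). Inputs and per-label
# thresholds are illustrative placeholders.
from collections import Counter, defaultdict
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("roberta-base-nli-stsb-mean-tokens")

def label_by_retrieval(unlabeled, labeled, thresholds):
    """unlabeled: list of sentences; labeled: list of (sentence, label) pairs
    from the MI dataset; thresholds: dict mapping label -> cosine threshold."""
    lab_sents, lab_labels = zip(*labeled)
    unl_emb = encoder.encode(list(unlabeled), convert_to_tensor=True)
    lab_emb = encoder.encode(list(lab_sents), convert_to_tensor=True)
    sims = util.cos_sim(unl_emb, lab_emb)  # shape: (num_unlabeled, num_labeled)

    assignments = {}
    for i, sentence in enumerate(unlabeled):
        votes, scores = Counter(), defaultdict(list)
        for j, label in enumerate(lab_labels):
            score = sims[i][j].item()
            if score >= thresholds[label]:
                votes[label] += 1
                scores[label].append(score)
        if not votes:
            continue  # below every threshold: the sentence stays unlabeled
        top_count = votes.most_common(1)[0][1]
        tied = [lab for lab, count in votes.items() if count == top_count]
        # Break ties by the highest average similarity within each label cluster.
        assignments[sentence] = max(
            tied, key=lambda lab: sum(scores[lab]) / len(scores[lab]))
    return assignments
```

At the scale of the RED dataset, the pairwise similarity computation would of course be batched or approximated with a nearest-neighbour index; the sketch only shows the labeling logic.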
## 4 MI Classifier

We developed a classifier to automatically classify responses in distress-support dialogues into one of the 15 labels mentioned above. This is an important step that precedes rephrasing, since the unfavourable response types must first be identified. For this purpose, we developed a classifier that consists of a representation network that uses the BERT architecture (Devlin et al., 2019), an attention layer that aggregates the hidden states at each time step, a hidden layer, and a softmax layer. We used the BERT-base architecture with 12 layers, 768 dimensions, 12 heads, and 110M parameters as the representation network. It was initialized with weights from RoBERTa (Liu et al., 2019).

We trained three classifiers. The first was trained on the smaller human-annotated MI dataset (MI Gold), taking 80% of the data for training and leaving 10% each for validation and testing. The other two were trained on the MI Augmented (Union) and MI Augmented (Intersection) datasets, leaving out the data used for validation and testing in the first case. In all cases, the optimal model was chosen based on the average cross-entropy loss calculated between the ground-truth and predicted labels on the human-annotated validation set. The classifiers trained on the MI Gold, MI Augmented (Intersection), and MI Augmented (Union) datasets reported accuracies of 68.31%, 67.13%, and 73.44% on the MI Gold test set, respectively. The corresponding accuracies on the MI Gold validation set were 67.08%, 64.07%, and 72.67%. Accordingly, the labels collected through the union of N-gram matching and cosine similarity-based retrieval improved the accuracy of the classifier by 8.33% and 7.5% (relative) on the validation and test sets, respectively, compared to the accuracies reported when training on the gold-labeled MI dataset.

## 5 MI Rephraser

After identifying the favourable and unfavourable response types, we can choose to omit the unfavourable responses or, if possible, rephrase them into a more MI-adherent form. A label pair to which this rephrasing strategy can be applied directly is *Advise without Permission* and *Advise with Permission*. Through N-gram analysis, we discovered N-gram patterns that are indicative of *Advise without Permission* (e.g., *You should*, *You need to*, *You mustn't*) and of *Advise with Permission* (e.g., *It may be helpful to*, *I wonder if you can*, *You may want to consider*). These can be identified as style attributes that vary between responses identified as *Advise without Permission* and *Advise with Permission*. Thus, given a response identified as *Advise without Permission*, the goal of the rephraser is to rephrase the response so that it is indicative of *Advise with Permission*, without changing the semantic content of the response.

As mentioned in Section 2, this can be identified as a sub-task of Text Style Transfer (TST). TST is formally defined as follows: given a target utterance x′ and the target discourse style attribute a′, model p(x′ | a′, x), where x is a given text carrying a source attribute value a. In our case, x corresponds to the response identified as *Advise without Permission*, a corresponds to *Advise without Permission*, and a′ corresponds to *Advise with Permission*.

## 5.1 Pseudo-Parallel Corpora

As discussed in Section 2, the most recent methods for TST involve data-driven deep learning models.
The prerequisite for using such models is that style-specific corpora exist for each style of interest, either parallel or non-parallel. With the human-annotated MI dataset, we are in possession of a non-parallel corpus containing 2,285 *Advise without Permission* and 484 *Advise with Permission* responses. With the MI Augmented (Union) dataset, we have 199,885 *Advise without Permission* and 3,541 *Advise with Permission* responses. Since creating parallel corpora consumes human labor and cost, we used the above data to create pseudo-parallel corpora that contain pairs of *Advise without Permission* and *Advise with Permission* responses to train our rephrasers. We used two automatic methods to create these pseudo-parallel corpora: 1) a template-based replacement method; and 2) a retrieval method.

## 5.1.1 Template-Based Replacement Method

We used frequency-based N-gram analysis accompanied by human inspection to determine the linguistic templates that represent *Advise with Permission* and *Advise without Permission* responses. The table below shows some templates discovered for *Advise without Permission* (left) and *Advise with Permission* (right); the full list appears in the appendix (Table 11). In template-based replacement, if the algorithm detects any linguistic template from the left column in a response labeled as *Advise without Permission*, it randomly selects a template from the right column to replace it with, giving a pair of *Advise without Permission* and *Advise with Permission* responses that contain the same semantic content but differ in style.

| Advise without Permission | Advise with Permission |
|---|---|
| *You can* (verb) | *It may be helpful to* (verb) |
| *You could* (verb) | *You may want to* (verb) |
| *You need to* (verb) | *I encourage you to* (verb) |
| *You should* (verb) | *Perhaps you can* (verb) |
| (Verb) | (Verb)*, if you would like.* |

We constructed two pseudo-parallel corpora by applying this method to the MI Gold and MI Augmented (Union) datasets, which contained 2,285 and 199,885 responses labeled as *Advise without Permission*, respectively. They yielded 240 and 38,559 response pairs, respectively.

## 5.1.2 Retrieval Method

Given the non-parallel corpus containing *Advise without Permission* and *Advise with Permission* responses, we computed the semantic similarity between the *Advise without Permission* and *Advise with Permission* responses and retrieved the response pairs whose similarity is above a certain threshold. We used Sentence-BERT (Reimers and Gurevych, 2019) to generate embeddings of the two types of responses and compared them using cosine similarity. After manually inspecting a random subset of response pairs over a range of similarity thresholds, we chose 0.7 as the final threshold for determining semantically similar response pairs. Similar to template-based replacement, we used this method to construct two pseudo-parallel corpora by applying it to the gold-labeled and augmented-labeled MI datasets, obtaining 104 and 54,956 response pairs, respectively.

For simplicity, we will refer to the corpus constructed using the gold-labeled MI dataset as the pseudo-parallel (PP) corpus and the corpus constructed using the augmented-labeled MI dataset as the pseudo-parallel augmented (PPA) corpus. We used 80% of the data from each of the corpora for training our rephrasers, and 10% each for validation and testing. In Section 7, we gauge the quality of the above corpora using human ratings.
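A minimal sketch of the template-based replacement step follows. The template inventories here are a small illustrative excerpt (the full inventory is in the appendix), and casing and punctuation handling are simplified relative to the actual corpus construction.

```python
# Sketch of template-based replacement for building pseudo-parallel
# (Advise without Permission, Advise with Permission) pairs (Section 5.1.1).
# The template lists below are illustrative excerpts only.
import random
import re

WITHOUT_PERMISSION = [r"\byou should\b", r"\byou need to\b", r"\byou could\b", r"\byou can\b"]
WITH_PERMISSION = ["it may be helpful to", "you may want to",
                   "I encourage you to", "perhaps you can"]

def make_pseudo_pair(sentence):
    """Return (original, rephrased) if an Advise-without-Permission template
    is found, swapping it for a randomly chosen Advise-with-Permission
    template; return None otherwise."""
    for pattern in WITHOUT_PERMISSION:
        if re.search(pattern, sentence, flags=re.IGNORECASE):
            target = random.choice(WITH_PERMISSION)
            rephrased = re.sub(pattern, target, sentence, count=1, flags=re.IGNORECASE)
            return sentence, rephrased
    return None

# Usage on responses labeled Advise without Permission (hypothetical list):
# pairs = [p for s in advise_without_permission if (p := make_pseudo_pair(s)) is not None]
```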
## 5.2 Rephrasing Models

Using the above corpora, we fine-tuned two pre-trained language generation architectures, Blender (Roller et al., 2021) and GPT-3 (Brown et al., 2020). Blender is a standard Seq2Seq transformer-based dialogue model. We used the 90M-parameter version of Blender. Though it is a dialogue generation model, we used it mainly because it is pre-trained on Reddit discussions containing ≈1.5B comments and is already aware of the language constructs used in peer support. GPT-3 is a language model that uses a standard transformer network with 175 billion parameters. We used the smallest but fastest version of GPT-3, Ada, to build our rephrasers. The main reason to use GPT-3 is that it has demonstrated strong few-shot learning capability on many text-based tasks. Both Blender and GPT-3 were fine-tuned on the template-based, retrieval-based, and combined PP and PPA corpora.

Prior work has shown that large language models can perform various tasks given a clever prompt prepended to the input (Brown et al., 2020). So, we developed two variations of the Blender and GPT-3 models by appending a generic prompt or an N-gram-based prompt to the end of the training inputs. In generic prompting, we simply appended the label *Advise with permission:* to the end of the input text. In N-gram prompting, we detected whether there is any N-gram indicative of *Advise with permission* in the output text; if so, we appended it to the end of the input text. Table 2 shows training examples with generic and N-gram-based prompts.

Training example with generic prompting:
Input: *try to learn from your mistakes and meet some new people. Advise with permission:*
Output: *It may be important to try to learn from your mistakes and meet some new people.*

Training example with N-gram-based prompting:
Input: *try to learn from your mistakes and meet some new people. It may be important to:*
Output: **It may be important to** *try to learn from your mistakes and meet some new people.*

Table 2: Examples with generic and N-gram prompts.

Altogether, we developed 10 different rephrasing models by fine-tuning Blender and GPT-3 on: 1) template-based PP and PPA corpora; 2) retrieval-based PP and PPA corpora; 3) combined template-based and retrieval-based PP and PPA corpora; 4) combined template- and retrieval-based PP and PPA corpora appending generic prompts; and 5) combined template- and retrieval-based PP and PPA corpora appending N-gram prompts. Some examples of the rephrased output of these different models are shown in the appendices.

## 6 Automatic Evaluation

A successful style-transferred output should demonstrate the correct target style and, at the same time, preserve the semantic content of the original text (Jin et al., 2022; Fu et al., 2018). We refer to the first criterion as *Style Transfer Strength* and the second as *Semantic Similarity*. Automatic metrics used to evaluate text generation methods, such as the BLEU score (Papineni et al., 2002), ROUGE (Lin and Och, 2004), METEOR (Banerjee and Lavie, 2005), Word Mover Distance (WMD) (Kusner et al., 2015), Character N-gram F-score (chrF) (Popović, 2015), BERTScore (Zhang et al., 2019), and cosine similarity based on sentence embeddings (Reimers and Gurevych, 2019), are used in the literature to evaluate the semantic similarity between the original and the rephrased text. The Part-of-Speech distance (Tian et al., 2018), a metric specific to TST, is also used to measure semantic similarity. Mir et al.
(2019) suggest deleting all attribute-related expressions in the text when applying these metrics to evaluate the output of TST tasks. Thus, before evaluation, we removed the style-specific phrases discovered during N-gram analysis from the input and output text. To evaluate the style transfer strength, most works use a style classifier to predict whether the output conforms to the target style (Hu et al., 2017; Li et al., 2018; Prabhumoye et al., 2018). We used the MI classifier trained on the MI Augmented (Union) dataset to compute the style transfer strength. It is calculated as the percentage of samples classified as *Advise with Permission* out of all test samples.

Table 3 shows the results of the automatic evaluation of the rephrasers on the combined PP test dataset, which contains data from both the template-based and retrieval-based PP test sets. Accordingly, GPT-3-based rephrasers show better performance than Blender-based rephrasers 85% of the time across the metrics. It can also be observed that data augmentation improves the scores across most metrics, irrespective of the backbone model used. Combining the pseudo-parallel corpora obtained from the template-based and retrieval-based methods improved the performance scores of Blender-based rephrasers across most automatic metrics. But GPT-3-based rephrasers trained only on template-based pseudo-parallel data achieve better scores across almost all the metrics compared to those trained on retrieval-based and combined corpora.

Blender-based rephrasers that incorporated generic prompting ranked best across most metrics among all the Blender-based rephrasers. With the smaller PP training corpus, the GPT-3-based rephraser that incorporated generic prompting ranked best across most metrics. But with the larger PPA training corpus, the GPT-3-based rephraser trained on the simple template-replaced pseudo-parallel corpus ranked best across most automatic metrics.

## 7 Human Evaluation

Similar to the automatic evaluation, we used two human evaluation criteria to rate the rephrased sentences. The first is how close the rephrased sentence is to *Advise with permission* (style transfer strength). The second is to what extent the rephrased sentence preserves the context/meaning of the original sentence (semantic similarity). We used the UpWork crowdsourcing platform (www.upwork.com) and recruited four professional counselors to rate the rephrased sentences. Given the original *Advise without Permission* sentence and a list of rephrased sentences generated by the 10 different rephrasers, we asked the counselors two questions: 1) Is the rephrased sentence indicative of *Advise with permission*?; and 2) Does the rephrased sentence preserve the original context? The counselors were asked to answer these questions by indicating a rating on a Likert scale ranging from 0 (*Not at all*) to 4 (*Yes it is*).
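For reference, the two automatic evaluation dimensions defined in Section 6 can be approximated as in the sketch below. The `mi_classifier` callable is a hypothetical wrapper around the fine-tuned MI classifier, and the list of style phrases stripped before scoring (following Mir et al., 2019) is an illustrative subset only.

```python
# Sketch of the automatic evaluation in Section 6: style transfer strength via
# the MI style classifier and semantic similarity via sentence-embedding cosine.
# `mi_classifier(text) -> label` is a hypothetical interface; STYLE_PHRASES is
# an illustrative subset of the attribute phrases removed before scoring.
import re
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("roberta-base-nli-stsb-mean-tokens")
STYLE_PHRASES = [r"\byou should\b", r"\bit may be helpful to\b", r"\byou may want to\b"]

def strip_style_phrases(text):
    # Remove attribute-related expressions before measuring content preservation.
    for phrase in STYLE_PHRASES:
        text = re.sub(phrase, " ", text, flags=re.IGNORECASE)
    return " ".join(text.split())

def style_transfer_strength(outputs, mi_classifier):
    # Percentage of rephrased outputs classified as Advise with Permission.
    hits = sum(mi_classifier(out) == "Advise with Permission" for out in outputs)
    return 100.0 * hits / len(outputs)

def semantic_similarity(sources, outputs):
    # Mean cosine similarity between style-stripped source and output sentences.
    src = encoder.encode([strip_style_phrases(s) for s in sources], convert_to_tensor=True)
    out = encoder.encode([strip_style_phrases(o) for o in outputs], convert_to_tensor=True)
    return util.cos_sim(src, out).diagonal().mean().item()
```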
| Criteria | Template (BB) | Template (GPT3) | Retrieval (BB) | Retrieval (GPT3) | Templ.+Retr. (BB) | Templ.+Retr. (GPT3) | Templ.+Retr., generic prompt (BB) | Templ.+Retr., generic prompt (GPT3) | Templ.+Retr., N-gram prompt (BB) | Templ.+Retr., N-gram prompt (GPT3) |
|---|---|---|---|---|---|---|---|---|---|---|
| **Training dataset: PP** | | | | | | | | | | |
| BLEU-1 | 0.1315 | 0.3464 | 0.0787 | 0.1308 | 0.1429 | 0.2977 | 0.1763 | 0.3821 | 0.1585 | 0.2751 |
| BLEU-2 | 0.0366 | 0.3225 | 0.0131 | 0.0501 | 0.0496 | 0.2671 | 0.0613 | 0.3556 | 0.0677 | 0.2374 |
| BLEU-3 | 0.0046 | 0.3120 | 0.0046 | 0.0328 | 0.0000 | 0.2543 | 0.0031 | 0.3465 | 0.0000 | 0.2269 |
| BLEU-4 | 0.0033 | 0.2994 | 0.0000 | 0.0326 | 0.0000 | 0.2262 | 0.0000 | 0.3301 | 0.0000 | 0.2164 |
| ROUGE-L | 0.1760 | 0.5333 | 0.1176 | 0.1608 | 0.1843 | 0.4495 | 0.2167 | 0.5450 | 0.2135 | 0.4404 |
| METEOR | 0.1568 | 0.4622 | 0.0994 | 0.1323 | 0.1879 | 0.4210 | 0.2084 | 0.5014 | 0.2108 | 0.3726 |
| WMD ↓ | 1.0311 | 0.7068 | 1.1122 | 1.0800 | 1.0345 | 0.7928 | 1.0073 | 0.6746 | 1.0163 | 0.8447 |
| Chrf Score | 0.2690 | 0.5008 | 0.1678 | 0.2095 | 0.2690 | 0.4737 | 0.3082 | 0.5341 | 0.2955 | 0.4245 |
| BERTScore | 0.8656 | 0.9138 | 0.8382 | 0.8658 | 0.8683 | 0.9048 | 0.8821 | 0.9137 | 0.8693 | 0.9003 |
| POS dist. ↓ | 5.4771 | 2.5523 | 9.8218 | 7.1482 | 5.8271 | 2.7042 | 4.8378 | 2.5830 | 5.8854 | 3.6298 |
| Cos Similarity | 0.6116 | 0.7524 | 0.4429 | 0.4291 | 0.6129 | 0.6516 | 0.6918 | 0.7403 | 0.6571 | 0.6471 |
| Style Strength | 29.41 | 73.53 | 0.00 | 47.06 | 38.24 | 79.41 | 94.12 | 61.76 | 23.53 | 58.82 |
| **Training dataset: PPA** | | | | | | | | | | |
| BLEU-1 | 0.2039 | 0.3751 | 0.2122 | 0.0987 | 0.2308 | 0.3229 | 0.2588 | 0.3688 | 0.2021 | 0.3349 |
| BLEU-2 | 0.0913 | 0.3456 | 0.1468 | 0.0263 | 0.1591 | 0.2836 | 0.1849 | 0.3332 | 0.1455 | 0.3034 |
| BLEU-3 | 0.0031 | 0.3352 | 0.1370 | 0.0172 | 0.1319 | 0.2725 | 0.1536 | 0.3161 | 0.1239 | 0.2922 |
| BLEU-4 | 0.0000 | 0.3217 | 0.1286 | 0.0069 | 0.1213 | 0.2536 | 0.1437 | 0.2987 | 0.1169 | 0.2798 |
| ROUGE-L | 0.2642 | 0.5363 | 0.2419 | 0.1216 | 0.2718 | 0.4467 | 0.3016 | 0.5278 | 0.2352 | 0.5178 |
| METEOR | 0.3081 | 0.4673 | 0.2436 | 0.1063 | 0.2932 | 0.4261 | 0.3102 | 0.4607 | 0.2557 | 0.4381 |
| WMD ↓ | 0.9716 | 0.6849 | 1.0069 | 1.1584 | 0.9451 | 0.9754 | 0.9095 | 0.7258 | 1.0000 | 0.7927 |
| Chrf Score | 0.3758 | 0.5038 | 0.3550 | 0.1782 | 0.4005 | 0.4648 | 0.4048 | 0.5047 | 0.3672 | 0.4897 |
| BERTScore | 0.8770 | 0.9116 | 0.8748 | 0.8582 | 0.8795 | 0.9021 | 0.8837 | 0.9140 | 0.8700 | 0.9028 |
| POS dist. ↓ | 7.4745 | 1.9593 | 8.0439 | 7.0396 | 6.9338 | 2.8695 | 6.1747 | 2.6637 | 10.1620 | 3.0649 |
| Cos Similarity | 0.6428 | 0.7481 | 0.5910 | 0.4605 | 0.6277 | 0.6501 | 0.6303 | 0.7318 | 0.5717 | 0.6807 |
| Style Strength | 73.53 | 76.47 | 58.82 | 32.35 | 70.59 | 61.76 | 67.65 | 55.88 | 52.94 | 52.94 |

Table 3: Automatic evaluation results on the PP test set. Under each method (Template, Retrieval, etc.), the score of the rephraser that performs the best is made bold. The best score obtained for each of the BB- and GPT3-based rephrasers along each criterion is highlighted in green. Out of them, the best overall score is highlighted with a darker green.
| Criteria | Template (BB) | Template (GPT3) | Retrieval (BB) | Retrieval (GPT3) | Templ.+Retr. (BB) | Templ.+Retr. (GPT3) | Templ.+Retr., generic prompt (BB) | Templ.+Retr., generic prompt (GPT3) | Templ.+Retr., N-gram prompt (BB) | Templ.+Retr., N-gram prompt (GPT3) |
|---|---|---|---|---|---|---|---|---|---|---|
| **Training dataset: PP; Tested on: PP** | | | | | | | | | | |
| Semantic Similarity (SS) | 1.74 | **3.35** | 0.32 | **1.07** | 1.62 | **2.65** | 2.49 | **2.72** | 1.88 | **2.31** |
| Style Transfer Strength (STS) | 2.78 | **3.88** | 0.44 | **2.16** | 2.72 | 3.47 | **3.99** | 3.21 | 2.47 | **3.21** |
| Average of SS and STS | 2.26 | **3.62** | 0.54 | **1.62** | 2.17 | 3.06 | **3.24** | 2.97 | 2.18 | **2.76** |
| **Training dataset: PP; Tested on: PPA** | | | | | | | | | | |
| Semantic Similarity (SS) | **2.07** | 0.69 | 0.79 | **0.94** | 2.22 | **2.60** | 2.82 | **2.87** | 2.10 | **2.50** |
| Style Transfer Strength (STS) | 2.51 | **3.70** | 0.65 | **2.00** | 2.61 | 3.17 | **3.96** | 3.14 | 2.26 | **3.02** |
| Average of SS and STS | **2.29** | 2.20 | 0.72 | **1.47** | 2.42 | 2.89 | **3.39** | 3.01 | **3.23** | 2.76 |
| **Training dataset: PPA; Tested on: PP** | | | | | | | | | | |
| Semantic Similarity (SS) | 2.63 | 3.19 | **1.21** | 0.81 | 1.69 | **2.57** | 1.74 | **2.53** | 1.21 | **2.32** |
| Style Transfer Strength (STS) | **3.94** | 3.82 | **2.74** | 1.44 | 3.15 | **3.28** | 3.00 | **3.47** | 2.57 | **2.99** |
| Average of SS and STS | 3.29 | 3.51 | **1.98** | 1.13 | 2.42 | **2.93** | 2.37 | **3.00** | 1.89 | **2.66** |
| **Training dataset: PPA; Tested on: PPA** | | | | | | | | | | |
| Semantic Similarity (SS) | 2.78 | 3.26 | **1.40** | 1.00 | 1.70 | **2.31** | 1.71 | **2.36** | 1.22 | **2.31** |
| Style Transfer Strength (STS) | **3.92** | 3.82 | **2.30** | 1.92 | 2.59 | **2.85** | 2.60 | **3.06** | 2.40 | **2.98** |
| Average of SS and STS | 3.35 | 3.54 | **1.85** | 1.46 | **2.15** | **2.58** | 2.16 | **2.71** | 1.81 | **2.65** |

Table 4: Results of human evaluation. Under each methodology (Template, Retrieval, etc.), the score of the rephraser that performs the best is highlighted in bold. The best score obtained for each of the BB- and GPT3-based rephrasers along each criterion is highlighted in green. Out of them, the best overall score is highlighted with a darker green.

Along with the rephrased sentences, we also presented the counselors with the corresponding *Advise with permission* sentences obtained from the pseudo-parallel corpora, in order to gauge the quality of the corpora used for training. The sentences to be rated were presented in random order to reduce bias. As the combined PP test corpus developed on the MI Gold dataset is small (only 34 samples), we used 200 randomly selected samples from the combined PPA test corpus developed on the augmented MI dataset to be rated by the human workers. This was to verify the trend of results reported on the PP test corpus. We bundled 9 randomly selected test cases into one batch and allocated two workers to rate each batch. Results were calculated based on the average rating given by the two workers. Following Adiwardana et al. (2020), we also calculated the average of the style transfer strength and semantic similarity ratings to obtain a single score. We computed the inter-rater agreement based on weighted kappa using Fleiss-Cohen weights (Wan et al., 2015); the scores were 0.5870 (moderate agreement) and 0.6933 (substantial agreement) for style transfer strength and semantic similarity, respectively.

Table 4 shows the results of the human evaluation experiment. According to the results, GPT-3-based rephrasers win over Blender-based rephrasers 70% and 85% of the time along the style transfer and semantic similarity dimensions, respectively. When it comes to the smaller PP training corpus, using generic prompting during training increases the scores in most cases.
But when it comes to the larger PPA corpus, simply training the rephrasers with template-replaced pseudo-parallel pairs gives the best results irrespective of the underlying backbone model.

The average ratings obtained for *style transfer strength* and *semantic similarity* for the sentence pairs in the PP test corpus were 3.21 and 3.16, respectively. The sentence pairs in the PPA test corpus scored 3.12 and 2.69 along the same two dimensions. With the average ratings close to 3, and most of them above 3, this suggests that the training corpora used are of substantial quality.

## 8 Discussion

In this paper, we presented an example of how distress-consoling responses can be boosted with the MI strategy. For this, we first developed a classifier that can identify favourable and unfavourable response types as defined by the MITI code. We then narrowed our focus to the MI non-adherent response type *Advise without Permission* and developed several rephrasers that can rephrase *Advise without Permission* responses into the MI-adherent response type *Advise with Permission*. As curating human-written rephrasings was costly, we used template-based replacement and retrieval methods to create pseudo-parallel corpora from the gold-labeled and augmented-labeled MI datasets, which contain responses from the Reddit and CounselChat platforms. We used this data to train several Blender- and GPT-3-based rephrasers. We also used generic and N-gram-based prompts to see whether prompting can improve the rephrasers' performance.

Automatic as well as human evaluation results suggested that fine-tuning GPT-3 gives better results in rephrasing *Advise without permission* responses into *Advise with permission*. The data augmentation techniques we used, expanding the MITI labels through N-gram-based matching and similarity-based retrieval, improved the performance of the MI classifier as well as the Blender- and GPT-3-based rephrasers. The results also suggested that when the training datasets are small, the use of generic prompting can enable the rephrasing models to produce better results across the style transfer and semantic similarity dimensions. But when dealing with large datasets (in our case obtained through data augmentation), pseudo-parallel data generated through simpler methods such as template-based replacement can enable the models to generate good rephrasings that are close to the required style and semantically similar to the original sentence.

In the future, we hope to develop a chatbot that can respond to psychological distress using the RED dataset, which contains dialogues curated from several mental health-related subreddits. We then hope to improve the responses generated by this chatbot by applying MI boosting at two different levels: one at the data level and the other at the model level. In data-level boosting, we apply the MI classifier to automatically label the responses in the training data itself. By doing so, we can rephrase MI non-adherent responses such as *Advise without Permission* into more MI-adherent responses and omit the other unfavourable responses from the training data. The MI-boosted training data can then be used to train the chatbot. In model-level boosting, a similar methodology can be applied at the point where the chatbot decodes responses (e.g., during beam search). Not only generative chatbots but also retrieval-based chatbots could benefit from this methodology.
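As a concrete illustration of the data-level boosting described above, the following sketch filters and rephrases a training set using hypothetical `mi_classifier` and `rephraser` interfaces. It is an outline of the intended pipeline under these assumptions, not a tested implementation.

```python
# Sketch of data-level MI boosting: label each training response, rephrase
# Advise without Permission with the trained rephraser, and drop the other
# MI non-adherent responses. `mi_classifier` and `rephraser` are hypothetical
# wrappers around the models described in Sections 4 and 5.
MI_NON_ADHERENT = {"Confront", "Direct", "Warn"}

def boost_training_data(dialogues, mi_classifier, rephraser):
    boosted = []
    for context, response in dialogues:
        label = mi_classifier(response)
        if label == "Advise without Permission":
            response = rephraser(response)   # e.g. the fine-tuned GPT-3 rephraser
        elif label in MI_NON_ADHERENT:
            continue                         # omit other unfavourable responses
        boosted.append((context, response))
    return boosted
```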
## 9 Limitations

Certain parts of our proposed methodology, for example template-based replacement and N-gram-based prompting, are applicable only when style-specific linguistic attributes can be identified between the source and the target text. Due to the cost of human labor and the lack of publicly available client-therapist dialogues, the sample size drawn in this study is small and thus may have an impact on the conclusions drawn. Our methods have only been tested for the English language, but we believe similar methods could be applied to other languages, provided non-parallel corpora tagged with *Advise without Permission* and *Advise with Permission* labels are available. The rephrasing methods described in this paper are tested on short sentences with a maximum sentence length of 98 tokens; the scalability of these methods to longer text remains to be tested.

When testing the rephrasers, there are some combinations that could be tried beyond the ones already tested. For example, more models could be fine-tuned and tested separately on the template-replaced and retrieval-based PP and PPA corpora while incorporating generic and N-gram prompting. In this work, we combined these two types of corpora before attempting prompting, since we observed better performance with Blender when the corpora were combined.

In order to have more data, we combined the *Advise with Permission* and *Advise without Permission* responses present in the CounselChat and RED datasets. But studies show that there are differences in the language used by counselors and peers (Lahnala et al., 2021; Mousavi et al., 2021). So, there can be linguistic differences between the same type of response in the CounselChat and RED datasets. Future work should attempt to identify these differences and ideally rephrase the responses given by peers to reflect the language of the counselors.

## 10 Ethics Statement

Data Curation: Only publicly available data from the Reddit and CounselChat websites were used in this work. Analysis of posts on websites such as Reddit is considered "fair play" since individuals are anonymous and users are aware their responses remain archived on the site unless explicitly deleted. It is also stated in Reddit's privacy policy that it allows third parties to access public Reddit content.³ Also, Reddit's data is already widely available in larger dumps such as Pushshift (Baumgartner et al., 2020). Even though the policies allow it, it should be noted that this data contains sensitive information. Thus, we adhere to the guidelines suggested by Benton et al. (2017) for working with social media data in health research, and share only anonymized and paraphrased excerpts from the dataset so that it is not possible to recover usernames through a web search of the verbatim post text. In addition, references to usernames as well as URLs are removed from the dialogue content for de-identification.

Human Evaluation: The human raters recruited from the crowdsourcing platform UpWork were all trained in the practice of counseling. Since the methods were tested on English-only text, we recruited workers who had professional competency in the English language. We paid them $10 for evaluating each batch of rephrased sentences, which required on average ≈30 minutes to complete. Thus, the amount paid to the human raters was ≈2.75 times the US minimum wage of $7.25 per hour.
We also paid an extra $2 bonus per batch to workers who obtained an above-average agreement with the other worker who rated the same batch.

Chatbots for Distress-Consolation: One of the main applications of the proposed methodology is boosting chatbot responses for distress consolation with the motivational interviewing strategy. Using chatbots for distress consolation or other mental health interventions has raised ethical concerns among many (Lanteigne, 2019; Montemayor et al., 2021; Tatman, 2022). However, chatbots that intervene in mental health-related matters have already been developed and have been quite popular for a while. Some examples are SimSensei (DeVault et al., 2014), Dipsy (Xie, 2017), Woebot (woebothealth.com), and Wysa (www.wysa.io). Czerwinski et al. (2021) state, *"About 1 billion people globally are affected by mental disorders; a scalable solution such as an AI therapist could be a huge boon."* The current technology for developing such chatbots relies heavily on deep learning and pre-trained language models. But due to the inherently unpredictable nature of these models, they pose a threat of delivering unfavourable responses when such chatbots are used for distress consolation. We believe the methodology we suggest in this work can help make them more reliable and fail-safe by adhering to the motivational interviewing strategy, a guiding style of communication heavily practiced in psychotherapy. However, since the unfavourable response detection and rephrasing methods still rely on neural network models, the artifacts produced in this paper should be used for research purposes only, and any real-world deployment should be done under human supervision.

## References

Daniel Adiwardana, Minh-Thang Luong, David R So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. 2020. Towards a human-like open-domain chatbot. *arXiv preprint arXiv:2001.09977*.

Amanuel Alambo, Manas Gaur, Usha Lokala, Ugur Kursuncu, Krishnaprasad Thirunarayan, Amelie Gyrard, Amit Sheth, Randon S Welton, and Jyotishman Pathak. 2019. Question answering for suicide risk assessment using reddit. In *2019 IEEE 13th International Conference on Semantic Computing (ICSC)*, pages 468–473. IEEE.

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In *Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization*, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics.

Jason Baumgartner, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. 2020. The pushshift reddit dataset. *Proceedings of the International AAAI Conference on Web and Social Media*, 14(1):830–839.

Adrian Benton, Glen Coppersmith, and Mark Dredze. 2017. Ethical research protocols for social media health research. In *Proceedings of the First ACL Workshop on Ethics in Natural Language Processing*, pages 94–102.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Daniel Cer, Mona Diab, Eneko Agirre, Iñigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In *Proceedings* of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics. Mary Czerwinski, Javier Hernandez, and Daniel McDuff. 2021. Building an ai that feels: Ai systems with emotional intelligence could learn faster and be more helpful. *IEEE Spectrum*, 58(5):32–38. Munmun De Choudhury and Sushovan De. 2014. Mental health discourse on reddit: Self-disclosure, social support, and anonymity. In Eighth international AAAI conference on weblogs and social media. David DeVault, Ron Artstein, Grace Benn, Teresa Dey, Ed Fast, Alesia Gainer, Kallirroi Georgila, Jon Gratch, Arno Hartholt, Margaux Lhommet, et al. 2014. Simsensei kiosk: A virtual human interviewer for healthcare decision support. In Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems, pages 1061–1068. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Kathleen Kara Fitzpatrick, Alison Darcy, and Molly Vierhile. 2017. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (woebot): a randomized controlled trial. JMIR mental health, 4(2):e7785. Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32. Chuang Gan, Zhe Gan, Xiaodong He, Jianfeng Gao, and Li Deng. 2017. Stylenet: Generating attractive visual captions with styles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3137–3146. Jacques Gaume, Gerhard Gmel, Mohamed Faouzi, and Jean-Bernard Daeppen. 2009. Counselor skill influences outcomes of brief motivational interventions. Journal of substance abuse treatment, 37(2):151– 159. Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). *arXiv preprint* arXiv:1606.08415. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In *International conference* on machine learning, pages 1587–1596. PMLR. Yufang Huang, Wentao Zhu, Deyi Xiong, Yiye Zhang, Changjian Hu, and Feiyu Xu. 2020. Cycle-consistent adversarial autoencoders for unsupervised text style transfer. *arXiv preprint arXiv:2010.00735*. 
Becky Inkster, Shubhankar Sarda, Vinod Subramanian, et al. 2018. An empathy-driven, conversational artificial intelligence agent (wysa) for digital mental well-being: real-world data evaluation mixed-methods study. *JMIR mHealth and uHealth*, 6(11):e12106.

Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, and Rada Mihalcea. 2022. Deep learning for text style transfer: A survey. *Computational Linguistics*, 48(1):155–205.

Zhijing Jin, Di Jin, Jonas Mueller, Nicholas Matthews, and Enrico Santus. 2019. IMaT: Unsupervised text attribute transfer via iterative matching and translation. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)*, pages 3097–3109, Hong Kong, China. Association for Computational Linguistics.

Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. In *International conference on machine learning*, pages 957–966. PMLR.

Allison Lahnala, Yuntian Zhao, Charles Welch, Jonathan K. Kummerfeld, Lawrence C An, Kenneth Resnicow, Rada Mihalcea, and Verónica Pérez-Rosas. 2021. Exploring self-identified counseling expertise in online support forums. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 4467–4480, Online. Association for Computational Linguistics.

Camylle Lanteigne. 2019. Social robots and empathy: The harmful effects of always getting what we want.

Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to sentiment and style transfer. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)*, pages 1865–1874. Association for Computational Linguistics.

Chin-Yew Lin and Franz Josef Och. 2004. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In *Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)*, pages 605–612, Barcelona, Spain.

Ruibo Liu, Chongyang Gao, Chenyan Jia, Guangxuan Xu, and Soroush Vosoughi. 2022. Non-parallel text style transfer with self-parallel supervision. *arXiv preprint arXiv:2204.08123*.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.

Aman Madaan, Amrith Setlur, Tanmay Parekh, Barnabas Poczos, Graham Neubig, Yiming Yang, Ruslan Salakhutdinov, Alan W Black, and Shrimai Prabhumoye. 2020. Politeness transfer: A tag and generate approach. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 1869–1881, Online. Association for Computational Linguistics.

François Mairesse and Marilyn A Walker. 2011. Controlling user perceptions of linguistic style: Trainable generation of personality traits. *Computational Linguistics*, 37(3):455–488.

Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A dialog research software platform. In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations*, pages 79–84.

Remi Mir, Bjarke Felbo, Nick Obradovich, and Iyad Rahwan. 2019. Evaluating style transfer for text.
In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pages 495–504, Minneapolis, Minnesota. Association for Computational Linguistics.

Carlos Montemayor, Jodi Halpern, and Abrol Fairweather. 2021. In principle obstacles for empathic ai: why we can't replace human empathy in healthcare. *AI & Society*, pages 1–7.

Seyed Mahed Mousavi, Alessandra Cervone, Morena Danieli, and Giuseppe Riccardi. 2021. Would you like to tell me more? generating a corpus of psychotherapy dialogues. In *Proceedings of the Second Workshop on Natural Language Processing for Medical Conversations*, pages 1–9.

TB Moyers, JK Manuel, D Ernst, T Moyers, J Manuel, D Ernst, and C Fortini. 2014. Motivational interviewing treatment integrity coding manual 4.1 (miti 4.1). *Unpublished manual*.

Theresa B Moyers, Tim Martin, Jennifer K Manuel, William R Miller, and D Ernst. 2003. The motivational interviewing treatment integrity (miti) code: Version 2.0. Retrieved from www.casaa.unm.edu [01.03.2005].

Priya Nambisan. 2011. Information seeking and social support in online health communities: impact on patients' perceived empathy. *Journal of the American Medical Informatics Association*, 18(3):298–304.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics*, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Verónica Pérez-Rosas, Xuetong Sun, Christy Li, Yuchen Wang, Kenneth Resnicow, and Rada Mihalcea. 2018. Analyzing the quality of counseling conversations: the tell-tale signs of high-quality counseling. In *Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)*.

Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In *Proceedings of the Tenth Workshop on Statistical Machine Translation*, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics.

Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018. Style transfer through back-translation. *arXiv preprint arXiv:1804.09000*.

Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)*, pages 129–140. Association for Computational Linguistics.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)*, pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.

Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Eric Michael Smith, Y-Lan Boureau, and Jason Weston. 2021. Recipes for building an open-domain chatbot. In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume*, pages 300–325, Online. Association for Computational Linguistics.

Robert Schwartz. 2021.
The big reveal | ethical implications of therapist self-disclosure. Mingyue Shang, Piji Li, Zhenxin Fu, Lidong Bing, Dongyan Zhao, Shuming Shi, and Rui Yan. 2019. Semi-supervised text style transfer: Cross projection in latent space. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4937–4946. Association for Computational Linguistics. Ashish Sharma, Monojit Choudhury, Tim Althoff, and Amit Sharma. 2020a. Engagement patterns of peerto-peer interactions on mental health platforms. In Proceedings of the International AAAI Conference on Web and Social Media, volume 14, pages 614–625. Ashish Sharma, Adam S Miner, David C Atkins, and Tim Althoff. 2020b. A computational approach to understanding empathy expressed in text-based mental health support. *arXiv preprint arXiv:2009.08441*. Fadi Abu Sheikha and Diana Inkpen. 2011. Generation of formal and informal sentences. In *Proceedings of* the 13th European Workshop on Natural Language Generation, pages 187–193. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. *Advances in neural information* processing systems, 30. Zachary Steel, Claire Marnane, Changiz Iranpour, Tien Chey, John W Jackson, Vikram Patel, and Derrick Silove. 2014. The global prevalence of common mental disorders: a systematic review and meta-analysis 1980–2013. *International journal of epidemiology*, 43(2):476–493. Rachael Tatman. 2022. [link]. Youzhi Tian, Zhiting Hu, and Zhou Yu. 2018. Structured content preservation for unsupervised text style transfer. *arXiv preprint arXiv:1810.06526*. TANG Wan, HU Jun, Hui Zhang, WU Pan, and HE Hua. 2015. Kappa coefficient: a popular measure of rater agreement. *Shanghai archives of psychiatry*, 27(1):62. Anuradha Welivita and Pearl Pu. 2022. Heal: A knowledge graph for distress management conversations. Xing Xie. 2017. Dipsy: A digital psychologist. Ruochen Xu, Tao Ge, and Furu Wei. 2019. Formality style transfer with hybrid textual annotations. *arXiv* preprint arXiv:1903.06353. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675. ## A Datasets A.1 The Red (Reddit Emotional Distress) Dataset The RED dataset is curated from carefully selected 8 mental health-related subreddits in Reddit. According to the latest statistics, 61% of Reddit users are male. Of the users, 48% are from the United States. People aged 18-29 make up Reddit's largest user base (64%). The second biggest age group is 30-49 (29%). Only 7% of Reddit users are over 50. It should be noted that these demographic biases can subtly skew our data and models from representing average human behavior. The data we curated were English-only and they may perpetuate an English bias in NLP systems. ## A.2 The Mi Dataset Altogether, 15 labels adapted from the MITI code 2.0 (Moyers et al., 2003) and 4.2.1 (Moyers et al., 2014) were used for annotation. They included Closed Question, Open Question, Simple Reflection, *Complex Reflection*, and *Give Information*, which are generally considered favourable. They also included labels recognized specifically as MI adherent, which are Advise with Permission, *Affirm*, Emphasize Autonomy, and *Support*. 
Another four labels are recognized as MI non-adherent: *Advise without Permission*, *Confront*, *Direct*, and *Warn*. We also included two other labels, *Self-Disclose* and *Other*, which are not included in the MITI code. The label *Self-Disclose* was included because, in peer support conversations, peers are mostly seen to share their lived experiences. Though it is believed that self-disclosure contributes to building rapport between the speaker and the listener, as suggested by Schwartz (2021), this type of disclosure must be used wisely and with caution, since it can also be counterproductive, distorting the client's transference. Thus, it is important to be able to recognize this response type. Table 5 shows the full list of labels we adapted from the MITI code along with descriptions and examples. Table 6 shows the statistics of the annotated responses in the MI dataset corresponding to each label.

## A.3 Data Augmentation: N-Gram Based Matching

We denote examples of the most frequent N-grams corresponding to each label in Table 7. For simplicity, we list only some of them along with their corresponding frequencies. For data augmentation, we used all four-grams and five-grams with a frequency above 5. Table 8 shows the statistics of the labels extended through N-gram-based matching in the CC and RED datasets. We also encountered 518 and 53,196 sentences in the CounselChat and RED datasets, respectively, that had overlapping labels, which were discarded due to ambiguity.

## A.4 Data Augmentation: Similarity Based Retrieval

To derive semantically meaningful sentence embeddings that can be compared using cosine similarity, we used Sentence-BERT (SBERT) proposed by Reimers and Gurevych (2019), which uses siamese and triplet network structures to compute sentence embeddings. Among the several models the authors have proposed, we used the *roberta-base-nli-stsb-mean-tokens* model, fine-tuned on the NLI (Bowman et al., 2015) and STS benchmark (STSb) (Cer et al., 2017) datasets, since it has reported a high Spearman's rank correlation of 84.79 ± 0.38 between the cosine similarity of the sentence embeddings and the gold labels on the STS benchmark test set, outperforming the existing state of the art. It is also more efficient to use than *roberta-large*. As described in Section 3, we used majority voting followed by computing the average similarity of retrieved sentences with the same label (in case of ties) to choose the final label for an unlabeled sentence. In Figure 2, we show an example elaborating this procedure. Table 8 shows the statistics of the labels extended through similarity-based retrieval in the CC and RED datasets.

## A.5 Augmented MI Datasets

Table 9 shows the statistics corresponding to each label in the MI Augmented (Union) and MI Augmented (Intersection) datasets, developed by taking the union and the intersection of the sentences automatically annotated by the N-gram-based matching and similarity-based retrieval methods.

## B MI Classifier

We used the same hyper-parameter settings used in RoBERTa (Liu et al., 2019) when training the MI classifier. We used the Adam optimizer with β1 of 0.9, β2 of 0.98, an ϵ value of 1 × 10⁻⁶, and a learning rate of 2 × 10⁻⁵.
A dropout of 0.1 was used on all layers and attention weights, along with a GELU activation function (Hendrycks and Gimpel, 2016). We limited the maximum number of input tokens to 100 and used a batch size of 32. All models were trained for 20 epochs. In all cases, the optimal epoch was selected based on the average cross-entropy loss calculated between the ground-truth and predicted labels of the human-annotated (MI Gold) validation set. All experiments were conducted on a machine with 2x20 cores @ 2.20GHz, 256 GB RAM, 2x200 GB SSD, and 4xGPU (NVIDIA Titan X Pascal). Experiments were also conducted using GPT-3 as the pre-trained language model; however, RoBERTa was seen to outperform GPT-3 on this classification task. Figure 3 shows the architectural diagram of the MI classifier used for annotation. Table 10 shows the performance scores of the MI classifier when trained on the gold-labeled and augmented MI datasets.

| MITI label | Description | Examples |
|---|---|---|
| 1. Closed Question | Questions that can be answered with a yes/no response or a very restricted range of answers. | Do you think this is an advantage? Did you use heroin this week? |
| 2. Open Question | Questions that allow a wide range of possible answers. It may seek information, may invite the speaker's perspective, or may encourage self-exploration. | What do you think are the advantages of changing this behavior? What is your take on that? |
| 3. Simple Reflection | Simple reflections include repetition, rephrasing, or paraphrasing of the speaker's previous statement. It conveys understanding or facilitates speaker-listener exchanges. | It seems that you are not sure what is going to come out of this talk. It sounds like you're feeling worried. |
| 4. Complex Reflection | Complex reflections include repeating or rephrasing the previous statement of the speaker but adding substantial meaning or emphasis to it. It serves the purpose of conveying a deeper or more complex picture of what the speaker has said. | Speaker: Mostly, I would change for future generations. If we waste everything, then there will be nothing left. Listener: It sounds like you have a strong feeling of responsibility. |
| 5. Give Information | The listener gives information, educates, provides feedback, or gives an opinion without advising. | This assignment on logging your cravings is important because we know that cravings often lead to relapses. |
| **MI Adherent Behaviour Codes:** | | |
| 6. Advise with Permission | Advising when the speaker asks directly for the information or advice. Indirect forms of permission can also occur, such as when the listener invites the speaker to disregard the advice as appropriate. | If you agree with it, we could try to brainstorm some ideas that might help you. |
| 7. Affirm | Encouraging the speaker by saying something positive or complimentary. | You should be proud of yourself for your past efforts. |
| 8. Emphasize Autonomy | Emphasizing the speaker's control, freedom of choice, autonomy, and ability to decide. | Yes, you're right. No one can force you to stop drinking. It is really up to you to decide. |
| 9. Support | Supporting the client with statements of compassion or sympathy. | I'm here to help you with this. I know it's really hard to stop drinking. |
| **MI Non-Adherent Behaviour Codes:** | | |
| 10. Advise without Permission | Making suggestions, offering solutions or possible actions without first obtaining permission from the speaker. | You should simply scribble a note that reminds you to turn the computer off during breaks. |
| 11. Confront | Directly and unambiguously disagreeing, arguing, correcting, shaming, blaming, criticizing, labeling, moralizing, ridiculing, or questioning the speaker's honesty. | You think that is any way to treat people you love? Yes, you are an alcoholic. You might not think so, but you are. |
| 12. Direct | Giving the speaker orders, commands, or imperatives. | Don't do that! Keep track of your cravings, using this log, and bring it in next week to review with me. |
| 13. Warn | A statement or event that warns of something or that serves as a cautionary example. | Be careful, DO NOT stop taking meds without discussing with your doctor. |
| **Other:** | | |
| 14. Self-Disclose | The listener discloses his/her personal information or experiences. | I used to be similar where I get obsessed about how people look but after maturing some I got over that. |
| 15. Other | All other statements that are not classified under any of the above codes. | Good morning. Hi there. |

Table 5: The set of labels adapted from the MITI code that the MI classifier is able to recognize.

| Label | # Labels in CC | # Labels in RED | Total |
|---|---|---|---|
| Closed Question | 500 | 405 | 905 |
| Open Question | 264 | 212 | 476 |
| Simple Reflection | 304 | 252 | 556 |
| Complex Reflection | 732 | 562 | 1,294 |
| Give Information | 3,643 | 1,213 | 4,856 |
| **MI Adherent Behavior Codes:** | | | |
| Advise w/ Permission | 417 | 67 | 484 |
| Affirm | 428 | 517 | 945 |
| Emphasize Autonomy | 152 | 101 | 253 |
| Support | 418 | 815 | 1,233 |
| **MI Non-Adherent Behavior Codes:** | | | |
| Advise w/o Permission | 1,414 | 871 | 2,285 |
| Confront | 142 | 176 | 318 |
| Direct | 460 | 438 | 898 |
| Warn | 67 | 46 | 113 |
| **Other:** | | | |
| Self-Disclose | 174 | 1,216 | 1,390 |
| Other | 513 | 292 | 805 |
| Total | 9,628 | 7,183 | 16,811 |

Table 6: Statistics of human-annotated MITI labels in the CounselChat (CC) and RED datasets.

## C MI Rephraser

## C.1 Construction of Pseudo-Parallel Corpora

Table 11 denotes the full list of templates corresponding to *Advise without Permission* and *Advise with Permission* responses that were used in the process of creating pseudo-parallel corpora using the template-based replacement method. In Figure 4, we visualize the process of creating the Pseudo-Parallel (PP) and Pseudo-Parallel Augmented (PPA) corpora along with statistics corresponding to each dataset.

## C.2 Rephrasing Models

For developing the rephrasing models, we used the 90M-parameter version of Blender (Roller et al., 2021). It contains an 8-layer encoder, an 8-layer decoder with 512-dimensional embeddings, and 16 attention heads. It has a maximum input length of 1024 tokens. All code for fine-tuning is available in ParlAI (Miller et al., 2017). All the models were fine-tuned for 200 epochs, with a batch size of 8, and a learning rate of 1 × 10⁻⁶.
For other hyperparameters, we used the default values defined in their documentation at https://parl.ai/proj ects/recipes. Fine-tuning the models was conducted in a machine with [email protected], 256 GB RAM, 2x200 GB SSD, and 4xGPU (NVIDIA Titan X Pascal). We also used GPT3 pretrained language model having 175 billion parameters. The smallest but fastest version of GPT3, Ada was used in our experiments. Fine-tuning of GPT3 models were done through the paid API provided by OpenAI (www.openai.com) following API guide at https: //beta.openai.com/docs/guides/fine-tunin g. We used the default set of hyperparameters for fine-tuning all GPT3 based models. These hyperparameters are tested to work well across a range of use cases. All the models were fine-tuned for 4 epochs, with a batch size ≈0.2% of the number of examples in the training set (capped at 256), and a learning rate of 0.05. Table 12 shows some examples of rephrased sen- | Label | Examples of most frequent four-grams | Examples of most frequent five-grams | |-----------------------------------------------------|------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------| | Closed Question | Do you have any (11), Do you have a (7), Do you want to (7), Have you talked to (5), Do you think you (5) | - | | Open Question | Do you want to (10), you want to be (8), How do you feel (5), Why do you feel (5), What is the evidence (5) | Do you want to be (6) | | Simple Reflection | It sounds like you (16), sounds like you have | It sounds like you are (7), It sounds like you | | (9), sounds like you are (8) | have (6) | | | Complex Reflection | It sounds like you (26), My guess is that (5), | It sounds like you are (7), It sounds like you | | The fact that you (5), why you might feel (5) | have (6) | | | Give Information | may be able to (11), who you are and (8), For example , if (8), A lot of people (7), A good therapist will (6) | who you are and what (6), you are and what you (6), be able to help you (6), it is important to (5), a higher level of care (5) | | Advise w/ Permission | It may be helpful (8), would be a good (7), you would like to (6), a good idea to (5), I would encourage you (5) | It may be helpful to (6), I would encourage you to (5) | | Affirm | I 'm glad you (19), wish you the best (7), I 'm glad that (7), I wish you the (6), you 're doing better (5) | I 'm glad you 're (9), I wish you the best (6) | | Emphasize Autonomy | - | - | | Support | I 'm so sorry (12), sorry to hear about (12), I hope you find (10), you are not alone (9), m here for you (8) | I 'm sorry to hear (11), I 'm here for you (8), I know how you feel (8), if you wan na talk (6), I hope you can find (5) | | Advise w/o Permission | Reach out to a (6), I would suggest that (6), I think you should (5), I urge you to (5), I think you need (5) | , you may want to (5), I would suggest that you (5) | | Confront | - | - | | Direct | - | - | | Warn | - | - | | Self-Disclose | I feel the same (9), I 've been in (8), the same | I feel the same way (5), I do n't know what (5) | | way . (7), do n't know what (6), I feel like it (5) | | | | Other | you for your question (12), Hello , and thank | Hello , and thank you (9), you for your question | | (9), thank you for your (9) | . (12) | | Table 7: Examples of most frequent four-grams and five-grams corresponding to each label. 
Their frequencies are denoted within brackets. | Label | N-gram based matching | Similarity-based retrieval | | | | | |-----------------------|-------------------------|------------------------------|----------|----------|-----------|-----------| | # Labels | # Labels | Total | # Labels | # Labels | Total | | | in CC | in RED | in CC | in RED | | | | | Closed Question | 75 | 17,190 | 17,265 | 132 | 71,505 | 61,637 | | Open Question | 29 | 12,242 | 12,271 | 49 | 36,107 | 36,156 | | Simple Reflection | 71 | 9,674 | 9,745 | 43 | 21,827 | 21,870 | | Complex Reflection | 110 | 20,539 | 20,649 | 20 | 17,243 | 17,263 | | Give Information | 571 | 71,996 | 72,567 | 893 | 166,586 | 167,479 | | Advise w/ Permission | 161 | 5,979 | 6,140 | 5 | 3,728 | 3,733 | | Affirm | 136 | 16,407 | 16,543 | 187 | 106,066 | 106,253 | | Emphasize Autonomy | 0 | 0 | 0 | 3 | 2,839 | 2,842 | | Support | 213 | 94,670 | 94,883 | 482 | 528,469 | 528,951 | | Advise w/o Permission | 520 | 58,857 | 59,377 | 969 | 171,502 | 172,471 | | Confront | 0 | 0 | 0 | 1 | 2,581 | 2,582 | | Direct | 0 | 0 | 0 | 16 | 21,058 | 21,074 | | Warn | 0 | 0 | 0 | 6 | 2,342 | 2,348 | | Self-Disclose | 5 | 28,309 | 28,314 | 8 | 14,702 | 14,710 | | Other | 27 | 4,498 | 4,525 | 67 | 29,457 | 28,524 | | Total | 1,918 | 340,361 | 342,279 | 2,881 | 1,196,012 | 1,198,893 | Table 8: Statistics of the labels extended through N- gram-based matching and similarity-based retrieval in CC and RED datsets. tences by the different rephraser models we fine- tuned. | Label | MI Augmented (Intersection) | MI Augmented (Union) | | | | | | | |--------------------|-------------------------------|------------------------|--------|----------|-----------|-----------|-----------|-----------| | # Labels | # Labels | Total | Total | # Labels | # Labels | Total | Total | | | in CC | in RED | + MI Gold | in CC | in RED | + MI Gold | | | | | Closed Question | 9 | 5,598 | 5,607 | 6,512 | 135 | 78,932 | 79,067 | 79,972 | | Open Question | 1 | 2,353 | 2,354 | 2,830 | 60 | 40,805 | 40,865 | 41,341 | | Simple Reflection | 1 | 185 | 186 | 742 | 41 | 19,961 | 20,002 | 20,558 | | Complex Reflection | 2 | 201 | 203 | 1,497 | 44 | 21,247 | 21,291 | 22,585 | | Give Information | 77 | 3,379 | 3,456 | 8,312 | 1083 | 203,110 | 204,193 | 209,049 | | Advise w/ Per. | 0 | 28 | 28 | 512 | 5 | 3,052 | 3,057 | 3,541 | | Affirm | 48 | 898 | 946 | 1,891 | 208 | 106,575 | 106,783 | 107,728 | | Emphasize Autonomy | 0 | 0 | 0 | 253 | 3 | 2,700 | 2,703 | 2,956 | | Support | 76 | 44,635 | 44,711 | 45,944 | 551 | 592,220 | 592,771 | 594,004 | | Advise w/o Per. | 144 | 8,872 | 9,016 | 11,301 | 1,029 | 196,571 | 197,600 | 199,885 | | Confront | 0 | 0 | 0 | 318 | 0 | 2,468 | 2,468 | 2,786 | | Direct | 0 | 0 | 0 | 898 | 15 | 20,690 | 20,705 | 21,603 | | Warn | 0 | 0 | 0 | 113 | 6 | 2,278 | 2,284 | 2,397 | | Self-Disclose | 0 | 729 | 729 | 2,119 | 12 | 36,522 | 36,534 | 37,924 | | Other | 0 | 5 | 5 | 810 | 67 | 31,268 | 31,335 | 32,140 | | Total | 358 | 66,883 | 67,241 | 84,052 | 3,259 | 1,358,399 | 1,361,658 | 1,378,469 | Table 9: Statistics of the annotated responses in MI Augmented (Intersection) and MI Augmented (Union) datasets. | Dataset | Size | Optimal | Train | Valid | Test | | | |-----------------|---------------|-----------|----------|----------|--------|-------|-------| | Epoch | Loss | Acc. (%) | Acc. (%) | F1-score | | | | | (weighted avg.) 
| | | | | | | | | Train: | 13,449 | | | | | | | | MI Gold | Valid (Gold): | 1,681 | 7 | 0.3002 | 67.08 | 68.31 | 68.07 | | Test (Gold): | 1,681 | | | | | | | | MI | Train: | 80,690 | | | | | | | Augmented | Valid (Gold): | 1,681 | 2 | 0.2277 | 64.07 | 67.13 | 65.85 | | (Intersection) | Test (Gold): | 1,681 | | | | | | | MI | Train: | 1,375,107 | | | | | | | Augmented | Valid (Gold): | 1,681 | 13 | 0.1324 | 72.67 | 73.44 | 72.92 | | (Union) | Test (Gold): | 1,681 | | | | | | Table 10: The performance scores of the MI classifier when trained on gold-labeled and augmented MI datasets. All scores are reported on the human-annotated validation and test sets. All scores are reported for a single run. | Advise without Permission | Advise with Permission | | |-----------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------| | - You can (verb) | - It maybe helpful to (verb) | | | - You could (verb) | - You may want to (verb) | | | - You need to (verb) | - I encourage you to (verb) | | | - You should (verb) | - Perhaps you can (verb) | | | - (Verb) | - | , if you would like. | | - You can try to (verb) | - It would be good idea to (verb) | | | - I think you should (verb) | - It may be important to (verb) | | | - I suggest that you (verb) | - I would encourage you to (verb) | | | - I suggest you (verb) | - I wonder if you can (verb) | | | - Maybe you can (verb) | - Maybe it is important to (verb) | | | - Maybe you could (verb) | - An option would be to (verb) - You may want to consider (present continuous form of the verb) - You may consider (present continuous form of the verb) - I would recommend (present continuous form of the verb) - I wonder if you can consider (present continuous form the verb) | | Table 11: Linguistic templates corresponding to *Advise without Permission* and *Advise with Permission* responses. ## D Automatic Evaluation We used the the NLTK package to compute the BLEU 4, METEOR 5, and chrf 6scores. The 4https://www.nltk.org/_modules/nltk/translate 5427/bleu_score.html Advise without permission: *try to learn from your mistakes and meet some new people .* Advise with permission (ground-truth): *It may be important to try to learn from your mistakes and meet some new people .* Template-based: Blender: *it would be a good idea to try to learned from your mistakes and meet some new people like me .* GPT3: *I encourage you to try to learn from your mistakes and meet some new people .* Retrieval-based: Blender: In addition to being open to meeting new friends , involve yourself in projects from which you ' ll feel satisfied by your own results . GPT3: *I think it would be valuable for you to start mingling with other young people .* Template+Retrieval-based: Blender: *i would encourage you to look up ' ' toxic relationships ' ' and try to make your relationships stronger .* GPT3: *I would encourage you to try to learn from your mistakes and meet some new people.* Template+Retrieval-based (with generic prompting): Blender: *in addition to being open to meeting new friends , involve yourself in projects from which you ' ll feel satisfied by* your own results . 
GPT3: *I would encourage you to try to learn from your mistakes and meet some new people.* Template+Retrieval-based (with N-gram prompting): Prompt: *it may be important to* Blender: it may be important to try to learn from your mistakes and meet a new person . : ) it may also be important for you to start engaging in things outside of the home that make you happy . GPT3: *It may be important to try to learn from your mistakes and meet some new people* Advise without permission: *find something you enjoy and work towards it .* Advise with permission (ground-truth): *It maybe helpful to find something you enjoy and work towards it .* Template-based: Blender: *it would be a good idea to find something yourself and work towards it . . . i wonder if you have heard of this ?* GPT3: *It may be important to find something you enjoy and work towards it .* Retrieval-based: Blender: *i think it would be valuable for you to start engaging in things outside of the home that make you happy .* GPT3: *Give yourself some credit for being awesome and tell yourself you can build new friendships .* Template+Retrieval-based: Blender: *i think it would be valuable for you to start engaging in things outside of the home that make you happy .* GPT3: *It would be a good idea to find something you enjoy and work towards it .* Template+Retrieval-based (with generic prompting): Blender: *i think it would be valuable for you to start engaging in things outside of the home that make you happy .* GPT3: *An option would be to find something you enjoy and work towards it .* Template+Retrieval-based (with N-gram prompting): Prompt: *it maybe helpful to* Blender: *it's perhaps helpful to find something yourself and work towards it .* GPT3: *It maybe helpful to find something you enjoy and work towards it .* Advise without permission: *Talk to a therapist .* Advise with permission (ground-truth): I encourage you to meet with a therapist and ask them how they might be able to help you . Template-based: Blender: *it would be a good idea to talk to one of the therapists . i wonder if you can talk to an therapist .* GPT3: *I encourage you to talk to a therapist .* Retrieval-based: Blender: *I encourage you to meet with a therapist and ask them how they might be able to help you .* GPT3: *I encourage you to meet with a therapist and ask them how they might be able to help you* Template+Retrieval-based: Blender: *i encourage you to meet with a therapist and ask them how they might be able to help you .* GPT3: *I encourage you to meet with a therapist and ask them how they might be able to help* Template+Retrieval-based (with generic prompting): Blender: *i encourage you to meet with a therapist and ask them how they might be able to help you .* GPT3: *I encourage you to meet with a therapist and ask them how they might be able to help* Template+Retrieval-based (with N-gram prompting): Prompt: *able to help you* Blender: *i encourage you to meet with a therapist . they ' ll help you find some strategies or tools to make it easier .* GPT3: *I encourage you to meet with a therapist and ask them how they might be able to help* Table 12: Examples of rephrased sentences by the different rephraser models we developed ![18_image_0.png](18_image_0.png) ROUGE score and the BERTscore were computed using the rouge 7and bert_score 8 python libraries, respectively. The POS distance was calculated as mentioned in the work by Tian et al. 
(2018) following the code released by the authors on github.9 For computing the Word Mover Distance (WMD), we used Gensim's implementation of the WMD. 10 We used sentence embeddings generated using Sentence-BERT (Reimers and Gurevych, 2019) to compute the cosine similarity between the original and rephrased text. Among the models the authors have proposed, we used the *roberta-base-nli-stsbmean-tokens* model, fine-tuned on the NLI (Bowman et al., 2015) and STS benchmark (STSb) (Cer et al., 2017) datasets to generate the embeddings. All the automatic evaluation scores are reported for a single run. ## E Human Evaluation Figures 5, 6, and 7 shows the user interfaces developed for the human evaluation task. The first one shows the task description, the second one shows the self-evaluating practice task designed to get the counselors familiarized with the rating task, and the last one shows the actual human evaluation task itself. ## F Other Remarks In human evaluation results, we observed in 97.5% of the cases, the average scores obtained for style transfer strength are better than the average scores obtained for semantic similarity. This observation is invariant of the type of backbone model used in training. This implies template-based and retrievalbased methods used in creating pseudo parallel data to train the rephrasers make it easier for the rephrasers to generate rephrased sentences that reflect a particular style (in this case, *Advise with permission*) than preserving the semantic meaning of the original sentence. This is a matter to be further investigated. To improve the scores on semantic similarity, future work can explore ways to take into account the context that precedes the sentence to be rephrased. In this way, though the rephrased version may not reflect exactly what was in the 7https://pypi.org/project/rouge/ 8https://pypi.org/project/bert-score/ 9https://github.com/YouzhiTian/Structured-Con tent-Preservation-for-Unsupervised-Text-Style-T ransfer/blob/master/POS_distance.py 10https://radimrehurek.com/gensim/auto_example s/tutorials/run_wmd.html ![19_image_0.png](19_image_0.png) for the development of intelligent writing assistants that can suggest better responses when peers untrained in the practice of counseling attempt to respond to distress-related posts on peer support platforms such as Reddit. ## G Distribution And Use Of Artifacts The artifacts produced, including the datasets and the models, will be released under the CC BYNC-SA 3.0 license https://creativecommon s.org/licenses/by-nc-sa/3.0, providing only non-commercial access to the users. We use artifacts such as the CounselChat dataset, and pretrained language architectures such as BERT (Devlin et al., 2019), RoBERTA (Liu et al., 2019), Blender (Roller et al., 2021), and GPT3 (Brown et al., 2020) for research purposes only, which does not violate their intended use. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 9: "Limitations" ✓ A2. Did you discuss any potential risks of your work? Section 10: "Ethics Statement" under "Chatbots for Distress-Consolation" ✓ A3. Do the abstract and introduction summarize the paper's main claims? "Abstract" and Section 1: "Introduction" ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 "Datasets", Section 4 "MI Classifier", Section 5.1 "Pseudo-Parallel Corpora", and Section 5.2 "Models". ✓ B1. 
Did you cite the creators of artifacts you used? Section 3 "Datasets", Section 4 "MI Classifier", and Section 5.2 "Models". ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix G: "Distribution and Use of Artifacts" and Section 10: Ethics Statement under "Chatbots for Distress-Consolation" ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix G: "Distribution and Use of Artifacts" and Section 10: Ethics Statement under "Chatbots for Distress-Consolation" ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section 10 "Ethics Statement" under "Data Curation". ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3 "Datasets", and Appendix A: "Datasets" ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3 "Datasets", Section 4 "MI Classifier", Section 5.1 "Pseudo-Parallel Corpora", and Appendix A "Datasets". The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ## C ✓ **Did You Run Computational Experiments?** Section 4: "MI Classifier" and Section 5.2: "Rephrasing Models" ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix B: "MI Classifier", and Appendix C.2: "Rephrasing Models" ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4: "MI Classifier", Section 5.2: "Rephrasing Models", Appendix B: "MI Classifier", and Appendix C.2: "Rephrasing Models" ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4: "MI Classifier", Section 6: "Automatic Evaluation", Appendix B: "MI Classifier", and Appendix D: "Automatic Evaluation" ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix D: "Automatic Evaluation" ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 7: "Human Evaluation" ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? 
Appendix D: "Human Evaluation" ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 7: "Human Evaluation" and Section 10: "Ethics Statement" under "Human Evaluation" D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. We recruited human workers only to rate rephrased responses generated by the rephrasing models that we developed. No personal information was collected during this experiment. But we discussed the details of our experiment and informed why we are conducting the experiment for the crowdworkers recruited. These details are denoted under Appendix E: "Human Evaluation". D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. We recruited human workers only to rate rephrased responses generated by the rephrasing models that we developed. No personal information was collected during this experiment. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. We recruited human workers only to rate rephrased responses generated by the rephrasing models that we developed. No personal information was collected during this experiment. But we did include the information that all workers recruited were professionally trained in the practice of counseling and all had professional competency in English.
han-etal-2023-ecola
ECOLA: Enhancing Temporal Knowledge Embeddings with Contextualized Language Representations
https://aclanthology.org/2023.findings-acl.335
Since conventional knowledge embedding models cannot take full advantage of the abundant textual information, there have been extensive research efforts in enhancing knowledge embedding using texts. However, existing enhancement approaches cannot apply to *temporal knowledge graphs* (tKGs), which contain time-dependent event knowledge with complex temporal dynamics. Specifically, existing enhancement approaches often assume knowledge embedding is time-independent. In contrast, the entity embedding in tKG models usually evolves, which poses the challenge of aligning *temporally relevant* texts with entities. To this end, we propose to study enhancing temporal knowledge embedding with textual data in this paper. As an approach to this task, we propose Enhanced Temporal Knowledge Embeddings with Contextualized Language Representations (ECOLA), which takes the temporal aspect into account and injects textual information into temporal knowledge embedding. To evaluate ECOLA, we introduce three new datasets for training and evaluating ECOLA. Extensive experiments show that ECOLA significantly enhances temporal KG embedding models with up to 287% relative improvements regarding Hits@1 on the link prediction task. The code and models are publicly available on https://github.com/mayhugotong/ECOLA.
# Ecola: Enhancing Temporal Knowledge Embeddings With Contextualized Language Representations Zhen Han˚ :7**, Ruotong Liao** :1,6, Jindong Gu2, Yao Zhang1**, Zifeng Ding**1,5, Yujia Gu4, Heinz Koeppl3, Hinrich Schütze1,6**, Volker Tresp**1,6 1LMU Munich 2University of Oxford 3Technical University of Darmstadt 4Technical University of Munich 5Siemens AG 6Munich Center for Machine Learning (MCML), Munich, Germany 7Amazon [email protected], [email protected], [email protected] ## Abstract Since conventional knowledge embedding models cannot take full advantage of the abundant textual information, there have been extensive research efforts in enhancing knowledge embedding using texts. However, existing enhancement approaches cannot apply to *temporal knowledge graphs* (tKGs), which contain time-dependent event knowledge with complex temporal dynamics. Specifically, existing enhancement approaches often assume knowledge embedding is time-independent. In contrast, the entity embedding in tKG models usually evolves, which poses the challenge of aligning *temporally relevant* texts with entities. To this end, we propose to study enhancing temporal knowledge embedding with textual data in this paper. As an approach to this task, we propose Enhanced Temporal Knowledge Embeddings with Contextualized Language Representations (ECOLA), which takes the temporal aspect into account and injects textual information into temporal knowledge embedding. To evaluate ECOLA, we introduce three new datasets for training and evaluating ECOLA. Extensive experiments show that ECOLA significantly enhances temporal KG embedding models with up to **287%** relative improvements regarding Hits@1 on the link prediction task. The code and models are publicly available‡. ## 1 Introduction Knowledge graphs (KGs) have long been considered an effective and efficient way to store structural knowledge about the world. A knowledge graph consists of a collection of triples p*s, p, o*q, where s (subject entity) and o (object entity) correspond to nodes, and p (predicate) indicates the edge type (relation) between the two entities. Common knowledge graphs (Toutanova et al., 2015; Dettmers et al., 2018) assume that the relations between entities are static connections. However, in the real world, there are not only static facts but also temporal relations associated with the entities. To this end, temporal knowledge graphs (tKGs) (Tresp et al., 2015) were introduced that capture temporal aspects of relations by extending a triple to a *quadruple*, which adds a timestamp to describe when the relation is valid, e.g., (R.T. Erdo˘gan, *visit*, US, *2019-11-12*). If the temporal relationship lasts for several timestamps, most tKGs represent it by a sequence of quadruples, e.g., {(R.T. Erdo˘gan, visit, US, *2019-11-12*), (R.T. Erdo˘gan, visit, US, 201911-13)}. Conventional knowledge embedding approaches learn KGs by capturing the structural information, suffering from the sparseness of KGs. To address this problem, some recent studies incorporate textual information to enrich knowledge embedding. KG-BERT (Yao et al., 2019) takes entity and relation descriptions of a triple as the input of a pre-trained language model (PLM) and turns KG link prediction into a sequence classification problem. Similarly, KEPLER (Wang et al., 2021) computes entity representations by encoding entity descriptions with a PLM and then applies KG score functions for link prediction. However, they could not be applied to tKGs. 
Specifically, existing approaches (e.g., KEPLER) encode an entity, no matter at which timestamp, with the same static embedding based on a shared entity description. In comparison, entity embeddings in tKG models usually evolve over time as entities often involve in different events at different timestamps. Therefore, an entity might be aligned with different textual knowledge at different time. And it should be taken into account which textual knowledge is relevant to which entity at which timestamp. We name this challenge as **temporal alignment** between texts and tKGs, which is to establish a correspondence between textual knowledge and their ![1_image_0.png](1_image_0.png) tKG depiction. Another challenge is that many temporal knowledge embedding models (Goel et al., 2020; Han et al., 2020a) learn the entity representations as a function of time. However, the existing enhancement approaches cannot be naturally applicable to such tKG embedding. We refer to this challenge as **dynamic embedding challenge**. In this work, we propose to study *enhancing temporal knowledge embedding with textual data*. As an approach to this task, we develop Enhanced Temporal Knowledge Embeddings with Contextualized Language Representations (ECOLA), which uses temporally relevant textual knowledge to enhance the time-dependent knowledge graph embedding. Specifically, we solve the **temporal alignment** challenge using tKG quadruples as an implicit measure. We pair a quadruple with its relevant textual data, e.g., event descriptions, which corresponds to the temporal relations between entities at a specific time. Then we use the event description to enhance the representations of entities and the predicate involved in the given quadruple. Besides, ECOLA solves the **dynamic embedding** challenge using a novel knowledge-text prediction (KTP) task which injects textual knowledge into temporal knowledge embeddings. Specifically, given a quadruple-text pair, we feed both the temporal knowledge embeddings of the quadruple and token embeddings of the text into a PLM. The KTP task is an extended masked language modeling task that randomly masks words in texts and entities/predicates/timestamp in quadruples. With the help of the KTP task, ECOLA would be able to recognize mentions of the subject entity and the object entity and align semantic relationships in the text with the predicate in the quadruple. For training ECOLA, we need datasets with tKG quadruples and aligned textual event descriptions, which are unavailable in the existing temporal KG benchmarks. Thus, we construct three new temporal knowledge graph datasets by adapting two existing datasets, i.e., GDELT (Leetaru and Schrodt, 2013) and Wiki (Dasgupta et al., 2018), and an event extraction dataset (Li et al., 2020). To summarize, our contributions are as follows: (i) We are the first to address the challenge of enhancing temporal knowledge embedding with temporally relevant textual information while preserving the time-evolving properties of entity embedding. (ii) We construct three datasets to train the text-enhanced tKG models. Specifically, we adapt three existing temporal KG completion datasets by augmenting each quadruple with a relevant textual description. (iii) Extensive experiments show that ECOLA is model-agnostic and can be potentially combined with any temporal KG embedding model. 
ECOLA also has a superior performance on the temporal KG completion task and enhances temporal KG models with up to 287% relative improvements in the Hits@1 metric. (iv) As a joint model, ECOLA also empowers PLMs by integrating *temporal structured knowledge* into them. We select temporal question answering as a downstream NLP task, demonstrating that ECOLA can considerably enhance PLMs.

## 2 Preliminaries And Related Work

Temporal Knowledge Graphs Temporal knowledge graphs are multi-relational, directed graphs with labeled, timestamped edges between entities (nodes). Let $\mathcal{E}$ and $\mathcal{P}$ represent a finite set of entities and predicates, respectively. A quadruple $q = (e_s, p, e_o, t)$ represents a timestamped and labeled edge between a subject entity $e_s \in \mathcal{E}$ and an object entity $e_o \in \mathcal{E}$ at a timestamp $t \in \mathcal{T}$. Let $\mathcal{F}$ represent the set of all true quadruples; temporal knowledge graph completion (tKGC) is the task of inferring $\mathcal{F}$ based on a set of observed facts $\mathcal{O}$. Specifically, tKGC is to predict either a missing subject entity $(?, p, e_o, t)$ given the other three components or a missing object entity $(e_s, p, ?, t)$. We provide related works on temporal knowledge representations in Appendix A.

Joint Language and Knowledge Models Recent studies have achieved great success in jointly learning language and knowledge representations. Zhang et al. (2019) and Peters et al. (2019) focus on enhancing language models using external knowledge. They separately pre-train the entity embedding with knowledge embedding models, e.g., TransE (Bordes et al., 2013), and inject the pre-trained entity embedding into PLMs, while fixing the entity embedding during PLM training. Thus, they are not truly joint models for learning knowledge embedding and language embedding simultaneously. Yao et al. (2019), Kim et al. (2020), and Wang et al. (2021) learn to generate entity embeddings with PLMs from entity descriptions. Moreover, He et al. (2019), Sun et al. (2020), and Liu et al. (2020) exploit the potential of contextualized knowledge representation by constructing subgraphs of structured knowledge and textual data instead of treating single triples as training units. Nevertheless, none of these works consider the temporal aspect of knowledge graphs, which makes them different from our proposed ECOLA.

![2_image_0.png](2_image_0.png)

## 3 ECOLA

In this section, we present the overall framework of ECOLA, including the model architecture in Sections 3.1 - 3.3, a novel task designed for aligning knowledge embedding and language representation in Section 3.4, and the training procedure in Section 3.5. As shown in Figure 2, ECOLA implicitly incorporates textual knowledge into temporal knowledge embeddings by jointly optimizing the knowledge-text prediction loss and the temporal knowledge embedding loss. Note that, at inference time, we only take the enhanced temporal knowledge embeddings to perform the temporal KG completion task, without using the PLM or any textual data, to prevent information leakage and keep a fast inference speed.

## 3.1 Embedding Layer

In tKG embedding models, entity representations evolve over time. Thus, the key point of enhancing a time-dependent entity representation $e_i(t)$ is to find texts that are relevant to the entity at the time of interest t. To this end, we use tKG quadruples (e.g., $(e_i, p, e_j, t)$) as an implicit measure for the alignment. We pair a quadruple with its relevant textual data and use such textual data to enhance the entity representation $e_i(t)$.
Therefore, a training sample is a pair of a quadruple from the temporal KG and its corresponding textual description, which are packed together into a sequence. As shown in Figure 2, the input embedding is the sum of token embedding, type embedding, and position embedding. For token embedding, we maintain three lookup tables for subwords, entities, and predicates, respectively. For subword embedding, we first tokenize the textual description into a sequence of subwords following Devlin et al. (2018) and use the WordPiece algorithm (Wu et al., 2016). As shown by the light blue tokens in Figure 2, we denote an embedding sequence of subword tokens as $\{\mathbf{w}_1, ..., \mathbf{w}_n\}$. In contrast to subword embedding, the embeddings for entities and predicates are directly learned from scratch, similar to common knowledge embedding methods. We denote the entity embedding and predicate embedding as e and p, respectively, shown as the dark blue tokens in Figure 2. We separate the knowledge tokens, i.e., entities and predicates, and subword tokens with a special token [SEP]. To handle different token types, we add type embedding to indicate the type of each token, i.e., subword, entity, and predicate. For position embedding, we assign each token an index according to its position in the input sequence and follow Devlin et al. (2018) to apply fully-learnable absolute position embeddings.

## 3.2 Temporal Knowledge Encoder

As shown in Figure 2, the input embedding for *entities* and *predicates* consists of knowledge token embedding, type embedding, and position embedding. In this section, we provide details of the temporal knowledge embedding (tKE) objective. A temporal embedding function defines entity embedding as a function that takes *an entity* and *a timestamp* t as input and generates a time-dependent representation in a vector space. There is a line of work exploring temporal embedding functions. Since we aim to propose a model-agnostic approach, we combine ECOLA with three temporal embedding functions, i.e., DyERNIE-Euclid (Han et al., 2020a), UTEE (Han et al., 2021c), and DE-SimplE (Goel et al., 2020). In the following, we refer to DyERNIE-Euclid as DyERNIE and take it as an example to introduce our framework. Specifically, the entity representation is derived from an initial embedding and a velocity vector, $\mathbf{e}_i^{DyER}(t) = \bar{\mathbf{e}}_i^{DyER} + \mathbf{v}_{e_i} t$, where $\bar{\mathbf{e}}_i^{DyER}$ represents the initial embedding that does not change over time, and $\mathbf{v}_{e_i}$ is an entity-specific velocity vector. The combination with other temporal embedding functions is discussed in Section 4. The score function measuring the plausibility of a quadruple is defined as follows,

$$\phi^{DyER}(e_{i},p,e_{j},t)=-d\big(\mathbf{P}\odot\mathbf{e}_{i}^{DyER}(t),\ \mathbf{e}_{j}^{DyER}(t)+\mathbf{p}\big)+b_{i}+b_{j},\tag{1}$$

where $\mathbf{P}$ and $\mathbf{p}$ represent the predicate matrix and the translation vector of predicate p, respectively; d denotes the Euclidean distance, and $b_i$, $b_j$ are scalar biases. For learning the tKE, we generate M negative samples for each positive quadruple in a batch. We choose the binary cross entropy as the temporal knowledge embedding objective

$$\mathcal{L}_{tKE}=\frac{-1}{N}\sum_{k=1}^{N}\big(y_{k}\log(p_{k})+(1-y_{k})\log(1-p_{k})\big),\tag{2}$$

where N is the sum of positive and negative training samples, $y_k$ represents the binary label indicating whether a training sample is positive or not, $p_k$ denotes the predicted probability $\sigma(\phi_k^{DyER})$, and $\sigma(\cdot)$ represents the sigmoid function.
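To make Equations 1 and 2 concrete, the following PyTorch-style sketch shows one possible implementation of the DyERNIE-Euclid encoder and its binary cross-entropy objective; module names and shapes are illustrative and may differ from our released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DyERNIEEuclid(nn.Module):
    """Minimal sketch of the DyERNIE-Euclid temporal knowledge encoder (Eq. 1)."""
    def __init__(self, num_entities, num_predicates, dim):
        super().__init__()
        self.e_bar = nn.Embedding(num_entities, dim)    # time-invariant initial embedding
        self.vel = nn.Embedding(num_entities, dim)      # entity-specific velocity vector
        self.P = nn.Embedding(num_predicates, dim)      # diagonal predicate matrix (Hadamard)
        self.p_vec = nn.Embedding(num_predicates, dim)  # predicate translation vector
        self.bias = nn.Embedding(num_entities, 1)       # scalar entity biases

    def entity(self, idx, t):
        # e_i(t) = e_bar_i + v_{e_i} * t
        return self.e_bar(idx) + self.vel(idx) * t.unsqueeze(-1)

    def score(self, s, p, o, t):
        # Eq. 1: -d(P ⊙ e_s(t), e_o(t) + p) + b_s + b_o
        e_s, e_o = self.entity(s, t), self.entity(o, t)
        dist = torch.norm(self.P(p) * e_s - (e_o + self.p_vec(p)), dim=-1)
        return -dist + self.bias(s).squeeze(-1) + self.bias(o).squeeze(-1)

def tke_loss(scores, labels):
    # Eq. 2: binary cross entropy with p_k = sigmoid(score_k)
    return F.binary_cross_entropy_with_logits(scores, labels)
```

Here, `labels` is 1 for observed quadruples and 0 for the M corrupted negatives, so `tke_loss` corresponds to Equation 2 with $p_k = \sigma(\phi_k^{DyER})$.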
## 3.3 Masked Transformer Encoder To encode the input sequence, we use the pretrained language representation model BERT (Devlin et al., 2018). Specifically, the encoder feeds a sequence of N tokens including entities, *predicates*, and *subwords* into the embedding layer introduced in Section 3.1 to get the input embeddings and then computes L layers of d-dimensional contextualized representations. Eventually, we get a contextualized representation for each token, which could be further used to predict masked tokens. ## 3.4 Knowledge-Text Prediction Task To incorporate textual knowledge into temporal knowledge embedding, we use the pre-trained language model BERT to encode the textual description and propose a knowledge-text prediction task to align the language representations and the knowledge embedding. The knowledge-text prediction task is an extension of the masked language modeling (MLM) task. As illustrated in Figure 2, given a pair of a quadruple and the corresponding event description, the knowledge-text prediction task is to randomly mask some of the input tokens and train the model to predict the original index of the masked tokens based on their contexts. As different types of tokens are masked, we encourage ECOLA to learn different capabilities: - **Masking entities**. To predict an entity token in the quadruple, ECOLA has the following ways to gather information. First, the model can detect the textual mention of this entity token and determine the entity; second, if the other entity token and the predicate token are not masked, the model can utilize the available knowledge token to make a prediction, which is similar to the traditional semantic matchingbased temporal KG models. Masking entity nodes helps ECOLA align the representation spaces of language and structured knowledge, and inject contextualized representations into entity embeddings. - **Masking predicates**. To predict the predicate token in the quadruple, the model needs to detect mentions of the subject entity and object entity and classify the semantic relationship between the two entity mentions. Thus, masking predicate tokens helps the model integrate language representation into the predicate embedding and map words and entities into a common representation space. - **Masking subwords**. When subwords are masked, the objective is similar to traditional MLM. The difference is that ECOLA considers not only the dependency information in the text but also the entities and the logical relationship in the quadruple. Additionally, we initialize the encoder with the pretrained BERTbase. Thus, masking subwords helps ECOLA keep linguistic knowledge and avoid catastrophic forgetting while integrating contextualized representations into temporal knowledge embeddings. In each quadruple, the predicate and each entity have a probability of 15% to be masked. Similarly, we mask 15% of the subwords of the textual description at random. We ensure that entities and the predicate cannot be masked at the same time in a single training sample, where we conduct an ablation study in Section 6 to show the improvement of making this constraint. When a token is masked, we replace it with (1) the [MASK] token 80% of the time, (2) a randomly sampled token with the same type as the original token 10% of the time, (3) the unchanged token 10% of the time. 
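As a compact summary of the masking scheme above, the sketch below gives one possible reading of the 15% / 80-10-10 procedure; it is illustrative rather than the exact preprocessing used in our experiments.

```python
import random

MASK_RATE = 0.15  # per-token masking probability, as described above

def corrupt(token, same_type_vocab, mask_token="[MASK]"):
    """Replace a selected token: 80% [MASK], 10% random same-type token, 10% unchanged."""
    r = random.random()
    if r < 0.8:
        return mask_token
    if r < 0.9:
        return random.choice(same_type_vocab)
    return token

def mask_ktp_sample(quad, subwords, entity_vocab, predicate_vocab, word_vocab):
    """quad = {"subject": ..., "predicate": ..., "object": ...}.
    Entities and the predicate are never masked within the same sample."""
    selected = [slot for slot in quad if random.random() < MASK_RATE]
    if "predicate" in selected and len(selected) > 1:
        # Conflict: keep either the predicate or the entity picks, not both.
        selected = (["predicate"] if random.random() < 0.5
                    else [s for s in selected if s != "predicate"])

    masked_quad = dict(quad)
    for slot in selected:
        vocab = predicate_vocab if slot == "predicate" else entity_vocab
        masked_quad[slot] = corrupt(quad[slot], vocab)

    # Mask 15% of the subword tokens of the textual description.
    masked_text = [corrupt(w, word_vocab) if random.random() < MASK_RATE else w
                   for w in subwords]
    return masked_quad, masked_text
```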
For each masked token, the contextualized representation in the last layer of the encoder is used for three classification heads, which are responsible for predicting entities, predicates, and subword tokens, respectively. Finally, a cross-entropy loss $\mathcal{L}_{KTP}$ is calculated over these masked tokens.

## 3.5 Training Procedure And Inference

We initialize the transformer encoder with BERT-base§ and the knowledge encoder with random vectors. Then we use the temporal knowledge embedding (tKE) objective $\mathcal{L}_{tKE}$ to train the knowledge encoder and use the knowledge-text prediction (KTP) objective $\mathcal{L}_{KTP}$ to incorporate temporal factual knowledge and textual knowledge in the form of a multi-task loss:

$$\mathcal{L}=\mathcal{L}_{tKE}+\lambda\mathcal{L}_{KTP},$$

where λ is a hyperparameter to balance the tKE loss and the KTP loss. Note that those two tasks share the same embedding layer of entities and predicates. At inference time, we aim to answer link prediction queries, e.g., $(e_s, p, ?, t)$. Since there is no textual description at inference time, we take the entity and predicate embedding as input and use the score function of the knowledge encoder, e.g., Equation 1, to predict the missing links. Specifically, the score function assigns a plausibility score to each quadruple, and the proper object can be inferred by ranking the scores of all quadruples $\{(e_s, p, e_j, t), e_j \in \mathcal{E}\}$ formed with the candidate entities.

§https://huggingface.co/bert-base-uncased

## 4 The Model-Agnostic Property Of ECOLA

ECOLA is model-agnostic and can enhance different temporal knowledge embedding models. Besides ECOLA-DyERNIE, we introduce here two additional variants of ECOLA. ECOLA-DE enhances DE-SimplE, which applies the diachronic embedding (DE) function (Goel et al., 2020). The DE function defines the temporal embedding of entity $e_i$ at timestamp t as

$$\mathbf{e}_{i}^{DE}(t)[n]=\begin{cases}\mathbf{a}_{e_{i}}[n]&\text{if}\ 1\leq n\leq\gamma d,\\ \mathbf{a}_{e_{i}}[n]\sin(\boldsymbol{\omega}_{e_{i}}[n]t+\mathbf{b}_{e_{i}}[n])&\text{else.}\end{cases}\tag{3}$$

Here, $\mathbf{e}_i^{DE}(t)[n]$ denotes the n-th element of the embedding of entity $e_i$ at time t. $\mathbf{a}_{e_i}, \boldsymbol{\omega}_{e_i}, \mathbf{b}_{e_i} \in \mathbb{R}^d$ are entity-specific vectors with learnable parameters, d is the dimensionality, and γ ∈ [0, 1] represents the portion of the time-independent part. ECOLA-UTEE enhances UTEE (Han et al., 2021c), which learns a *shared* temporal encoding for all entities to address the overfitting problem of DE-SimplE on sparse datasets. Compared to ECOLA-DE, ECOLA-UTEE replaces Equation 3 with $\mathbf{e}_i^{UTEE}(t) = [\bar{\mathbf{e}}_i \,\|\, \mathbf{a}\sin(\boldsymbol{\omega}t + \mathbf{b})]$, with $\bar{\mathbf{e}}_i \in \mathbb{R}^{\gamma d}$ and $\mathbf{a}, \boldsymbol{\omega}, \mathbf{b} \in \mathbb{R}^{(1-\gamma)d}$, where $\bar{\mathbf{e}}_i$ denotes the entity-specific time-invariant part, $\|$ denotes concatenation, and a, ω, and b are shared among all entities.

| Dataset | # Entities | # Predicates | # Timestamps | # training set | # validation set | # test set |
|-----------|--------------|----------------|----------------|------------------|--------------------|--------------|
| GDELT | 5849 | 237 | 2403 | 755166 | 94395 | 94395 |
| DUEE | 219 | 41 | 629 | 1879 | 247 | 247 |
| WIKI | 10844 | 23 | 82 | 233525 | 19374 | 19374 |

## 5 Datasets

Training ECOLA requires both temporal KGs and textual descriptions. Given a quadruple $(e_s, p, e_o, t)$, the key point is to find texts that are temporally relevant to $e_s$ and $e_o$ at t. Existing tKG datasets do not provide such information.
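For concreteness, each training example in our datasets (introduced below) pairs one quadruple with one temporally aligned sentence; the record below is a minimal sketch, and the field names and example sentence are illustrative rather than the released file schema.

```python
from dataclasses import dataclass

@dataclass
class QuadrupleTextPair:
    subject: str     # e.g., "R.T. Erdogan"
    predicate: str   # e.g., "visit"
    obj: str         # e.g., "US"
    timestamp: str   # e.g., "2019-11-12"
    sentence: str    # temporally relevant textual description of the event

example = QuadrupleTextPair(
    subject="R.T. Erdogan",
    predicate="visit",
    obj="US",
    timestamp="2019-11-12",
    sentence="Turkish president R.T. Erdogan arrived in the US for an official visit.",
)
```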
To facilitate the research on integrating textual knowledge into temporal knowledge embedding, we reformat GDELT¶, DuEE||, and Wiki**. We show the dataset statistics in Table 1. GDELT is an initiative knowledge base storing events across the globe connecting people and organizations, e.g., (Google, consult, the United States, 2018/01/06). For each quadruple, GDELT provides the link to the news report which the quadruple is extracted from. We assume each sentence that contains both mentions of the subject and object is relevant to the given quadruple, and, thus, temporally aligned with the subject and object at the given timestamp. We pair each of these sentences with the given quadruple to form a training sample. This process is similar to the distant supervision algorithm (Mintz et al., 2009) in the relation extraction task. The proposed dataset contains 5849 entities, 237 predicates, 2403 timestamps, and 943956 quadruples with accompanying sentences. DuEE is originally a human-annotated dataset for event extraction containing 65 event types and 121 argument roles. Each sample contains a sentence and several extracted event tuples. We select 41 event types that could be represented by quadruples and reformat DuEE by manually converting event tuples into quadruples and then pairing quadruples with their corresponding sentence. Wiki is a temporal KG dataset proposed by Leblay and Chekol (2018). Following the postprocessing by Dasgupta et al. (2018), we discretize the time span into 82 different timestamps. We align each entity to its Wikipedia page and extract ¶https://www.gdeltproject.org/data.html\#googlebigquery ||https://ai.baidu.com/broad/download **https://www.wikidata.org/wiki/Wikidata:Main_Page the first section as its description. To construct the relevant textual data of each quadruple, we combine the subject description, relation, and object description into a sequence. In this case, the knowledge-text prediction task lets the subject entity learn the descriptions of its *neighbors at different timestamps*, thus, preserving the temporal alignment between time-dependent entity representation and textual data. ## 6 Experiments We evaluate the enhanced temporal knowledge embedding on the temporal KG completion task. Specifically, we take the entity and predicate embedding of ECOLA-DyERNIE and use Equation 1 to predict missing links. The textual description of test quadruples could introduce essential information and make the completion task much easier. Thus, to make a **fair comparison** with other temporal KG embedding models, we take the enhanced *lookup table embedding* of temporal KGs to perform the link prediction task at test time but use neither textual descriptions of test quadruples nor the language model. We report such results in Table 2. As additional results, we also show the prediction outcome that takes the text description of test quadruples as input in Figure 4a. Baselines We include both static and temporal KG embedding models. From the static KG embedding models, we use TransE (Bordes et al., 2013), DistMult (Yang et al., 2014), and SimplE (Kazemi and Poole, 2018). These methods ignore the time information. From the temporal KG embedding models, we compare our model with several stateof-the-art methods, including ATiSE (Xu et al., 2019), TNTComplE (Lacroix et al., 2020), DyERNIE†† (Han et al., 2020a), TeRO (Xu et al., 2020), and DE-SimplE (Goel et al., 2020). We provide implementation details in Appendix B and attach the source code in the supplementary material. 
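Returning to the GDELT construction in Section 5, the sentence-selection step (pair a quadruple with every sentence of the linked report that mentions both entities) can be sketched as follows; exact lower-cased string matching is an assumption made for illustration, since a production pipeline may rely on aliases or entity linking.

```python
def pair_sentences_with_quadruple(report_sentences, subject_mention, object_mention):
    """Distant-supervision-style pairing: keep sentences that mention both the
    subject and the object of a quadruple extracted from the same news report."""
    subj, obj = subject_mention.lower(), object_mention.lower()
    return [s for s in report_sentences if subj in s.lower() and obj in s.lower()]

# Illustrative example: every returned sentence forms one (quadruple, sentence) pair.
pairs = pair_sentences_with_quadruple(
    ["Google consulted the United States government on the new policy.",
     "The company declined to comment."],
    subject_mention="Google",
    object_mention="the United States",
)
```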
| Datasets | GDELT - filtered | | | Wiki - filtered | | | DuEE - filtered | | |
|-----------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| Model | MRR | Hits@1 | Hits@3 | MRR | Hits@1 | Hits@3 | MRR | Hits@1 | Hits@3 |
| TransE | 8.08 | 0.00 | 8.33 | 27.25 | 16.09 | 33.06 | 34.25 | 4.45 | 60.73 |
| SimplE | 10.98 | 4.76 | 10.49 | 20.75 | 16.77 | 23.23 | 51.13 | 40.69 | 58.30 |
| DistMult | 11.27 | 4.86 | 10.87 | 21.40 | 17.54 | 23.86 | 48.58 | 38.26 | 55.26 |
| TeRO | 6.59 | 1.75 | 5.86 | 32.92 | 21.74 | 39.12 | 54.29 | 39.27 | 63.16 |
| ATiSE | 7.00 | 2.48 | 6.26 | 35.36 | 24.07 | 41.69 | 53.79 | 42.31 | 59.92 |
| TNTComplEx | 8.93 | 3.60 | 8.52 | 34.36 | 22.38 | 40.64 | 57.56 | 43.52 | 65.99 |
| DE-SimplE | 12.25 | 5.33 | 12.29 | 42.12 | 34.03 | 45.23 | 58.86 | 44.74 | 68.62 |
| ECOLA-DE | 19.67 ± 0.11 | 16.04 ± 0.19 | 19.50 ± 0.04 | 43.53 ± 0.08 | 35.78 ± 0.17 | 46.42 ± 0.02 | 60.78 ± 0.16 | 47.43 ± 0.13 | 69.43 ± 0.64 |
| UTEE | 9.76 | 4.23 | 9.77 | 26.96 | 20.98 | 30.39 | 53.36 | 43.92 | 60.52 |
| ECOLA-UTEE | 19.11 ± 0.16 | 15.29 ± 0.38 | 19.46 ± 0.05 | 38.35 ± 0.22 | 30.56 ± 0.18 | 42.11 ± 0.14 | 60.36 ± 0.36 | 46.55 ± 0.51 | 69.22 ± 0.93 |
| DyERNIE | 10.72 | 4.24 | 10.81 | 23.51 | 14.53 | 25.21 | 57.58 | 41.49 | 70.24 |
| ECOLA-DyERNIE | 19.99 ± 0.05 | 16.40 ± 0.09 | 19.78 ± 0.03 | 41.22 ± 0.04 | 33.02 ± 0.06 | 45.00 ± 0.27 | 59.64 ± 0.20 | 46.35 ± 0.18 | 67.87 ± 0.53 |

Evaluation Protocol For each quadruple $q = (e_s, p, e_o, t)$ in the test set $\mathcal{G}_{test}$, we create two queries: $(e_s, p, ?, t)$ and $(?, p, e_o, t)$. For each query, the model ranks all possible entities $\mathcal{E}$ according to their scores. Let $Rank(e_s)$ and $Rank(e_o)$ represent the ranks of $e_s$ and $e_o$ for the two queries, respectively. We evaluate our models using standard metrics across the link prediction literature: the mean reciprocal rank (MRR), $\frac{1}{2\,|\mathcal{G}_{test}|}\sum_{q\in\mathcal{G}_{test}}\big(\frac{1}{Rank(e_s)}+\frac{1}{Rank(e_o)}\big)$, and Hits@k ($k \in \{1, 3, 10\}$), the percentage of times that the true entity candidate appears in the top k of ranked candidates.

Quantitative Study Table 2 reports the tKG completion results on the test sets, which are averaged over three trials. Firstly, we can see that ECOLA-UTEE improves its baseline temporal KG embedding model, UTEE, by a large margin, demonstrating the effectiveness of our fusing strategy. Specifically, ECOLA-UTEE enhances UTEE on GDELT with a *relative improvement* of 95% and 99% in terms of mean reciprocal rank (MRR) and Hits@3, and is nearly **four times** better in terms of Hits@1. Thus, its superiority is clear on GDELT, which is the most challenging dataset among benchmark tKG datasets, containing nearly one million quadruples. Secondly, ECOLA-UTEE and ECOLA-DE generally outperform UTEE and DE-SimplE on the three datasets, demonstrating that ECOLA is model-agnostic and can enhance different tKG embedding models. Besides, in the DuEE dataset, ECOLA-DyERNIE achieves a better performance than DyERNIE in Hits@1 and MRR, but the gap reverses in Hits@3.
The reason could be that ECOLA-DyERNIE is good at classifying hard negatives using textual knowledge, and thus has a high Hits@1; however, since DuEE is much smaller than the other two datasets, ECOLA-DyERNIE may overfit in some cases, where the ground truth is pushed away from the top 3 ranks. Ablation Study We compare DE-SimplE, ECOLA-DE, and ECOLA-SF on GDELT in Figure 3a. ECOLA-SF is the **static counterpart** of ECOLA-DE, where we do not consider the temporal alignment while incorporating textual knowledge. Specifically, ECOLA-SF integrates all textual knowledge into the **time-invariant part** of entity representations. We randomly initialize an embedding vector ¯ei P R dfor each entity ei P E, where ¯ei has the same dimension as the token embedding in the pre-trained language model. Then we learn the **time-invariant part** ¯ei via the knowledge-text prediction task. For the temporal KG completion task, we combine ¯ei with temporal knowledge embeddings, $$\mathbf{e}_{i}^{S F}(t)[n]={\begin{cases}\mathbf{W}_{s f}\bar{\mathbf{e}}_{i}[n]\quad{\mathrm{if}}\ \ 1\leqslant n\leqslant\gamma d,\\ \mathbf{a}_{e_{i}}[n]\sin(\boldsymbol{\omega}_{e_{i}}[n]t+\mathbf{b}_{e_{i}}[n]){\mathrm{~else}},\end{cases}}$$ where e SF iptq P R dis an entity embedding containing static and temporal embedding part. aei , ωei , bei P R d´γd are entity-specific vectors with learnable parameters. Wsf P R dˆγd is matrix with learnable weights. As shown in Figure 3a, the performance gap between ECOLA-DE and ECOLA-SF is significant, demonstrating the tempo- (a) (b) ![7_image_0.png](7_image_0.png) ral alignment between time-dependent entity representation and textual knowledge is more powerful than the *static alignment*. Moreover, Figure 3b shows the results of different masking strategies on GDELT. The first strategy, e.g., *Masking E+R+W*, allows to simultaneously mask predicate, entity, and subword tokens in the same training sample. The second strategy is *Masking E/R+W*, where we mask 15% subword tokens in the language part, *and either* an entity or a predicate in the knowledge tuple. In the third strategy called *Masking E/R/W*, for each training sample, we choose to mask *either* subword tokens, an entity, or the predicate. Figure 3b shows the advantage of the second masking strategy, indicating that remaining adequate information in the knowledge tuple helps the model to align the knowledge embedding and language representations. Qualitative Analysis To investigate why incorporating textual knowledge can improve the tKG embedding models' performance, we study the test samples that have been correctly predicted by the fusion model ECOLA-DE but wrongly by the tKG model DE-SimplE. It is observed that language representations help overcome the incompleteness of the tKG by leveraging knowledge from augmented textual data. For example, there is a test quadruple (US, host a visit, ?, 2019-11-14) with ground truth R.T. Erdo˘gan. The training set contains a quite relevant quadruple, i.e., *(Turkey, intend to negotiate* with, US, 2019-11-11). However, the given tKG does not contain information indicating that the entity *R.T. Erdo˘gan* is a representative of *Turkey*. So it is difficult for the tKG model DE-SimplE to infer the correct answer from the above-mentioned quadruple. In ECOLA-DE, the augmented textual (a) (b) ![7_image_1.png](7_image_1.png) data do contain such information, e.g. "The president of Turkey, R.T. Erdogan, inaugurated in Aug. 2014.", which narrows the gap between *R.T. Erdogan* and *Turkey*. 
Thus, by integrating textual information into temporal knowledge embedding, the enhanced model can gain additional information which the knowledge base does not include. ## 7 Discussion Inference with Textual Data In Section 6, we compared different tKG embedding models, where textual data of test quadruples is absent during inference time. However, if the textual descriptions of the test quadruples are given during inference, will the contextualized language model incorporate this information into tKG embeddings? We use the entity predictor of the knowledge-text prediction task to perform the tKG completion task on GDELT. As shown in Figure 4a, the results show significant improvement across all metrics, specifically, 145% relatively higher regarding MRR of ECOLA-UTEE when given textual data during inference than not given. Thus, the results confirm that KTP task is a good choice for successful alignment between knowledge and language space and ECOLA utilizes the pre-trained language model to inject language representations into temporal knowledge embeddings. Masking Temporal Information in KTP As temporal alignment is crucial for enhancing temporal knowledge embeddings, we study the effect of masking temporal information by extending the existing KTP task with an additional *time prediction* task, where the timestamp in the input is masked, | Datasets | GDELT - filtered | Wiki - filtered | | | | | |-------------|--------------------|-------------------|-----------|-----------|-----------|--------| | Model | MRR | Hits@1 | Hits@3 | MRR | Hits@1 | Hits@3 | | ECOLA-UTEE | 19.11 | 15.29 | 19.46 | 38.35 | 30.56 | 42.11 | | tECOLA-UTEE | 20.39 | 16.83 | 20.08 | 42.53 | 34.06 | 46.32 | | (6.7% Ò) | (10.1% Ò) | (3.2% Ò) | (10.9% Ò) | (11.5% Ò) | (10.0% Ò) | | and the model learns to predict the original timestamp. The extended model is named tECOLAUTEE and has significant performance gain on both GDELT and Wiki datasets across all metrics as shown in Table 3. We conjecture that the additional time prediction task forces the model to capture the temporal dynamics in temporal knowledge embeddings and utilize the temporal information in given textual descriptions. Since each temporal knowledge embedding models the temporal information in different ways, masking and predicting temporal information will be specific to each temporal knowledge embedding model. We leave this finding to future work for further inspections. Temporal Question Answering Although we focus on generating informative temporal knowledge embeddings in this work, joint models often benefit both the language model and the temporal KG model mutually. Unlike previous joint models (Zhang et al., 2019; Peters et al., 2019), we do not modify the Transformer architecture, e.g., adding entity linkers or fusion layers. Thus, the language encoder enhanced by external knowledge can be adapted to a wide range of downstream tasks as easily as BERT. Besides the tKG completion task, we evaluate the enhanced language model in ECOLA on the temporal question-answering task to study its enhancement. Natural questions often include temporal constraints, e.g., who was the US president before Jimmy Carter? To deal with such challenging temporal constraints, temporal question answering over temporal knowledge base, formulated as TKGQA task, has become trendy since tKGs help to find entity or timestamp answers with support of temporal facts. Saxena et al. 
(2021) introduced the dataset CRONQUESTIONS containing natural temporal questions with different types of temporal constraints. They proposed a baseline CRONKGQA that uses BERT to understand the temporal constraints, followed by a scoring function for answer prediction. We apply ECOLA to enhance the BERT in CRONKGQA then plug it back into CRONKGQA and finetune it on the question answering dataset. We name the enhanced model as ECOLA-CRONKGQA. The models are evaluated with standard metrics *Hits*@kpk P t1, 3uq: the percentage of times that the true entity or time candidate appears in the top k of ranked candidates. Figure 4b shows that our proposed ECOLA considerably enhances CronKGQA, demonstrating the benefits of ECOLA to the language model. ## 8 Conclusion We introduced ECOLA to enhance time-evolving entity representations with temporally relevant textual data using a novel knowledge-text prediction task. Besides, we constructed three datasets that contain paired structured temporal knowledge and unstructured textual descriptions, which can benefit future research on fusing *temporal* structured and unstructured knowledge. Extensive experiments show ECOLA can improve various temporal knowledge graph models by a large margin. ## Limitations To train ECOLA, we need to provide structured knowledge with aligned unstructured textual data to the model. Thus, we should either manually pair quadruples with event descriptions or use some matching algorithm to automatically build the pairs. The former requires human labeling effort and is hard to apply on large-scale datasets, while the latter would introduce noise into the dataset. Thus, ECOLA is currently tailored for domain adaptation and enhances pre-trained models with domain knowledge. There is still work to be done to let models be jointly trained on *large-scale* structured and unstructured data. ## Ethics Statement ECOLA is tailored to integrate temporal knowledge embedding and textual knowledge and can be applied to a wide variety of downstream tasks, such as temporal knowledge graph link prediction and temporal question answering. It can also power search and, thus, serve as a key intermediary of information in users' lives. Since most temporal knowledge graphs are automatically extracted from web data, it's important to ensure it does not contain offensive content. ECOLA can be used to classify the quadruples in temporal knowledge graphs using the pre-trained language model and contribute to the knowledge graph protection's perspective. ## Acknowledgement The authors acknowledge support by the German Federal Ministry for Education and Research (BMBF), funding project "Software Campus 2.0 (LMU München) (grant 01IS17048)". ## References Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. *Advances in neural information processing systems*, 26. Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha Talukdar. 2018. Hyte: Hyperplane-based temporally aware knowledge graph embedding. In Proceedings of the 2018 conference on empirical methods in natural language processing, pages 2001– 2011. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In *Proceedings of the AAAI* conference on artificial intelligence, volume 32. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. 
*arXiv preprint arXiv:1810.04805*. Zifeng Ding, Jingpei Wu, Bailan He, Yunpu Ma, Zhen Han, and Volker Tresp. 2022. Few-shot inductive learning on temporal knowledge graphs using concept-aware information. arXiv preprint arXiv:2211.08169. Rishab Goel, Seyed Mehran Kazemi, Marcus Brubaker, and Pascal Poupart. 2020. Diachronic embedding for temporal knowledge graph completion. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pages 3988–3995. Zhen Han, Peng Chen, Yunpu Ma, and Volker Tresp. 2020a. DyERNIE: Dynamic Evolution of Riemannian Manifold Embeddings for Temporal Knowledge Graph Completion. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7301–7316, Online. Association for Computational Linguistics. Zhen Han, Peng Chen, Yunpu Ma, and Volker Tresp. 2021a. Explainable subgraph reasoning for forecasting on temporal knowledge graphs. In Proceedings of the 2021 International Conference on Learning Representations. Zhen Han, Zifeng Ding, Yunpu Ma, Yujia Gu, and Volker Tresp. 2021b. Learning neural ordinary equations for forecasting future links on temporal knowledge graphs. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8352–8364, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Zhen Han, Yunpu Ma, Yuyi Wang, Stephan Günnemann, and Volker Tresp. 2020b. Graph hawkes neural network for forecasting on temporal knowledge graphs. arXiv preprint arXiv:2003.13432. Zhen Han, Gengyuan Zhang, Yunpu Ma, and Volker Tresp. 2021c. Time-dependent entity embedding is not all you need: A re-evaluation of temporal knowledge graph completion models under a unified framework. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8104–8118, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Bin He, Di Zhou, Jinghui Xiao, Qun Liu, Nicholas Jing Yuan, Tong Xu, et al. 2019. Integrating graph contextualized knowledge into pre-trained language models. arXiv preprint arXiv:1912.00147. Woojeong Jin, Meng Qu, Xisen Jin, and Xiang Ren. 2019. Recurrent event network: Autoregressive structure inference over temporal knowledge graphs. arXiv preprint arXiv:1904.05530. Seyed Mehran Kazemi and David Poole. 2018. Simple embedding for link prediction in knowledge graphs. Advances in neural information processing systems, 31. Bosung Kim, Taesuk Hong, Youngjoong Ko, and Jungyun Seo. 2020. Multi-task learning for knowledge graph completion with pre-trained language models. In *Proceedings of the 28th International* Conference on Computational Linguistics, pages 1737–1743. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Timothée Lacroix, Guillaume Obozinski, and Nicolas Usunier. 2020. Tensor decompositions for temporal knowledge base completion. arXiv preprint arXiv:2004.04926. Julien Leblay and Melisachew Wudage Chekol. 2018. Deriving validity time in knowledge graph. In *Companion Proceedings of the The Web Conference 2018*, pages 1771–1776. Kalev Leetaru and Philip A Schrodt. 2013. Gdelt: Global data on events, location, and tone, 1979–2012. In *ISA annual convention*, volume 2, pages 1–49. Citeseer. Xinyu Li, Fayuan Li, Lu Pan, Yuguang Chen, Weihua Peng, Quan Wang, Yajuan Lyu, and Yong Zhu. 2020. Duee: a large-scale dataset for chinese event extraction in real-world scenarios. 
In CCF International Conference on Natural Language Processing and Chinese Computing, pages 534–545. Springer. Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2020. K-bert: Enabling language representation with knowledge graph. In *Proceedings of the AAAI Conference on Artificial* Intelligence, volume 34, pages 2901–2908. Yunpu Ma, Volker Tresp, and Erik A Daxberger. 2019. Embedding models for episodic knowledge graphs. Journal of Web Semantics, 59:100490. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In *Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and* the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003– 1011. Matthew E Peters, Mark Neumann, Robert L Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A Smith. 2019. Knowledge enhanced contextual word representations. *arXiv preprint* arXiv:1909.04164. Apoorv Saxena, Soumen Chakrabarti, and Partha Talukdar. 2021. Question answering over temporal knowledge graphs. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics. Haohai Sun, Jialun Zhong, Yunpu Ma, Zhen Han, and Kun He. 2021. Timetraveler: Reinforcement learning for temporal knowledge graph forecasting. *arXiv* preprint arXiv:2109.04101. Tianxiang Sun, Yunfan Shao, Xipeng Qiu, Qipeng Guo, Yaru Hu, Xuanjing Huang, and Zheng Zhang. 2020. Colake: Contextualized language and knowledge embedding. *arXiv preprint arXiv:2010.00309*. Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 1499–1509. Volker Tresp, Cristóbal Esteban, Yinchong Yang, Stephan Baier, and Denis Krompaß. 2015. Learning with memory embeddings. *arXiv preprint* arXiv:1511.07972. Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhengyan Zhang, Zhiyuan Liu, Juanzi Li, and Jian Tang. 2021. Kepler: A unified model for knowledge embedding and pre-trained language representation. *Transactions of the Association for Computational Linguistics*, 9:176–194. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. *arXiv preprint arXiv:1609.08144*. Chengjin Xu, Mojtaba Nayyeri, Fouad Alkhoury, Hamed Shariat Yazdi, and Jens Lehmann. 2020. TeRo: A time-aware knowledge graph embedding via temporal rotation. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1583–1593, Barcelona, Spain (Online). International Committee on Computational Linguistics. Chengjin Xu, Mojtaba Nayyeri, Fouad Alkhoury, Hamed Shariat Yazdi, and Jens Lehmann. 2019. Temporal knowledge graph embedding model based on additive time series decomposition. *arXiv preprint* arXiv:1911.07893. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. *arXiv* preprint arXiv:1412.6575. Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Kgbert: Bert for knowledge graph completion. arXiv preprint arXiv:1909.03193. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. 
Ernie: Enhanced language representation with informative entities. arXiv preprint arXiv:1905.07129. ## Appendix A Related Work Of Temporal Knowledge Embedding Temporal Knowledge Embedding (tKE) is also termed Temporal Knowledge Representation Learning (TKRL), which is to embed entities and predicates of temporal knowledge graphs into lowdimensional vector spaces. TKRL is an expressive and popular paradigm underlying many KG models (Jin et al., 2019; Han et al., 2020b, 2021b; Sun et al., 2021; Lacroix et al., 2020; Ding et al., 2022). To capture temporal aspects, each model either embeds discrete timestamps into a vector space or learns time-dependent representations for each entity. Ma et al. (2019) developed extensions of static knowledge graph models by adding timestamp embeddings to their score functions. Besides, HyTE (Dasgupta et al., 2018) embeds time information in the entity-relation space by learning a temporal hyperplane to each timestamp and projects the embeddings of entities and relations onto timestamp-specific hyperplanes. Later, Goel et al. (2020) equipped static models with a diachronic entity embedding function which provides the characteristics of entities at any point in time and achieves strong results. Moreover, Han et al. (2020a) introduced a non-Euclidean embedding approach that learns evolving entity representations in a product of Riemannian manifolds. It is the first work to contribute to geometric embedding for tKG and achieves state-of-the-art performances on the benchmark datasets. Besides, Han et al. (2021a) proposed a subgraph reasoning algorithm for temporal knowledge graph forecasting, which can provide human-understandable evidence to its prediction. ## B Implementation We use the datasets augmented with reciprocal relations to train all baseline models. We tune the hyperparameters of our models using the random search and report the best configuration. Specifically, we set the loss weight λ to be 0.3, except for ECOLA-DE model trained on Wiki dataset where λ is set to be 0.001. We use the Adam optimizer (Kingma and Ba, 2014). We use the implementation of DE-SimplE‡‡, ATiSE/TeRO§§. We use the code for TNTComplEx from the tKG framework ‡‡https://github.com/BorealisAI/de-simple §§https://github.com/soledad921/ATISE (Han et al., 2021c). We implement TTransE based on the implementation of TransE in PyKEEN¶¶. We tune the model across a range of hyperparameters as shown in Table 6. We provide the detailed settings of hyperparameters of each baseline model and ECOLA in Table 4. ## C The Amount Of Compute And The Type Of Resources Used We run our experiments on an NVIDIA A40 with a memory size of 48G. We provide the **training** time of our models and some baselines in Table 5. Note that there are no textual descriptions at inference time, and we take the entity and predicate embedding as input and use the score function of KG models to predict the missing links. Thus, the inference time of ECOLA (e.g., ECOLA-DE) and its counterpart KG model (e.g., DE-SimplE) is the same. The numbers of parameters are in Table 7 . ## D The License Of The Assets We adapt three existing datasets, i.e., GDELT, DuEE, and Wiki. We would first state the original license. - **GDELT:** as stated in the term of use of GDELT***, the GDELT Project is an open platform for research and analysis of global society and thus all datasets released by the GDELT Project are available for unlimited and unrestricted use for any academic, commercial, or governmental use of any kind without fee. 
One may redistribute, rehost, republish, and mirror any of the GDELT datasets in any form. However, any use or redistribution of the data must include a citation to the GDELT Project and a link to this website (https://www.gdeltproject.org/). - **Wiki** is proposed by Dasgupta et al. (2018) and has Apache License 2.0. - **DuEE** is released by Baidu Research. As stated on its website†††, they have committed to provide these datasets at no cost for research and personal uses. For the derived datasets, we only release a short version due to the size limit of uploads. Thus, we will release the full version and give the license, ¶¶https://github.com/pykeen/pykeen ***https://www.gdeltproject.org/about.html\#termsofuse †††https://ai.baidu.com/broad/introduction?dataset=duee | Parameters | Embedding dimension | Negative Sampling | Learning rate | Batch Size | | | | | | | | | |---------------|-----------------------|---------------------|-----------------|--------------|------|------|--------|--------|--------|-------|------|------| | Datasets | GDELT | DuEE | Wiki | GDELT | DuEE | Wiki | GDELT | DuEE | Wiki | GDELT | DuEE | Wiki | | TransE | 768 | 768 | 768 | 200 | 100 | 100 | 5e-4 | 5e-4 | 5e-4 | 256 | 128 | 256 | | SimplE | 768 | 768 | 768 | 200 | 100 | 100 | 5e-4 | 5e-4 | 5e-4 | 256 | 128 | 256 | | TTransE | 768 | 768 | 768 | 200 | 100 | 100 | 5.2e-4 | 5.2e-4 | 5.2e-4 | 256 | 256 | 256 | | TNTComplEx | 768 | 768 | 768 | 200 | 100 | 100 | 1.5e-4 | 1.5e-4 | 1.5e-4 | 256 | 256 | 256 | | DE-SimplE | 768 | 768 | 768 | 200 | 100 | 100 | 5e-4 | 5e-4 | 5e-4 | 256 | 128 | 256 | | ECOLA-SF | 768 | 768 | 768 | 200 | 100 | 100 | 1e-4 | 2e-5 | 1e-4 | 64 | 16 | 64 | | ECOLA-DE | 768 | 768 | 768 | 200 | 200 | 200 | 2e-5 | 2e-5 | 2e-5 | 4 | 8 | 4 | | ECOLA-UTEE | 768 | 768 | 768 | 200 | 200 | 200 | 2e-5 | 2e-5 | 2e-5 | 4 | 8 | 4 | | ECOLA-dyERNIE | 768 | 768 | 768 | 200 | 200 | 200 | 2e-5 | e-4 | 2e-5 | 4 | 8 | 4 | Table 5: The runtime of the training procedure (in hours). | Dataset | GDELT | DuEE | Wiki | |---------------|---------|--------|--------| | DE-SimplE | 17 | 0.5 | 5.0 | | ECOLA-DE | 24.0 | 16.7 | 43.2 | | UTEE | 67.3 | 0.5 | 11.3 | | ECOLA-UTEE | 36.0 | 12.8 | 45.6 | | DyERNIE | 25 | 0.1 | 5.9 | | ECOLA-DyERNIE | 23.8 | 10.8 | 67.2 | Table 6: Search space of hyperparameters. | Hyperparameter | Search space | |------------------|-----------------------------| | learning rate | {e-5, 5e-5, e-4, 5e-4, e-3} | | warm up | {0.05, 0.2, 0.3} | | weight decay | {0.01, 0.05, 0.2} | | batch size | {16, 128, 256, 512} | Table 7: The number of parameters (M). | Dataset | GDELT | DuEE | Wiki | |-----------|---------|--------|--------| | DyERNIE | 159 | 140 | 174 | | UTEE | 150 | 139 | 158 | | DE | 175 | 140 | 173 | copyright information, and terms of use once the paper gets accepted. ## E Documentation Of The Artifacts This paper uses three datasets, GDELT, Wiki, and DuEE. GDELT mainly covers social and political events written in English. Wiki in this paper mainly contains evolving knowledge, i.e., affiliation and residence place information, which is also written in English. DuEE is a dataset in Chinese and mainly talks about social news, such as the launch of new electronic products. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? There is a Limitation section in the paper before the references. ✗ A2. Did you discuss any potential risks of your work? 
The proposed method is a general method of jointly learning structural knowledge and textual knowledge. We do not see any obvious potentially risks. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 1, 3, 4, 5. ✓ B1. Did you cite the creators of artifacts you used? Section 5 and Section 6. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix E. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 4 and Section 5. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The data we used is built upon the GDELT project (https://www.gdeltproject.org/data.html), which was collected from news webpages, e.g., BBC. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Appendix F. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 5 and Table 4 in the appendix. ## C ✓ **Did You Run Computational Experiments?** Section 6. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix D. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix C. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 6 and Table 1. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix C. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. 
Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
ghanbarzadeh-etal-2023-gender
Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language Models
https://aclanthology.org/2023.findings-acl.336
Recent studies have revealed that the widely-used Pre-trained Language Models (PLMs) propagate societal biases from the large unmoderated pre-training corpora. Existing solutions require debiasing training processes and datasets for debiasing, which are resource-intensive and costly. Furthermore, these methods hurt the PLMs' performance on downstream tasks. In this study, we propose Gender-tuning, which debiases the PLMs through fine-tuning on downstream tasks' datasets. For this aim, Gender-tuning integrates Masked Language Modeling (MLM) training objectives into fine-tuning's training process. Comprehensive experiments show that Gender-tuning outperforms the state-of-the-art baselines in terms of average gender bias scores in PLMs while improving PLMs' performance on downstream tasks solely using the downstream tasks' dataset. Also, Gender-tuning is a deployable debiasing tool for any PLM that works with original fine-tuning.
# Gender-Tuning: Empowering Fine-Tuning For Debiasing Pre-Trained Language Models

Somayeh Ghanbarzadeh, University of North Texas, [email protected]
Yan Huang, University of North Texas, [email protected]
Hamid Palangi1, Radames Cruz Moreno2, Hamed Khanpour3, Microsoft Research, {hpalangi1,radames.cruz2,hamed.khanpour3}@microsoft.com

## Abstract

Recent studies have revealed that the widely-used Pre-trained Language Models (PLMs) propagate societal biases from the large unmoderated pre-training corpora. Existing solutions require debiasing training processes and datasets for debiasing, which are resource-intensive and costly. Furthermore, these methods hurt the PLMs' performance on downstream tasks. In this study, we propose *Gender-tuning*, which debiases the PLMs through fine-tuning on downstream tasks' datasets. For this aim, Gender-tuning integrates Masked Language Modeling (MLM) training objectives into fine-tuning's training process. Comprehensive experiments show that Gender-tuning outperforms the state-of-the-art baselines in terms of average gender bias scores in PLMs while improving PLMs' performance on downstream tasks solely using the downstream tasks' dataset. Also, Gender-tuning is a deployable debiasing tool for any PLM that works with original fine-tuning.

## 1 Introduction

Pre-trained Language Models (PLMs) have achieved state-of-the-art performance across various tasks in natural language processing (Devlin et al., 2019; Liu et al., 2019; Clark et al., 2020). One of the crucial reasons for this success is pre-training on large-scale corpora, which are collected from unmoderated sources such as the internet. Prior studies (Caliskan et al., 2017; Zhao et al., 2018; May et al., 2019; Kurita et al., 2019; Gehman et al., 2020) have shown that PLMs capture a significant amount of the social biases existing in the pre-training corpus. For instance, they showed that PLMs learn that the word "he" is closer to the word "engineer" because of the frequent co-occurrence of this combination in the training corpora, which is known as a social gender bias. Since PLMs are increasingly deployed in real-world scenarios, there is a serious concern that they propagate discriminative predictions and unfairness.

Several solutions for mitigating social biases have been proposed, including using banned word lists (Raffel et al., 2020), building deliberated training datasets (Bender et al., 2021), balancing the biased and unbiased terms in the training dataset (Dixon et al., 2018; Bordia and Bowman, 2019), debiasing embedding spaces (Liang et al., 2020; Cheng et al., 2021), and self-debiasing in text generation (Schick et al., 2021). Although all these solutions have shown different levels of success, they tend to limit the PLMs' ability (Meade et al., 2022). For example, the banned-words solution prevents gaining knowledge of topics related to the banned words. Also, some of them hurt the PLMs' performance on downstream tasks. Furthermore, dataset curation and pre-training are two resource-intensive tasks needed for most of the above solutions (Schick et al., 2021).

In this study, we address the challenges mentioned above by proposing an effective approach named *Gender-tuning* for debiasing the PLMs through fine-tuning on downstream tasks' datasets. For this goal, Gender-tuning perturbs the training examples by first finding the gender-words in the training examples based on a given gender-word list. Then Gender-tuning replaces them with new words to interrupt the association between the gender-words and other words in the training examples (Table 1). Finally, Gender-tuning classifies the examples with the replaced words according to the original training examples' ground-truth labels to compute a joint loss from perturbation and classification for training Gender-tuning. The key advantage of our method is integrating the debiasing process into fine-tuning, which allows debiasing and fine-tuning to be performed simultaneously. Thus, Gender-tuning does not require separate pre-training or additional training data. Also, this integration makes Gender-tuning a plug-and-play
Then Gender-tuning replaces them with the new words to interrupt the association between the gender-words and other words in the training examples (Table 1). Finally, Gender-tuning classifies the examples with the replaced words according to the original training examples' ground-truth labels to compute a joint loss from perturbation and classification for training the Gender-tuning. The key advantage of our method is integrating the debiasing process into the fine-tuning that allows the debiasing and fine-tuning to perform simultaneously. Thus, Gender-tuning does not require separate pre-training or additional training data. Also, this integration makes Gender-tuning a plug-and5448 ![1_image_1.png](1_image_1.png) play debiasing tool for any PLMs that works with original fine-tuning. To evaluate the effectiveness of our proposed method, we conducted comprehensive experiments following two state-of-the-art debiasing baselines: SENT_DEBIAS (Sent-D) (Liang et al., 2020) and FairFil (FairF) (Cheng et al., 2021). The results show that Gender-tuning outperforms both baselines in terms of the average gender-bias scores in the BERT model while improving its performance on the downstream tasks. In addition, we reported the performance of Gender-tuning applied to the RoBERTa that shows considerable improvement. Finally, our ablation studies demonstrate that all components of Gender-tuning, including two training phases and joint loss, play an essential role in achieving success. ## 2 Methodology We propose a novel debiasing approach, named Gender-tuning (Figure 1), that performs the debiasing process and fine-tuning simultaneously on the downstream tasks' dataset. For this aim, Gender-tuning integrates two training objectives: 1) Masked Language Modeling (MLM) training objective for gender-word perturbation and 2) Finetuning for classification. In each training batch, Gender-tuning works as follows: Gender-tuning uses MLM to perturb training examples by masking the existing gender-word(s)1. The MLM training objective is to predict masked token(s) with a mean cross-entropy loss that we denote as perturbation-loss (L*perturb*). The training examples with predicted tokens, called *genderperturbed examples* (Table 1), are fed into fine1We use the feminine and masculine word lists created by (Zhao et al., 2018) ![1_image_0.png](1_image_0.png) tuning to be classified according to the original examples' ground-truth label (y). Then pθ(y′ = y|xˆ) is the fine-tuning classification function to predict the gender-perturbed example's label (y′) based on the gender-perturbed example (xˆ) to compute the fine-tuning loss (Lf ine−*tuning*), where θ is the PLM's parameters for the fine-tuning. A weighted aggregation of the perturbation loss and fine-tuning loss, called joint-loss (L*joint*), is used for training the Gender-tuning as follows: $$\;\;\;u r b\;+\;\;(1\;\;.$$ $${\mathfrak{x}}){\mathcal{L}}_{f i n e-}$$ Ljoint = α L*perturb* + (1 − α)Lf ine−*tuning* (1) where α is a weighting factor that is employed to adjust the contribution of the two training losses in computing the joint-loss. The Gender-tuning training objective is to minimize joint-loss to ensure that the label of the perturbed example is the same as the label of the original training example. In the following, we present how joint-loss impacts the training process of Gender-tuning in each training batch: Suppose the MLM predicts an incorrect token. 
For instance, the example: "the film affirms the power of the [actress]" changes to "the film affirms the power of the [trauma]". In this example, the predicted word [trauma] is a non-related genderword that raises perturbation-loss value (L*perturb* > 0). In this case, even if fine-tuning classifies the perturbed example correctly, joint-loss is still big enough to force Gender-tuning to continue training. Also, suppose Gender-tuning creates social gender bias through gender perturbation. For instance, the example: "angry black [actor]" changes to "angry black [woman]" that "woman" and "actor" are not close semantically that raises perturbation-loss SST-2 BERT RoBERTa Origin Sent-D FairF Gender-tuning*random* Gender-tuning (ours) Origin Gender-tuning*random* Gender-tuning (ours) Names, Career/Family 0.03 0.10 0.21 0.46 **0.03** 0.07 **0.08** 0.14 Terms, Career/Family 0.01 0.05 0.37 **0.03** 0.16 0.33 0.44 **0.01** Terms, Math/Art 0.21 0.22 0.26 **0.05** 0.39 1.32 1.25 **0.57** Names, Math/Art 1.15 0.75 **0.09** 0.65 0.31 1.34 1.12 **1.11** Terms, Science/Art 0.10 0.08 0.12 0.42 **0.07** 0.25 **0.12** 0.47 Names, Science/Art 0.22 0.04 **0.005** 0.38 0.10 0.47 0.62 **0.47** Avg. Abs. e-size 0.291 0.212 0.182 0.331 **0.176** 0.630 0.605 **0.461** Accuracy 91.97 89.10 91.60 **92.66** 92.10 93.57 **93.92** 93.69 CoLA Names, Career/Family 0.009 0.14 **0.03** 0.34 0.09 0.29 0.15 **0.05** Terms, Career/Family 0.19 0.18 0.11 0.15 **0.03** 0.26 0.08 **0.00** Terms, Math/Art 0.26 0.31 0.09 0.55 **0.08** 0.06 **0.02** 0.15 Names, Math/Art 0.15 0.30 **0.10** 0.72 0.24 0.06 0.25 **0.07** Terms, Science/Art 0.42 0.16 0.24 **0.05** 0.07 0.32 **0.57** 0.70 Names, Science/Art 0.03 0.19 0.12 0.28 **0.07** 0.27 0.14 **0.03** Avg. Abs. e-size 0.181 .217 0.120 0.343 **0.096** 0.210 0.201 **0.166** Accuracy 56.51 55.40 56.50 **56.85** 56.60 57.35 57.55 **58.54** QNLI Names, Career/Family 0.26 0.05 0.10 **0.01** 0.02 0.04 0.38 **0.17** Terms, Career/Family 0.15 **0.004** 0.20 0.13 0.04 0.22 0.10 **0.04** Terms, Math/Art 0.58 **0.08** 0.32 0.30 **0.08** 0.53 0.16 **0.09** Names, Math/Art 0.58 0.62 0.28 0.23 **0.16** 0.48 0.06 **0.03** Terms, Science/Art 0.08 0.71 0.24 0.25 **0.21** 0.47 0.57 **0.53** Names, Science/Art 0.52 0.44 0.16 0.15 **0.04** 0.36 0.47 0.52 Avg. Abs. e-size 0.365 0.321 0.222 0.178 **0.091** 0.350 0.290 **0.230** Accuracy 91.30 90.60 90.80 **91.61** 91.32 92.03 **92.51** 92.09 value (L*perturb* > 0). In this case, the output of the fine-tuning might be correct (Lf ine−*tuning* ≈ 0) due to the PLMs' learned biases ("angry black woman" is a known gender/race bias). However, due to the big value of perturbation-loss, the joinloss is big enough to override fine-tuning results and forces Gender-tuning to continue training. Moreover, we observed that sometimes example perturbation changes the concept/label of training examples. For instance, the input: "[He] is an excellent [actor] (label: positive)" changes to "[She] is a wonderful [murderer] (label: positive)", and fine-tuning classification output is correct (Lf ine−*tuning* ≈ 0). In this example, the predicted word [murderer] is conceptually far from gender-related words [actor]. So, perturbation loss becomes significant, which creates a big value for joint-loss to force Gender-tuning to continue training. Finally, we found examples that MLM replaces the gender-word with the [UNK] token. In these examples, the perturbation-loss is close to zero (L*perturb* ≈ 0) and the output of the finetuning classifier is incorrect (Lf ine−*tuning* > 0). 
In this case, the joint-loss is big enough to continue training and provide a new chance for MLM to predict a meaningful token instead of a [UNK]. More analysis of our perturbation strategy can be found in Section 4.1 and Table 3. ## 3 Experimental Setup To evaluate our proposed method, we conduct experiments by following the evaluation process of the two state-of-the-art baselines (Sent-D and FairF) such as the bias evaluation metric (SEAT), applied PLMs, and downstream tasks' datasets.2 We report the SEAT effect size (e-size), average absolute e-size, and classification accuracy on downstream tasks for three different setups: 1) Origin: fine-tuning the PLMs on the downstream task datasets using huggingface transformers code (Wolf et al., 2020). 2) **Gender-tuning***random*: instead of replacing the gender-words in an training example, Gender-tuning*random* replaces a certain percentage of an input tokens randomly (5% of each input sequence). 3) **Gender-tuning**: the proposed method. We used the same hyperparameter for all three setups for a fair comparison. ## 4 Results And Discussion Table 2 illustrates SEAT absolute effect size (esize) (lower is better) on sentence templates of Terms/Names under different gender domains provided by (Caliskan et al., 2017), average absolute e-size (lower is better), and classification accuracy on downstream tasks (higher is better) 2Details of the baselines, bias evaluation metric, PLMs, datasets, and hyperparameters are presented in Appendix A. | Training input | Perturbed | Type | Label | |----------------------------------------------------|-----------------------------------------------------|----------------|---------| | with [his] usual intelligence and subtlety. | with [the] usual intelligence and subtlety. | neutral | 1 | | by casting an [actress] whose face projects | by casting an [image] whose face projects | | | | that [woman] 's doubts and yearnings , | that [person] 's doubts and yearnings , | neutral | 1 | | it succeeds. | it succeeds. | | | | certainly has a new career ahead of [him] if | certainly has a new career ahead of [her] if | convert-gender | 1 | | [he] so chooses. | [she] so chooses. | | | | by [men] of marginal intelligence , with | by [people] of marginal intelligence , with | neutral | 0 | | reactionary ideas. | reactionary ideas. | | | | why this distinguished [actor] would stoop so low. | why this distinguished [man] would stoop so low. | same-gender | 0 | | it is very awful - - and oozing with creepy [men]. | it is very awful - - and oozing with creepy [UNK] . | deleting | 0 | | Proves once again [he] hasn't lost. | Proves once again [he] hasn't lost . | identical | 1 | for three experiment setups (Section 3) and two state-of-the-art baselines. The results show that Gender-tuning outperforms the baselines regarding the average absolute effect size for both PLMs on all datasets. Also, in contrast with the baselines, Gender-tuning improves the accuracy of both PLMs on all downstream tasks. It shows that the proposed method preserves the useful semantic information of the training data after debiasing. The Gender-tuning*random* results show an inconsistent effect on the bias scores. Although Gendertuning*random* improves the PLMs' accuracy on the downstream tasks, it significantly magnifies the bias score in the BERT model on SST-2 and CoLA. Also, it slightly reduces the average bias score in the RoBERTa on all datasets and in BERT on the QNLI. 
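Before turning to the perturbation analysis, the sketch below illustrates one Gender-tuning training step as described in Section 2: gender words are masked, the MLM perturbation loss is computed, the gender-perturbed example is classified against the original ground-truth label, and the two losses are combined as in Eq. (1) with α = 0.7 (the value reported in Appendix A.5). This is our own minimal sketch on top of the Hugging Face `transformers` API rather than the authors' implementation: the toy `GENDER_WORDS` set stands in for the full word lists of Zhao et al. (2018), the fallback for examples without gender words and the hard argmax replacement are our assumptions, and separate MLM and classification heads are instantiated for clarity even though the paper fine-tunes a single PLM.

```python
import torch
from transformers import (AutoModelForMaskedLM,
                          AutoModelForSequenceClassification, AutoTokenizer)

GENDER_WORDS = {"he", "she", "him", "her", "his", "man", "woman", "actor", "actress"}  # toy subset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
clf = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def gender_tuning_step(sentence: str, label: int, alpha: float = 0.7) -> torch.Tensor:
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"].clone()

    # 1) Mask every gender word in the example; -100 marks positions ignored by the MLM loss.
    mlm_labels = torch.full_like(input_ids, -100)
    for pos, tok_id in enumerate(input_ids[0]):
        if tokenizer.convert_ids_to_tokens(int(tok_id)) in GENDER_WORDS:
            mlm_labels[0, pos] = tok_id
            input_ids[0, pos] = tokenizer.mask_token_id

    if not (mlm_labels != -100).any():
        # No gender word present: fall back to ordinary fine-tuning (our assumption).
        return clf(**enc, labels=torch.tensor([label])).loss

    # 2) Perturbation loss: predict the masked gender words.
    mlm_out = mlm(input_ids=input_ids, attention_mask=enc["attention_mask"], labels=mlm_labels)

    # 3) Build the gender-perturbed example from the MLM's argmax predictions
    #    (hard replacement, so the classification loss does not backpropagate through the tokens).
    perturbed_ids = input_ids.clone()
    masked = mlm_labels != -100
    perturbed_ids[masked] = mlm_out.logits.argmax(dim=-1)[masked]

    # 4) Fine-tuning loss: classify the perturbed example against the ORIGINAL ground-truth label.
    clf_out = clf(input_ids=perturbed_ids, attention_mask=enc["attention_mask"],
                  labels=torch.tensor([label]))

    # 5) Joint loss, Eq. (1): alpha * L_perturb + (1 - alpha) * L_fine-tuning.
    return alpha * mlm_out.loss + (1 - alpha) * clf_out.loss
```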
## 4.1 Perturbation Analysis The PLMs achieved state-of-the-art performance on the downstream tasks datasets by applying the MLM for the example perturbation in pre-training phase. Thus we hypothesize that the MLM can generate realistic gender-perturbed examples that can considerably modify the gender relation between the input tokens without affecting the label. However, there is a concern that the pre-trained MLM transfers the gender bias through the perturbation process. To address this concern, we investigate the predicted tokens that the pre-trained MLM replaces with the gender-words. We randomly select 300 examples from training dataset including 150 examples with feminine words and 150 examples with masculine words. Based on these 300 examples, we observe five types of perturbation as shown through some examples in Table 3: - **Neutral**; replace the gender-words with neutral word such as people, they, their, and etc. - **Convert-gender**; replace the gender-words with opposite gender. the word "he" change to "she". - **Same-gender**; replace the gender-words with the same gender. change the word "man" to "boy". - **Deleting**; replace the gender-words with unknown token ([UNK]). In 300 examples, it only happens when there are several masked tokens. - **Identical**; replace the gender-word with itself. It mostly happens when there is only one gender-word. In our investigation with 300 examples, we had 46% Neutral, 29% Identical, 17% Convert-gender, 7% Same-gender, and 1% Deleting perturbation. As illustrated in Table 3, Gender-tuning does not make a meaningful change in identical and samegender perturbation. These examples likely conform to the gender biases in the MLM. Suppose identical, or same-gender perturbation gets the correct output from the perturbation process (L*perturb.* ≈ 0). In this case, the only way to learn the biases in the MLM is to get the correct output from finetuning step and joint-loss close to zero. This issue stops the MLM and fine-tuning model from further update. However, joint-loss plays an essential role SST-2 BERT RoBERTa Origin Gender-tuning/ Gender-tuning/ Gender-tuning Origin Gender-tuning/ Gender-tuning/ Gender-tuning no-joint-train no-joint-loss (ours) no-joint-train no-joint-loss (ours) Names, Career/Family 0.03 0.22 0.16 **0.03** 0.07 0.18 0.62 **0.14** Terms, Career/Family 0.01 0.31 0.37 **0.16** 0.33 0.09 0.41 **0.01** Terms, Math/Art 0.21 0.75 0.49 **0.39** 1.32 0.99 1.02 **0.57** Names, Math/Art 1.15 0.55 0.56 **0.31** 1.34 **0.92** 0.97 1.11 Terms, Science/Art 0.10 0.01 0.32 **0.07** 0.25 0.76 **0.00** 0.47 Names, Science/Art 0.22 **0.07** 0.47 0.10 0.47 0.76 0.56 **0.47** Avg. Abs. e-size 0.291 0.318 0.395 **0.176** 0.630 0.616 0.596 **0.461** Accuracy 91.97 **92.88** 92.66 92.10 93.57 **94.38** 92.54 93.69 CoLA Names, Career/Family 0.09 0.37 **0.04** 0.09 0.29 0.07 0.16 **0.05** Terms, Career/Family 0.19 0.06 0.11 **0.03** 0.26 0.16 0.11 **0.00** Terms, Math/Art 0.26 0.89 0.96 **0.08** 0.06 0.41 0.29 **0.15** Names, Math/Art 0.15 1.03 0.82 **0.24** 0.06 0.22 0.87 **0.07** Terms, Science/Art 0.42 0.47 0.19 **0.07** 0.32 **0.42** 0.80 0.70 Names, Science/Art 0.03 0.49 0.32 **0.07** 0.27 0.36 0.88 **0.03** Avg. Abs. 
e-size 0.181 0.551 0.406 **0.096** 0.210 0.273 0.518 **0.166** Accuracy 56.51 56.32 **56.70** 56.60 57.35 **62.11** 57.27 58.54 QNLI Names, Career/Family 0.26 0.03 0.15 **0.02** 0.04 **0.12** 0.14 0.17 Terms, Career/Family 0.15 0.20 0.41 **0.04** 0.22 0.31 0.11 **0.04** Terms, Math/Art 0.58 0.47 **0.03** 0.08 0.53 0.50 0.62 **0.09** Names, Math/Art 0.58 0.94 **0.04** 0.16 0.48 0.38 0.42 **0.03** Terms, Science/Art 0.08 **0.12** 0.27 0.21 0.47 **0.25** 0.50 0.53 Names, Science/Art 0.52 0.54 0.11 **0.04** 0.36 **0.03** 0.20 0.52 Avg. Abs. e-size 0.365 0.383 0.168 **0.091** 0.350 0.265 0.331 **0.230** Accuracy 91.30 **91.57** 91.28 91.32 92.03 **92.58** 91.69 92.09 in alleviating learning gender bias from identical and same-gender perturbations. To clarify the role of joint-loss in overcoming above problem, we investigated fine-tuning output on identical and same-gender perturbations. We observed that fine-tuning gets the incorrect output from 60% of the identical and 75% of the samegender perturbation. Thus these examples return to training iteration because their joint-loss is large enough to update the language models and perform a new training iteration. New training iteration means re-perturbing and re-fine-tuning result on these examples. Therefore, training based on both training steps' loss and computing joint-loss persistently prevents learning from gender bias in MLM as well as the PLM. ## 5 Ablation We conduct the ablation experiments to demonstrate the effectiveness of Gender-tuning components, including 1) joint-training process and 2) joint-loss in Gender-tuning's debiasing performance (Table 4). The experiments are as follows: 1) **Gender-tuning**no−joint−*training*: first we used MLM to train the PLM through the gender-word perturbation on downstream task datasets. Then we fine-tuned the PLM on the downstream task dataset. 2) **Gender-tuning**no−joint−*loss*: we train Gender-tuning based on only fine-tuning loss. In both PLMs, results illustrate that Gendertuning is more effective for reducing the average gender bias than in two ablation experiments. The two ablation experiments magnify the bias scores noticeably, while Gender-tuning gains the smallest SEAT absolute effect size, especially in the BERT model. Results also show that the ablation experiment setups that do not benefit from jointloss cannot update the MLM and PLM when the output of the fine-tuning classification is correct (Lf ine−*tuning* ≈ 0), even though the correct output likely bases on the gender biases in the PLMs. ## 6 Conclusion We propose a novel approach for debiasing PLMs through fine-tuning on downstream tasks' datasets. The proposed method is an aggregation of biasword perturbation using MLM and fine-tuning classification. In this study, we evaluated our proposed method on gender biases and named it *Gendertuning*. Comprehensive experiments prove that Gender-tuning outperforms two state-of-the-art debiasing methods while improving the performance of the PLMs on downstream tasks. The key advantage of our approach is using the fine-tuning setting that allows the training process to be carried out without needing additional training processes or datasets. Also, it makes Gender-tuning a plug-andplay debiasing tool deployable to any PLMs. ## 7 Limitation Although Gender-tuning succeeds in reducing the gender bias scores in the pre-trained language models, there are some limitations to performing debiasing. Gender-tuning only works on gender-related words list. 
Thus Gender-tuning cannot cover the probable gender biases that do not exist in its' list. We defer the gender-related word list modification to future research. All our experiments ran on English language texts with English gender-word morphology. ## References Soumya Barikeri, Anne Lauscher, Ivan Vulic, and Goran ´ Glavaš. 2021. Redditbias: A real-world resource for bias evaluation and debiasing of conversational language models. *arXiv preprint arXiv:2106.03521*. Emily M Bender, Timnit Gebru, Angelina McMillanMajor, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In *Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency*, pages 610–623. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. *Advances* in neural information processing systems, 29:4349– 4357. Shikha Bordia and Samuel R Bowman. 2019. Identifying and reducing gender bias in word-level language models. *NAACL HLT 2019*, page 7. Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, and Richard Zemel. 2019. Understanding the origins of bias in word embeddings. In International conference on machine learning, pages 803–811. PMLR. Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. *Science*, 356(6334):183–186. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pages 1597–1607. PMLR. Pengyu Cheng, Weituo Hao, Siyang Yuan, Shijing Si, and Lawrence Carin. 2021. Fairfil: Contrastive neural debiasing method for pretrained text encoders. In International Conference on Learning Representations. Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. Sunipa Dev, Tao Li, Jeff M Phillips, and Vivek Srikumar. 2020. On measuring and mitigating biased inferences of word embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7659–7666. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171– 4186. Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, and Jason Weston. 2020. Queens are powerful too: Mitigating gender bias in dialogue generation. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 8173–8188. Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigating unintended bias in text classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67–73. Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. 2020. Realtoxicityprompts: Evaluating neural toxic degeneration in language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 3356–3369. Masahiro Kaneko and Danushka Bollegala. 2019. Gender-preserving debiasing for pre-trained word embeddings. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1641–1650. Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 166–172. Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and LouisPhilippe Morency. 2020. Towards debiasing sentence representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Thomas Manzini, Lim Yao Chong, Alan W Black, and Yulia Tsvetkov. 2019. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 615–621. Chandler May, Alex Wang, Shikha Bordia, Samuel R Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In NAACL-HLT (1). Nicholas Meade, Elinor Poole-Dayan, and Siva Reddy. 2022. An empirical survey of the effectiveness of debiasing techniques for pre-trained language models. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 1878–1898. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21:1– 67. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In *EMNLP*. Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7237–7256. Timo Schick, Sahana Udupa, and Hinrich Schütze. 2021. Self-diagnosis and self-debiasing: A proposal for reducing corpus-based bias in nlp. *arXiv preprint* arXiv:2103.00453. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 conference on empirical methods in natural language processing*, pages 1631–1642. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. Glue: A multi-task benchmark and analysis platform for natural language understanding. In *7th International Conference on Learning Representations,* ICLR 2019. Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641. 
Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Ed Chi, and Slav Petrov. 2020. Measuring and reducing gendered correlations in pre-trained models. arXiv preprint arXiv:2010.06032. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai-Wei Chang. 2018. Learning gender-neutral word embeddings. In *EMNLP*. Ran Zmigrod, Sabrina J Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1651–1661. ## A Appendix A.1 Baselines For comparison purposes, we chose two stateof-the-art baselines which focus on debiasing sentence-level pre-trained text encoders in PLMs. ## A.1.1 Sent-Debias SENT-DEBIAS (Liang et al., 2020) is an extension of the HARD-DEBIAS method (Bolukbasi et al., 2016) to debias sentences for both binary and multi-class bias attributes spanning gender and religion. The key advantage of Sent-D is the contextualization step in which bias-attribute words are converted into bias-attribute sentences by using a diverse set of sentence templates from text corpora. Sent-D is a four-step process that involves: identifying words that exhibit biased attributes, contextualizing them in sentences that contain these biases, creating sentence representations, estimating the subspace of the bias represented in the sentences, and debiasing general sentences by removing the projection onto this subspace. ## A.1.2 Fairfil FairF (Cheng et al., 2021) is the first neural debiasing method for pretrained sentence encoders. For a given pretrained encoder, FairF learns a fair filter (FairFil) network, whose inputs are the original embedding of the encoder, and outputs are the debiased embedding. Inspired by the multi-view contrastive learning (Chen et al., 2020), for each training sentence, FairF first generates an augmentation that has the same semantic meaning but in a different potential bias direction. FairFil is contrastively trained by maximizing the mutual information between the debiased embeddings of the original sentences and corresponding augmentations. To further eliminate bias from sensitive words in sentences, FairF uses debiasing regularizer, which minimizes the mutual information between debiased embeddings and the sensitive words' embeddings. ## A.2 Bias Evaluation Metric Following the prior studies (Sent-D and FairF), we use Sentence Encoder Association Test (SEAT) (May et al., 2019) to measure the gender bias scores in the pre-trained language models that trained using Gender-tuning. SEAT extended the Word Embedding Association Test (WEAT; caliskan2017semantics) to sentence-level representations. WEAT compares the distance of two sets. 
Two sets of target words (e.g., {*family, child, parent,...*} and {*work, office, profession,...*} ) that characterize particular concepts *f amily* and *career* respectively. Two sets of attribute words (e.g., {*man,* he, him,...} and {*woman, she, her,...*} ) that characterize a type of bias. WEAT evaluates whether the representations for words from one particular attribute word set tend to be more closely associated with the representations for words from one particular target word set. For instance, if the *female* attribute words listed above tend to be more closely associated with the *f amily* target words, this may indicate bias within the word representations. Let's denote A and B as sets of attribute words and X and Y the set of target words. As described in (Caliskan et al., 2017) the WEAT test statistic is: $$s(X,Y,A,B)=\sum_{x\in X}s(x,A,B)-\sum_{y\in Y}s(y,A,B)\tag{2}$$ where for a specific word w , s(*w, A, B*) is defined as the difference between w's mean cosine similarity with the words from A and w's mean cosine similarity with the word from B. They report an effective size given by: $$d={\frac{\mu([s(x,A,B)]_{x\in X}-\mu([s(y,A,B)]_{y\in Y})}{\sigma([s(t,X,Y)]_{t\in A\cup B})}}\tag{3}$$ where µ and σ denote the mean and standard deviation respectively. Hence, an effect size closer to zero represents smaller degree of bias in the word representation. The SEAT test extended WEAT by replacing the word with a collection of template sentences (i.e., *"this is a [word]", "that is* a [word]"). Then the WEAT test statistic can be computed on a given sets of sentences including attribute and target words using sentence representations from a language model. ## A.3 Plms Two widely used pre-trained language models have been chosen for this study, BERT-base (Devlin et al., 2019)and RoBERTa-base (Liu et al., 2019). BERT-base is a bidirectional encoder with 12 layers and 110M parameters that is pre-trained on 16GB of text. RoBERTa-base has almost the same architecture as BERT but is pre-trained on ten times more data (160GB) with significantly more pretraining steps than BERT. ## A.4 Datasets We conducted empirical studies on the following three tasks from the GLUE3 benchmark (Wang et al., 2019): (1) **SST-2**: Stanford Sentiment Treebank is used for binary classification for sentences extracted from movie reviews (Socher et al., 2013). It contains 67K training sentences. (2) **CoLA**: Corpus of Linguistic Acceptability (Warstadt et al., 2019) consists of English acceptability judgment. CoLA contains almost 9K training examples. (3) **QNLI**: Question Natural Language Inference (Wang et al., 2018) is a QA dataset which is derived from the Stanford Question Answering Dataset (Rajpurkar et al., 2016) and used for binary classification. QNLI contains 108K training pairs. Also, we use the feminine and masculine word lists created by (Zhao et al., 2018) for gender-word perturbation in Gender-tuning. ${}^{3}$https://gluebenchmark.com/tasks. ## A.5 Hyperparameters The hyperparameters of the models, except batch size, are set to their default4 values (e.g., epoch = 3, learning-rate = 2 × 10−5, and etc.). After trying several trials run, the batch size has been selected among {8, 16, 32}. We empirically selected the optimal value for α by a grid search in 0 < α < 1 with 0.1 increments. For each downstream task, the best value of α sets to 0.7. All experiments were performed with three training epochs and using an NVIDIA V100 GPU. 
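As a companion to the SEAT/WEAT description in Appendix A.2, the snippet below sketches the effect-size computation of Eqs. (2)–(3) over pre-computed embeddings. It follows the standard WEAT formulation of Caliskan et al. (2017), pooling the standard deviation over the target-set associations; it is a minimal NumPy illustration rather than the exact evaluation code used in the paper, and the function names and random toy embeddings are ours.

```python
import numpy as np

def association(w, A, B):
    """s(w, A, B): difference of w's mean cosine similarity with attribute sets A and B (Eq. 2)."""
    cos = lambda u, v: float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Effect size d: normalized difference of the mean associations of the two target sets."""
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    pooled_std = np.std(s_X + s_Y, ddof=1)   # std over all target-word associations
    return (np.mean(s_X) - np.mean(s_Y)) / pooled_std

# Toy usage with random 4-dim "sentence embeddings"; in SEAT these would be encoder
# outputs of templated sentences such as "this is a [word]".
rng = np.random.default_rng(0)
X, Y, A, B = (rng.normal(size=(5, 4)) for _ in range(4))
print(round(weat_effect_size(X, Y, A, B), 3))
```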
## A.6 Related Works

**Debiasing Database.** The most straightforward approach for reducing the social biases in the training corpora is bias neutralization, in which the training corpus is directly re-balanced by swapping or removing bias-related words or through counterfactual data augmentation (CDA) (Zmigrod et al., 2019; Dinan et al., 2020; Webster et al., 2020; Dev et al., 2020; Barikeri et al., 2021). Also, Gehman et al. (2020) proposed domain-adaptive pre-training on unbiased corpora. Although the results showed that these proposed methods mitigated the social biases in the pre-trained models, they need to be re-trained on large-scale corpora. For example, Webster et al. (2020) proposed a CDA that needs an additional 100k steps of training on the augmented dataset. Data augmentation and collecting a large-scale unbiased corpus are both computationally costly.

**Debiasing Embedding.** There are several solutions for debiasing static word embeddings (Bolukbasi et al., 2016; Kaneko and Bollegala, 2019; Manzini et al., 2019; Ravfogel et al., 2020) and for debiasing contextualized word embeddings (Caliskan et al., 2017; Brunet et al., 2019) and sentence embeddings (Liang et al., 2020; Cheng et al., 2021). Compared to debiasing static word embeddings, where the semantic representation of a word is limited to a single vector, contextualized word/sentence embedding models are more challenging (Kaneko and Bollegala, 2019). Since the key to the pre-trained language models' success is their powerful embedding layers (Liang et al., 2020), debiasing the embeddings might affect the transfer of accurate information and the performance of these models on downstream tasks. Also, they need some pre-training to debias the embedding layer before fine-tuning on downstream tasks.

4https://github.com/huggingface/transformers

## ACL 2023 Responsible NLP Checklist

## A For Every Submission:

✓ A1. Did you describe the limitations of your work? 7

A2. Did you discuss any potential risks of your work? Not applicable. Left blank.

✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank.

✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank.

B1. Did you cite the creators of artifacts you used? Not applicable. Left blank.

B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank.

B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank.

B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank.

B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank.

✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created?
Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C ✓ **Did You Run Computational Experiments?** 3 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 3 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 3 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
zhou-etal-2023-textobfuscator
TextObfuscator: Making Pre-trained Language Model a Privacy Protector via Obfuscating Word Representations
https://aclanthology.org/2023.findings-acl.337
In real-world applications, pre-trained language models are typically deployed on the cloud, allowing clients to upload data and perform compute-intensive inference remotely. To avoid sharing sensitive data directly with service providers, clients can upload numerical representations rather than plain text to the cloud. However, recent text reconstruction techniques have demonstrated that it is possible to transform representations into original words, suggesting that privacy risk remains. In this paper, we propose TextObfuscator, a novel framework for protecting inference privacy by applying random perturbations to clustered representations. The random perturbations make the representations indistinguishable from surrounding clustered representations, thus obscuring word information while retaining the original word functionality. To achieve this, we utilize prototypes to learn clustered representation, where tokens of similar functionality are encouraged to be closer to the same prototype during training. Additionally, we design different methods to find prototypes for token-level and sentence-level tasks, which can improve performance by incorporating semantic and task information. Experimental results on token and sentence classification tasks show that TextObfuscator achieves improvement over compared methods without increasing inference cost.
# Textobfuscator: Making Pre-Trained Language Model A Privacy Protector Via Obfuscating Word Representations Xin Zhou1∗, Yi Lu5∗†, Ruotian Ma1**, Tao Gui**2‡, Yuran Wang4, Yong Ding4, Yibo Zhang4, Qi Zhang1**, Xuanjing Huang**1, 3‡ 1School of Computer Science, Fudan University, Shanghai, China 2Institute of Modern Languages and Linguistics, Fudan University, Shanghai, China 3International Human Phenome Institutes, Shanghai, China 4 Honor Device Co., Ltd 5 School of Computer Science and Engineering, Northeastern University, Shenyang, China {xzhou20, tgui, qz}@fudan.edu.cn, [email protected] ## Abstract In real-world applications, pre-trained language models are typically deployed on the cloud, allowing clients to upload data and perform compute-intensive inference remotely. To avoid sharing sensitive data directly with service providers, clients can upload numerical representations rather than plain text to the cloud. However, recent text reconstruction techniques have demonstrated that it is possible to transform representations into original words, suggesting that privacy risk remains. In this paper, we propose **TextObfuscator**, a novel framework for preserving inference privacy by applying random perturbations to clustered representations. The random perturbations make each word representation indistinguishable from surrounding functionally similar representations, thus obscuring word information while retaining the original word functionality. To achieve this, we utilize prototypes to learn clustered representations, where words of similar functionality are encouraged to be closer to the same prototype during training. Additionally, we design different methods to find prototypes for token-level and sentencelevel tasks, which can improve performance by incorporating semantic and task information. Experimental results on token and sentence classification tasks show that TextObfuscator achieves improvement over compared methods without increasing inference cost. ## 1 Introduction Pre-trained language models (PLMs) have achieved impressive performance on various NLP downstream tasks (Devlin et al., 2018; Brown et al., ![0_image_0.png](0_image_0.png) 2020; Qiu et al., 2020), but they also come with increased model size and significant computational requirements. In real-world applications, these large-scale models are often offered as an inference service (Altman, 2022). The service providers train PLMs for target tasks and deploy them on the cloud. Clients who lack high-computation resources can query these service with their input and obtains the desired responses (DALE, 2015). Unfortunately, current inference services are plagued by serious privacy concerns (Lehmkuhl et al., 2021). Client data may contain sensitive information such as names, addresses, and even trade secrets, sharing such information with service providers compromises the privacy of clients. To address privacy risks, a naive solution is for clients to generate shallow representations on their devices and upload numerical representations to the cloud for subsequent inference, as shown in Figure 1. However, recent text reconstruction methods (Song and Raghunathan, 2020; Pan et al., 2020) have shown that word representations can be easily transformed into raw texts, indicating privacy risk remains. Inference service without privacy guarantees is not only unacceptable for clients but also illegal for service providers1. Recent literature has proposed various methods to mitigate privacy leakage in representation. 
For example, Chen et al. (2022b) and Hao et al. (2022) have applied homomorphic encryption (Gentry, 2009) to transformer-based models, which enables computations to be performed on encrypted data. But homomorphic encryption often incurs significant **computation time and communication costs** (Gilad-Bachrach et al., 2016), making it impractical for real-world applications. Alternatively, several studies have adapted differential privacy (Lyu et al., 2020a; Hoory et al., 2021; Yue et al., 2021a) and adversarial training (Li et al., 2018; Coavoux et al., 2018; Plant et al., 2021) to reduce the privacy information contained in representations. However, in our scenario, the privacy information pertains to each word, and **reducing word information in the** shallow layer can harm subsequent inference, thus degrading performance (Jawahar et al., 2019), especially in token-level tasks. In this paper, we propose TextObfuscator, a novel paradigm for privacy-preserving inference. The key idea of our method is to learn private representations that **obscure original word information** while preserving original word functionality. Specifically, we find prototypes for each word and encourage functionally similar words close to the same prototype during training. Subsequently, random perturbations are applied to these clustered representations, which yields two key benefits. Firstly, it obscures original word information as the perturbed representations are indistinguishable from those clustered around them, making it harder for privacy attackers to reconstruct original words, thus protecting privacy. Secondly, it maintains the original word functionality as the perturbed representations stay within the same functional clusters, leading to improved performance. To learn clustered representations, we have designed different methods to find suitable prototypes for token and sentence classification. For tokenlevel tasks, each word is assigned a label that serves as a prototype indicator (Snell et al., 2017). But for sentence-level tasks, there is no explicit prototype indicator for words. Therefore, the clustering algorithm is used for word assignment. Based on clustering results, we take the cluster centers as prototypes and assign semantically similar words to the same prototype. However, semantic-based clustering may lead to keywords from different classes being clustered together, hindering target tasks. For example, if "good" and "bad" play the same role, the sentiment of a sentence will be ambiguous. To address this, we utilize TF-IDF to identify keywords from different classes and use these keywords to redivide the clustering results. Our codes are publicly available at https://github.com/ xzhou20/TextObfuscator. Our contribution can be summarized as follows: - We propose TextObfuscator, a novel representation learning method for privacy-preserving inference by obfuscating representations. - We propose to combine semantic and task information for finding prototypes, which leads to improved performance. - We evaluate TextObfuscator on several NLP tasks including token and sentence classification, and demonstrate its effectiveness in protecting privacy while improving performance. 
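Before the formal treatment in Sections 2–3, the following minimal PyTorch-style sketch previews the two ingredients described above: pulling each word representation toward its assigned prototype during training (the clustering losses formalized later in Eqs. (3)–(4)) and adding random Laplace noise before a representation leaves the client (Eq. (5)). Tensor names and shapes are assumptions for illustration, not the authors' implementation; batching and masking are omitted.

```python
import torch

def clustering_terms(H, proto_ids, prototypes):
    # H: (n, d) word representations; proto_ids: (n,) assigned prototype index;
    # prototypes: (p, d) prototype vectors (hypothetical shapes).
    # Pull each representation toward its assigned prototype (center-style loss).
    l_close = 0.5 * ((H - prototypes[proto_ids]) ** 2).sum()
    # Average squared distance between distinct prototypes, used to keep
    # prototypes from collapsing onto a single point; it is combined with the
    # task and close losses as described in the paper.
    p = prototypes.size(0)
    pair_d2 = torch.cdist(prototypes, prototypes).pow(2).triu(diagonal=1).sum()
    l_away = 2.0 * pair_d2 / (p * (p - 1))
    return l_close, l_away

def perturb(H, scale=1.0):
    # Shift each clustered representation by Laplace noise before uploading.
    noise = torch.distributions.Laplace(0.0, scale).sample(H.shape).to(H.device)
    return H + noise
```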
## 2 Preliminaries

## 2.1 Inference As Service

Suppose a client wants to query the inference service without leaking privacy. The client can perform acceptable computations on their device (such as intelligent chips, smartphones, and personal computers) to obtain the word representations H = fθc (X), where fθc is the client model released by the service provider. Then the numerical H, instead of the plain text X, is uploaded to the cloud. The PLM fθs deployed on the server performs the subsequent compute-intensive inference Y = fθs (H) and sends the predictions Y back to the client. In this scenario, only representations are shared with service providers, avoiding the leakage of text.

## 2.2 Privacy Threats

Privacy threats in the inference phase mainly come from service providers, who have access to the client model fθc , the server model fθs , and the client's word representations H. Recent studies (Song and Raghunathan, 2020) have shown that the word information in the representation is sufficient to reconstruct the original text. For example, the shallow representations are usually similar to their embeddings, so privacy attackers can compare the representation and embedding matrices to identify the most similar word. Furthermore, service providers can generate word representations via the client model and train a powerful inversion model to directly transform the representations back into the original text X = fθinv (H), even if privacy-preserving methods have been applied on H. The challenge in private inference lies in ensuring that fθinv cannot learn useful word information from H to reconstruct X, while the information in H remains sufficient for subsequent inference.

![2_image_0.png](2_image_0.png)

## 3 Our Method

## 3.1 Overview

In this section, we present TextObfuscator, a novel framework for privacy-preserving inference. Our method learns private representations from a new perspective, which aims to obfuscate rather than reduce word information. The overall framework of TextObfuscator is shown in Figure 2. We first find prototypes for each word using semantic and task information; these prototypes are used to encourage functionally similar words to cluster together in the training phase. Then random perturbations are applied to each representation, making them indistinguishable from the surrounding clustered word representations. Even if privacy attackers obtain the representations, they cannot establish the correct connection between the obfuscated representations and the original words. Furthermore, these representations maintain the original word functionality as they remain close to their prototype. Next, we introduce how to find prototypes and learn private representations.

## 3.2 Find Task-Related Prototypes

In step one, we introduce two crucial components. One is M(xi) = pxi , which assigns word xi to its prototype pxi , referred to as **word assignment**. The other is obtaining the initial prototypes P = {pi} np i=1, referred to as **prototype initialization**. We enhance word assignment and prototype initialization from both the semantic and the task perspective, as the function of a word is not solely determined by its semantics, but also by its role in the target task.

## 3.2.1 Token-Level Task

In the token-level task, each word is assigned a label, which corresponds to the initial definition of the prototype (Snell et al., 2017; Ji et al., 2022). As a result, we take the label of a word as the prototype indicator for token-level tasks.
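A minimal sketch of this label-as-prototype-indicator idea is given below: token representations are grouped by gold label and averaged, which the next paragraph formalizes as Eq. (1). The tensor names are assumptions for illustration, not the authors' code.

```python
import torch

def init_token_prototypes(reps, labels, num_labels):
    # reps:   (n, d) representations of all labelled tokens from the client model
    # labels: (n,)   gold label id of each token (hypothetical inputs)
    prototypes = torch.zeros(num_labels, reps.size(1))
    for c in range(num_labels):
        mask = labels == c
        if mask.any():
            # Prototype of class c = mean representation of its tokens.
            prototypes[c] = reps[mask].mean(dim=0)
    return prototypes
```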
Given a token-level dataset Dt = {(Xi, Yi)} N i=1, where (Xi, Yi) = {xj , yj} n j=1, we first use the client model fθc to traverse the dataset Dt and obtain the representations {Hi} N i=1 where Hi = fθc (Xi). These contextual representations can provide semantic information for prototype initialization. Then we assign words with the identical label to the same prototype and take the average representation of words within a particular class as the initial prototype. Suppose there are k representations belong to the label c, the prototype pc of label c can be represented as: $$\mathbf{p}_{c}={\frac{1}{k}}\sum_{j=1}^{k}\mathbf{h}_{j}^{c},\qquad\qquad(1)$$ where h c j is the j-th representation of label c and k is the number of representations in label c. In this way, we leverage the task information from the label to guide the word assignment M, and subsequently utilize the semantic information from the word representations to obtain the prototype initialization P = {pi} np i=1, where np denotes the number of labels. ## 3.2.2 Sentence-Level Task Unlike token-level tasks, there are no natural prototype indicators for words in the sentence-level dataset. To tackle this problem, we perform a clustering algorithm on the representations, using the clustering results to indicate word assignment and prototype initialization. To perform clustering, we need to prepare a representation for each word. Similar to the token-level task, we first use client model fθc to traverse sentence-level dataset Ds and obtain {Hi} N i=1. For word xithat appears repeatedly in different contexts, we calculate the average of their representations to obtain the final word representation xˆi = 1 k Pk j=1 h x i , where h xi jis the j-th word representation of word xi and k is the number of words xi occurs in Ds. Finally, we get Xˆ = {xˆi} nx i=1 where nx is the number of unique words in Ds, and perform K-Means on Xˆ : $${\mathcal{M}},{\mathcal{P}}=K m e a n s({\hat{\mathbf{X}}}),$$ M,P = *Kmeans*(Xˆ ), (2) the clustering algorithm assigns semantically similar words to the same cluster, thus completing the word assignment M. The centroid of the clusters is used as the prototypes initialization P = {pi} np i=1 where np here is the pre-defined number of clusters. However, it is not appropriate to assign all semantically similar words to the same cluster. For example, in sentiment analysis, the word representations of "good" and "bad" may be similar in representation space and are often assigned to the same cluster, but if they play the same role in a sentence, it can lead to ambiguity in the sentiment of the sentence. We refer to words that are highly relevant to specific classes as task-related words, and it is important to ensure that task-related words from different classes are assigned to different prototypes. To identify task-related words for each class, we use the TF-IDF (Salton and Buckley, 1988), a numerical statistic that reflects a word's importance in a document. In this case, we treat all sentences within a class as one document and use TF-IDF to find the keywords for each class. Subsequently, the resulting keywords are used to re-divide M and update P, the algorithm of re-division is shown in Appendix A.3. ## 3.3 Private Representation Training In the training phase, we use prototypes to encourage functionally similar word representations to be clustered in the representation space and apply random perturbation for preserving privacy. Clustering Loss. 
Given the input text X = {xi} n i=1 and the word assignment M, we first use the client model fθc to get the word representations H = {hi} n i=1, then use the center loss (Wen et al., 2016) to make each representation close to its prototype:

$$\mathcal{L}_{close}=\frac{1}{2}\sum_{i=1}^{n}||\mathbf{h}_{i}-\mathbf{p}_{x_{i}}||_{2}^{2},\tag{3}$$

where pxi = M(xi) is the prototype of xi. Furthermore, we also push different prototypes apart to prevent them from collapsing during training (Li et al., 2021a), thus enhancing task performance. The prototype distance loss is formulated as:

$$\mathcal{L}_{away}=\frac{2}{n_{p}(n_{p}-1)}\sum_{i=1}^{n_{p}}\sum_{j=i+1}^{n_{p}}||\mathbf{p}_{i}-\mathbf{p}_{j}||_{2}^{2},\tag{4}$$

where np is the number of prototypes. We refer to L*close* and L*away* together as L*cluster*.

Random Perturbation. We apply a random perturbation to each representation hi, which shifts the representation to another point near its prototype. Following Plant et al. (2021), we take Laplace noise as the random perturbation. The perturbed representations are sent to the server model fθs for subsequent computation:

$$\hat{\mathbf{Y}}=f_{\theta_{s}}(\mathbf{H}+Lap(\epsilon)),\tag{5}$$

where Yˆ denotes the prediction results and ϵ is a hyperparameter that controls the scale of the noise. The perturbation is applied in both the training and inference phases. Because each perturbation is random, it is difficult for a privacy attacker to establish a link between the perturbed representation and the original word. Perturbed representations deviate from the original words but still serve their original functions, thus preserving privacy while maintaining performance.

Overall Loss. The supervised task loss L*task* is learned jointly in a multi-task learning manner; the overall loss for our model is:

$$\mathcal{L}=\mathcal{L}_{task}+\gamma_{1}\mathcal{L}_{close}+\gamma_{2}\mathcal{L}_{away},\tag{6}$$

where γ1 and γ2 are weighting factors. Inspired by Li et al. (2021a), we perform the clustering algorithm at the beginning of each epoch to make the clustering results more accurate. During the training phase, the client model and the server model are optimized together by service providers. During the inference phase, the client performs lightweight inference using the client model, then shares obfuscated representations with service providers and potential privacy attackers. Aside from the perturbations, the inference phase of our method is the same as for standard PLMs; thus, we do not introduce additional inference time.

## 4 Experiment

## 4.1 Datasets

To verify the effectiveness of our methods, we conduct experiments on both token classification and sentence classification tasks, covering named entity recognition: **CoNLL2003** (Tjong Kim Sang and De Meulder, 2003) and **OntoNotes5.0** (Weischedel et al., 2013), sentiment analysis: **SST-2** (Socher et al., 2013), and topic classification: **AGNEWS** (Zhang et al., 2015). These tasks are close to real-world applications, which can verify the actual utility of our methods. The statistics of the datasets are shown in Appendix A.1.

## 4.2 Baselines

## 4.2.1 Attack Methods

We use three recently proposed text reconstruction methods for privacy attacks. **KNN-Attack** (Qu et al., 2021) computes the distance between each representation and a public word embedding matrix and takes the nearest word in the embedding matrix as the attack result. The attacker can be anyone who has access to the client's representation.
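As a rough illustration of the KNN-Attack just described (and of the distance computation noted in Appendix A.4), the sketch below maps each intercepted representation to the nearest row of a public embedding matrix; variable names are assumptions, not the attack's released code.

```python
import torch

def knn_attack(H, embedding_matrix):
    # H:                (n, d) intercepted client representations
    # embedding_matrix: (V, d) public word-embedding matrix of the same PLM
    dists = torch.cdist(H, embedding_matrix)  # (n, V) Euclidean distances
    return dists.argmin(dim=-1)               # nearest-neighbour vocabulary ids
```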
Inversion-Attack (Höhmann et al., 2021) requires the attacker to train an inversion model, which directly transforms the client representation to a word in a one-to-one manner. The attacker can be the service provider with access to the client and server model to generate training data for the inversion model. **MLC-Attack** (Song and Raghunathan, 2020) also trains an inversion model like Inversion-Attack, but it runs in a multi-label classification manner and predicts a set of words in the sentence independent of their word ordering. ## 4.2.2 Defence Methods We compare our **TextObfuscator** with three representative privacy-preserving methods and standard Fine-tune (Devlin et al., 2018). **DPNR** (Lyu et al., 2020b) uses differential privacy and word dropout to provide a privacy guarantee. **CAPE** (Plant et al., 2021) further adopts differential privacy and adversarial training to reduce privacy information in representation. **SanText+** (Yue et al., 2021b) replaces the sensitive words in plain text based on differential privacy and word frequency. ## 4.3 Privacy Metrics TopK is a token-level metric that measures the percentage of correct words in the attacker's top k predictions. **RougeL** (Lin, 2004) is a generation metric that measures the overlap between two sentences. We follow Gupta et al. (2022) and take it as a sentence-level metric to measure the coherence of attack results. Set is a metric specific to MLCAttack, which quantifies the proportion of words in original sentence that are present in prediction set. Details of metrics are shown in Appendix A.2. ## 4.4 Experimental Settings In our experiments, all methods are implemented based on roberta*base* (Liu et al., 2019). We divide the model into a smaller client model fθc with three transformer layers and a large server model fθs with the remaining nine transformer layers. The privacy attack methods are all performed in the output representations of fθc , which will be shared with service providers and under the risk of privacy leakage. For the privacy defence methods, DPNR, CAPE, and our TextObfuscator are applied to the output representation of fθc , and SanText+ is applied to the input text directly. 
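To illustrate the client/server split described above (a three-layer client model and a nine-layer server model), here is a minimal sketch using the Hugging Face roberta-base encoder; it ignores attention masks, padding, and the task head for brevity and is not the authors' implementation.

```python
import torch
from transformers import AutoModel, AutoTokenizer

NUM_CLIENT_LAYERS = 3  # assumption from Section 4.4: 3 client layers, 9 server layers

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

@torch.no_grad()
def client_forward(text):
    # Client side: embeddings + first transformer layers produce the shared H.
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"]
    hidden = model.embeddings(input_ids)
    for layer in model.encoder.layer[:NUM_CLIENT_LAYERS]:
        hidden = layer(hidden)[0]   # each layer returns a tuple; [0] is hidden states
    return hidden                   # representations uploaded to the cloud

@torch.no_grad()
def server_forward(hidden):
    # Server side: remaining layers perform the compute-intensive inference.
    for layer in model.encoder.layer[NUM_CLIENT_LAYERS:]:
        hidden = layer(hidden)[0]
    return hidden
```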
The implementation details and hyperparameters are shown in Appendix A.4.

Table 1: Main results of our method and all baselines. ↑: higher is better; ↓: lower is better.

| Dataset | Method | Acc/F1 ↑ | KNN Top1 ↓ | KNN Top5 ↓ | KNN Rouge ↓ | Inversion Top1 ↓ | Inversion Top5 ↓ | Inversion Rouge ↓ | MLC Set ↓ |
|---|---|---|---|---|---|---|---|---|---|
| CoNLL2003 | Fine-tune | 91.72 | 87.33 | 97.72 | 90.89 | 99.99 | 100 | 99.90 | 41.41 |
| CoNLL2003 | DPNR | 79.14 | 0.03 | 0.47 | 0.99 | 14.60 | 28.91 | 11.54 | 10.21 |
| CoNLL2003 | CAPE | 84.47 | 0.03 | 0.51 | 0.82 | 10.39 | 22.28 | 8.94 | 9.37 |
| CoNLL2003 | SanText+ | 76.94 | 60.59 | 75.29 | 50.80 | 81.54 | 87.68 | 69.18 | 13.36 |
| CoNLL2003 | Ours | 89.11 | 0.24 | 1.42 | 1.01 | 6.18 | 18.56 | 5.44 | 8.32 |
| OntoNotes5 | Fine-tune | 89.68 | 80.18 | 98.17 | 92.65 | 100 | 100 | 100 | 71.13 |
| OntoNotes5 | DPNR | 72.38 | 0.07 | 0.73 | 1.72 | 18.18 | 33.94 | 17.62 | 15.87 |
| OntoNotes5 | CAPE | 85.89 | 0.05 | 0.82 | 1.31 | 14.57 | 30.02 | 14.25 | 13.62 |
| OntoNotes5 | SanText+ | 71.57 | 57.35 | 73.40 | 51.21 | 78.99 | 86.07 | 68.05 | 46.90 |
| OntoNotes5 | Ours | 87.17 | 0.68 | 2.31 | 2.13 | 7.97 | 22.22 | 9.88 | 13.62 |
| SST-2 | Fine-tune | 94.38 | 88.21 | 98.75 | 96.04 | 100 | 100 | 100 | 62.09 |
| SST-2 | DPNR | 87.84 | 0.02 | 1.76 | 0.87 | 4.39 | 16.39 | 5.72 | 16.82 |
| SST-2 | CAPE | 89.44 | 0.03 | 1.86 | 0.70 | 5.06 | 16.15 | 6.37 | 17.12 |
| SST-2 | SanText+ | 87.27 | 70.78 | 75.95 | 60.05 | 81.79 | 89.01 | 69.17 | 52.73 |
| SST-2 | Ours | 91.51 | 0.05 | 0.47 | 0.87 | 5.48 | 17.97 | 11.35 | 15.74 |
| AGNEWS | Fine-tune | 94.71 | 89.45 | 98.87 | 96.37 | 100 | 100 | 100 | 86.13 |
| AGNEWS | DPNR | 93.12 | 0.02 | 2.32 | 1.79 | 3.97 | 13.53 | 6.82 | 15.86 |
| AGNEWS | CAPE | 93.99 | 0.02 | 3.41 | 1.58 | 3.39 | 12.60 | 2.22 | 14.26 |
| AGNEWS | SanText+ | 91.92 | 59.31 | 64.58 | 51.57 | 78.20 | 85.11 | 70.86 | 61.36 |
| AGNEWS | Ours | 94.52 | 0.04 | 0.53 | 1.12 | 3.38 | 12.37 | 2.01 | 13.16 |

## 4.5 Main Results

Table 1 shows the main results of our method and all baselines. We can observe that: (1) Representations without a defence method are vulnerable to privacy attacks. In the absence of any privacy defence method, all privacy attacks on Fine-tune are highly successful. Inversion-Attack even achieves 100% top-1 attack accuracy, indicating that privacy is fully compromised. (2) Resisting Inversion-Attack is key to protecting privacy. Most defence methods can resist KNN-Attack; it achieves only nearly 0 Top1 and Top5 attack accuracy for all tasks. In the case of MLC-Attack, the attack results are a set of disordered words that may contain redundancy, and it is hard to reconstruct the correct sequence from such words. However, Inversion-Attack transforms each representation into an original word one-to-one and achieves the highest attack accuracy, so it is the most likely to compromise privacy. (3) Previous defence methods achieve limited task performance. CAPE and DPNR show good privacy on sentence-level tasks: Inversion-Attack on these methods achieves only about 5% Top1 attack accuracy on SST-2 and AGNEWS. But for token-level tasks, which require richer word information, these methods not only provide weaker privacy but also suffer a significant drop in task performance. We speculate that these methods reduce the privacy information, i.e., the word information, in the representation, which hinders the understanding of the sentence. There is an inherent contradiction between reducing word information in shallow representations and maintaining task performance, especially on token-level tasks.
**(4) With equal** or better privacy, our proposed TextObfuscator shows a task performance improvement over the baselines. The advantage of TextObfuscator is that we do not reduce private information but rather obfuscate clustered representations, which misleads privacy attackers while still preserving the functionality of each word representation, thus achieving better task performance while maintaining privacy.

## 5 Analysis

## 5.1 Ablation Study

Effect of Different Components. To verify the effectiveness of the different components (L*close*, L*away*, and random perturbation) in our method, we conduct a series of ablation experiments and show the results in Table 2. We can observe that: (1) Without the cluster loss (L*close* and L*away*), random perturbation alone is inadequate as a defence against privacy attacks. We speculate that the perturbation applied to unclustered representations can only provide limited obfuscation to the attacker. Most perturbed words still maintain a distance from other words, providing attackers the opportunity to distinguish between them. (2) The cluster loss without random perturbation is completely indefensible against Inversion-Attack. The powerful inversion model can still distinguish different words from clustered representations. **Only the combination** of clustered representations and random perturbations can effectively mislead privacy attackers. (3) Without L*away*, some prototypes tend to collapse to one point, resulting in a decline in task performance but a privacy boost.

Table 2: Ablation results on SST-2 and CoNLL03.

| Dataset | Method | Task ↑ | KNN ↓ | Inversion ↓ |
|---|---|---|---|---|
| SST-2 | TextObfuscator | 91.17 | 0.05 | 6.01 |
| SST-2 | w/o Laway | 90.37 | 0.00 | 6.47 |
| SST-2 | w/o Lcluster | 90.13 | 4.29 | 31.44 |
| SST-2 | w/o Perturb | 93.12 | 0.00 | 100 |
| CoNLL03 | TextObfuscator | 89.11 | 0.26 | 7.02 |
| CoNLL03 | w/o Laway | 88.44 | 0.23 | 7.41 |
| CoNLL03 | w/o Lcluster | 89.06 | 2.21 | 31.60 |
| CoNLL03 | w/o Perturb | 91.42 | 0.05 | 100 |

Effect of Clustering Algorithms. Sentence classification tasks require two additional processes: the clustering and re-division algorithms. We conduct experiments on SST-2 to verify the performance and privacy impact of the cluster number and the TF-IDF-based re-division. From the experimental results shown in Figure 3, we can observe that re-division (KMeans-TFIDF) consistently improves task performance and privacy for all cluster numbers. Besides that, we find that a large cluster number can damage privacy (lower Top1 means better privacy), and a small cluster number can lead to a degradation in task performance. Therefore, a moderate cluster number is deemed to be optimal.

## 5.2 Visualisation

![6_image_0.png](6_image_0.png)

Representation Visualisation. To intuitively show the influence of the cluster loss and random perturbation, we employed t-SNE (Van der Maaten and Hinton, 2008) to visualize the representations of TextObfuscator. Specifically, we selected six classes from the CoNLL2003 dataset and utilized the full test set to generate these representations. From the visualization results in Figure 4, we can observe that, before perturbing (triangular points), functionally similar representations are clustered together while maintaining a certain distance from other clusters. After perturbing (round points), the representations are shifted and mixed with surrounding representations but remain within their respective functional clusters.
Such representations obfuscate word information as they are indistinguishable from each other, but maintain word function as they still perform the same function in the representation space. Take NER as an example: perturbing the word representation of "John" may result in it being similar to another word, such as "Mike". However, a privacy attacker will only be able to establish a false association between the representation of "Mike" and the original word "John", thereby effectively protecting privacy. But for the NER task, both words "John" and "Mike" serve the same role as "PER (Person)" and do not negatively impact the model's ability to classify them. These visualization results provide empirical evidence for the principles and effectiveness of TextObfuscator.

Attack Results Visualisation. To intuitively show the effectiveness of our privacy-preserving method, we visualize the results of privacy attacks for one sample from OntoNotes5. As shown in Table 3, we can observe that the attack results on TextObfuscator are largely unreadable, with only some high-frequency words such as "the" and wrong words such as "Putin" being recovered. Keywords such as people, places, and time that may contain privacy have not been recovered correctly, indicating that our method is effective in protecting privacy.

Table 3: Results of privacy attack. Text in red represents successfully recovered words. Text in bold means **privacy** information. Attacks on TextObfuscator only recover meaningless words; no useful information is leaked.

- Input text: President Bush called his attention to the matter during the Italian leader's visit here last week.
- Fine-tune, KNN: President Bush his attention to matter during Italian leader 's visit here last week.
- Fine-tune, Inversion: President Bush called his attention to the matter during the Italian leader's visit here last week.
- Fine-tune, MLC: { Bush | President | visit | Italian | week | last | ' | s | the | called | here | during | to | his | . | this }
- TextObfuscator, KNN: anybody ls <= our Israeliibble >ancial clinicians, Wednesday Sag Jin relocation teleport.
- TextObfuscator, Inversion: the The Putin the the the the the the the the Israeli the the the the the next year,
- TextObfuscator, MLC: { to | the | . | in, }

![7_image_0.png](7_image_0.png)

## 6 Related Work

The high performance and computational cost of PLMs have accelerated the development of inference services (Soifer et al., 2019; Pais et al., 2022). These services enable clients to perform compute-intensive PLM inference in the cloud by uploading personal data, which brings convenience but also raises concerns about privacy (Zhang et al., 2021; Liu et al., 2020a). In order to mitigate privacy leakage, many have sought to upload representations that have been privatized by privacy-preserving technologies instead of the original text to the cloud (DALE, 2015).
One method is to encrypt the representation, using either homomorphic encryption (Chen et al., 2022b) or a customized encryption protocol (Hao et al., 2022) to enable computations to be performed on the encrypted representation. Encryption-based methods often require high computation time and communication costs (Gilad-Bachrach et al., 2016) and may not be practical for real-world applications. Therefore, we did not compare this method in our experiments. Another method is to use Differential privacy (Xu et al., 2020; Lyu et al., 2020a; Yue et al., 2021a; Hoory et al., 2021) and adversarial training (Coavoux et al., 2018; Plant et al., 2021; Chen et al., 2022a) to learn private representation, which reduces privacy attributes in representation. Applying these works to reduce word information leads to limited performance, as the word information in the shallow layer is important for subsequent inference. Our method proposes to obfuscate word information while maintaining word functionality, thus providing better performance and privacy. Recently, Zhou et al. (2022a) propose TextFusion, which utilizes token fusion to hinder privacy attackers from training a targeted inversion model. We explore a stronger and more realistic setting than TextFusion, where the privacy attacker is the service provider itself. As the service provider is aware of TextFusion's defense strategies, they can design targeted privacy attack methods to disclose more sensitive information. We did not compare our method with TextFusion due to different settings. In addition to NLP, there are also many works for protecting inference privacy in computer vision (Xiang et al., 2019; Osia et al., 2020; Liu et al., 2020b), However, most of these methods cannot be used directly in NLP because they only consider one single image, and we need to protect the privacy of a sequence of words. The popularity of transformer structures (Dosovitskiy et al., 2020) in computer vision may alleviate this situation, but the adaptation of these methods still requires further exploration. ## 7 Conclusion In this paper, we propose TextObfuscator, a novel representation learning method for privacypreserving inference. The main idea of our method is to obfuscate word information and maintain word functionality. We achieve this by applying random perturbations to the clustered representations. The perturbed representations are indistinguishable from the surrounding representations but still around their functional clusters. To learn clustered representation, we find prototypes for each word and encourage the word representation to be close to its prototype. Additionally, we propose different methods to find prototypes for token-level and sentence-level tasks, utilizing semantic and task information. Through experiments on token and sentence classification tasks, we evaluate the effectiveness of TextObfuscator and provide further analysis of the principles of our proposed method. Overall, our results suggest that TextObfuscator is a promising method for preserving inference privacy. ## 8 Limitations We summarize the limitations of our method as follows: (1) TextObfuscator was designed to protect word privacy in the inference phase, and we did not verify its ability to preserve other privacy attributes and training phase privacy. (2) Although we have done empirical experiments and visualizations to demonstrate the effectiveness of our method, a mathematical proof would enhance its privacy guarantees. 
(3) Our method requires more training steps than fine-tuning, resulting in an increased computational cost. ## Acknowledgements The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by National Natural Science Foundation of China (No.62206057,62076069,61976056), Shanghai Rising-Star Program (23QA1400200), Program of Shanghai Academic Research Leader under grant 22XD1401100, and Natural Science Foundation of Shanghai (23ZR1403500). ## References Sam Altman. 2022. Openai api. URL. https:// openai.com/api/. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. Jikun Chen, Feng Qiang, and Na Ruan. 2022a. Adversarial representation sharing: A quantitative and secure collaborative learning framework. arXiv preprint arXiv:2203.14299. Tianyu Chen, Hangbo Bao, Shaohan Huang, Li Dong, Binxing Jiao, Daxin Jiang, Haoyi Zhou, and Jianxin Li. 2022b. The-x: Privacy-preserving transformer inference with homomorphic encryption. *arXiv* preprint arXiv:2206.00216. Maximin Coavoux, Shashi Narayan, and Shay B Cohen. 2018. Privacy-preserving neural representations of text. *arXiv preprint arXiv:1808.09408*. ROBERT DALE. 2015. Nlp meets the cloud. Natural Language Engineering, 21(4):653–659. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. Craig Gentry. 2009. Fully homomorphic encryption using ideal lattices. In *Proceedings of the forty-first* annual ACM symposium on Theory of computing, pages 169–178. Ran Gilad-Bachrach, Nathan Dowlin, Kim Laine, Kristin Lauter, Michael Naehrig, and John Wernsing. 2016. Cryptonets: Applying neural networks to encrypted data with high throughput and accuracy. In International conference on machine learning, pages 201–210. PMLR. Samyak Gupta, Yangsibo Huang, Zexuan Zhong, Tianyu Gao, Kai Li, and Danqi Chen. 2022. Recovering private text in federated learning of language models. *arXiv preprint arXiv:2205.08514*. Meng Hao, Hongwei Li, Hanxiao Chen, Pengzhi Xing, Guowen Xu, and Tianwei Zhang. 2022. Iron: Private inference on transformers. In *Advances in Neural* Information Processing Systems. Johannes Höhmann, Achim Rettinger, and Kai Kugler. 2021. Invbert: Text reconstruction from contextualized embeddings used for derived text formats of literary works. *arXiv preprint arXiv:2109.10104*. Shlomo Hoory, Amir Feder, Avichai Tendler, Sofia Erell, Alon Peled-Cohen, Itay Laish, Hootan Nakhost, Uri Stemmer, Ayelet Benjamini, Avinatan Hassidim, et al. 2021. Learning and evaluating a differentially private pre-trained language model. In *Findings of the* Association for Computational Linguistics: EMNLP 2021, pages 1178–1189. 
Ganesh Jawahar, Benoît Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In *Proceedings of the 57th Annual* Meeting of the Association for Computational Linguistics, pages 3651–3657, Florence, Italy. Association for Computational Linguistics. Bin Ji, Shasha Li, Shaoduo Gan, Jie Yu, Jun Ma, Huijun Liu, and Jing Yang. 2022. Few-shot named entity recognition with entity-level prototypical network enhanced by dispersedly distributed prototypes. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1842–1854, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Ryan Lehmkuhl, Pratyush Mishra, Akshayaram Srinivasan, and Raluca Ada Popa. 2021. Muse: Secure inference resilient to malicious clients. In 30th USENIX Security Symposium (USENIX Security 21), pages 2201–2218. Junnan Li, Pan Zhou, Caiming Xiong, and Steven Hoi. 2021a. Prototypical contrastive learning of unsupervised representations. In International Conference on Learning Representations. Linyang Li, Demin Song, Ruotian Ma, Xipeng Qiu, and Xuanjing Huang. 2021b. Knn-bert: fine-tuning pretrained models with knn classifier. arXiv preprint arXiv:2110.02523. Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards robust and privacy-preserving text representations. *arXiv preprint arXiv:1805.06093*. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In *Text summarization* branches out, pages 74–81. Ximeng Liu, Lehui Xie, Yaopeng Wang, Jian Zou, Jinbo Xiong, Zuobin Ying, and Athanasios V Vasilakos. 2020a. Privacy and security issues in deep learning: A survey. *IEEE Access*, 9:4566–4593. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. Zhijian Liu, Zhanghao Wu, Chuang Gan, Ligeng Zhu, and Song Han. 2020b. Datamix: Efficient privacypreserving edge-cloud inference. In European Conference on Computer Vision, pages 578–595. Springer. Lingjuan Lyu, Xuanli He, and Yitong Li. 2020a. Differentially private representation for nlp: Formal guarantee and an empirical study on privacy and fairness. *arXiv preprint arXiv:2010.01285*. Lingjuan Lyu, Xuanli He, and Yitong Li. 2020b. Differentially private representation for NLP: Formal guarantee and an empirical study on privacy and fairness. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2355–2365, Online. Association for Computational Linguistics. Ruotian Ma, Xin Zhou, Tao Gui, Yiding Tan, Linyang Li, Qi Zhang, and Xuan-Jing Huang. 2022. Templatefree prompt tuning for few-shot ner. In *Proceedings* of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5721–5732. Seyed Ali Osia, Ali Shahin Shamsabadi, Sina Sajadmanesh, Ali Taheri, Kleomenis Katevas, Hamid R Rabiee, Nicholas D Lane, and Hamed Haddadi. 2020. A hybrid deep learning architecture for privacypreserving mobile analytics. *IEEE Internet of Things* Journal, 7(5):4505–4518. Sebastião Pais, João Cordeiro, and M Luqman Jamil. 2022. Nlp-based platform as a service: a brief review. Journal of Big Data, 9(1):1–26. Xudong Pan, Mi Zhang, Shouling Ji, and Min Yang. 2020. Privacy risks of general-purpose language models. In 2020 IEEE Symposium on Security and Privacy (SP), pages 1314–1331. IEEE. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. 
GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Richard Plant, Dimitra Gkatzia, and Valerio Giuffrida. 2021. CAPE: Context-aware private embeddings for private language learning. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 7970–7978, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10):1872–1897. Chen Qu, Weize Kong, Liu Yang, Mingyang Zhang, Michael Bendersky, and Marc Najork. 2021. Natural language understanding with privacy-preserving bert. In *Proceedings of the 30th ACM International Conference on Information & Knowledge Management*, pages 1488–1497. Gerard Salton and Christopher Buckley. 1988. Term-weighting approaches in automatic text retrieval. Information processing & management, 24(5):513–523. Zekun Xu, Abhinav Aggarwal, Oluwaseyi Feyisetan, and Nathanael Teissier. 2020. A differentially private text perturbation method using a regularized mahalanobis metric. *arXiv preprint arXiv:2010.11947*. Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. Advances in neural information processing systems, 30. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Congzheng Song and Ananth Raghunathan. 2020. Information leakage in embedding models. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, pages 377–390. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–147. Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(11). Kiri Wagstaff, Claire Cardie, Seth Rogers, Stefan Schrödl, et al. 2001. Constrained k-means clustering with background knowledge. In *Icml*, volume 1, pages 577–584. Yandong Wen, Kaipeng Zhang, Zhifeng Li, and Yu Qiao. 2016. A discriminative feature learning approach for deep face recognition. In European conference on computer vision, pages 499–515. Springer. Liyao Xiang, Haotian Ma, Hao Zhang, Yifan Zhang, Jie Ren, and Quanshi Zhang. 2019. Interpretable complex-valued neural networks for privacy protection. *arXiv preprint arXiv:1901.09546*. Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020. Named entity recognition as dependency parsing. arXiv preprint arXiv:2005.07150. Xiang Yue, Minxin Du, Tianhao Wang, Yaliang Li, Huan Sun, and Sherman S. M. Chow. 2021a. Differential privacy for text analytics via natural text sanitization. In *Findings, ACL-IJCNLP 2021*. Xiang Yue, Minxin Du, Tianhao Wang, Yaliang Li, Huan Sun, and Sherman S. M. Chow. 2021b. Differential privacy for text analytics via natural text sanitization.
In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 3853–3866, Online. Association for Computational Linguistics. Jonathan Soifer, Jason Li, Mingqin Li, Jeffrey Zhu, Yingnan Li, Yuxiong He, Elton Zheng, Adi Oltean, Maya Mosyak, Chris Barnes, et al. 2019. Deep learning inference service at microsoft. In 2019 USENIX Conference on Operational Machine Learning (OpML 19), pages 15–17. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. *Advances in neural information processing systems*, 28. Xiaoyu Zhang, Chao Chen, Yi Xie, Xiaofeng Chen, Jun Zhang, and Yang Xiang. 2021. Privacy inference attacks and defenses in cloud-based deep neural network: A survey. Xin Zhou, Jinzhu Lu, Tao Gui, Ruotian Ma, Zichu Fei, Yuran Wang, Yong Ding, Yibo Cheung, Qi Zhang, and Xuanjing Huang. 2022a. TextFusion: Privacy-preserving pre-trained model inference via token fusion. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 8360–8371, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Xin Zhou, Ruotian Ma, Yicheng Zou, Xuanting Chen, Tao Gui, Qi Zhang, Xuan-Jing Huang, Rui Xie, and Wei Wu. 2022b. Making parameter-efficient tuning more efficient: A unified framework for classification tasks. In Proceedings of the 29th International Conference on Computational Linguistics, pages 7053–7064. Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. Ontonotes release 5.0 ldc2013t19. *Linguistic Data Consortium, Philadelphia, PA*, 23.

## A Appendix

## A.1 Statistics Of Dataset

We use four English datasets, including SST-2 (Socher et al., 2013) for sentiment classification (Li et al., 2021b; Zhou et al., 2022b), AGNEWS (Zhang et al., 2015) for topic classification, and CoNLL2003 (Tjong Kim Sang and De Meulder, 2003) and OntoNotes5 (Weischedel et al., 2013) for named entity recognition (Yu et al., 2020; Ma et al., 2022). We follow the official dataset split for AGNEWS, CoNLL2003 and OntoNotes5. The test set for SST-2 is not publicly available, so the reported results for SST-2 are evaluated on the official development set. The statistics of the datasets used in our experiments are shown in Table 4.

Table 4: Statistics of the datasets.

| Dataset | Domain | # Train | # Test | # Labels |
|---|---|---|---|---|
| SST-2 | Movie | 67349 | 872 | 2 |
| AGNEWS | News | 120000 | 7600 | 4 |
| CoNLL2003 | News | 14041 | 3453 | 9 |
| OntoNotes5 | General | 59924 | 8262 | 37 |

## A.2 Privacy Metrics

In our experiments, we use three metrics to measure privacy. Next, we describe these three metrics in a formulaic way.

TopK. Top-K accuracy is defined as the proportion of times that the real word is among the top K predictions made by the attack model, where K is a pre-defined parameter. Mathematically, it can be represented as:

$$TopK=\frac{1}{N}\sum_{i=1}^{N}\left[y_{i}\in top_{k}(p_{i})\right]\tag{7}$$

where N is the total number of representations, yi is the real word of representation i, pi is the predicted probability distribution of the attack model, and topk(pi) is the set of the top k words with the highest probability for representation i.

RougeL (Lin, 2004). RougeL is a widely used metric to evaluate the quality of text summarization, so we do not describe the details of RougeL here; we only note that we take the top-1 word of the attack results to compose the sentences for calculating RougeL.
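A small sketch of the Top-K metric in Eq. (7) is given below, assuming the attack model outputs a probability distribution over the vocabulary for each representation; the names are illustrative, not the paper's evaluation code.

```python
import torch

def topk_accuracy(probs, gold_ids, k=5):
    # probs:    (N, V) attacker's predicted word distribution per representation
    # gold_ids: (N,)   id of the true word behind each representation
    topk = probs.topk(k, dim=-1).indices              # (N, k) candidate word ids
    hits = (topk == gold_ids.unsqueeze(-1)).any(-1)   # true word in the top-k?
    return hits.float().mean().item()                 # Eq. (7)
```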
Set. The attack results of MLC-Attack are unordered sets of words, so a one-to-one metric like TopK cannot be used for this attack; we therefore use Set to measure the attack success rate of MLC-Attack. Let A be the set of distinct words in a sentence and B the set of words predicted by MLC-Attack; the Set metric can then be represented as:

$$Set=\frac{|A\cap B|}{|A|}\tag{8}$$

This metric measures how many words in the original sentence are contained in the set of predicted results of the MLC-Attack.

## Algorithm 1 Re-Division Algorithm

Require: Representation matrix Xˆ ; Word Assignment M; Prototype Initialization P; Task-Related Words T.
1: for tc ∈ T do
2:   // Set of prototypes assigned to task-related words of other classes
3:   conflict ← {M(x) | x ∈ T, x ∉ tc}
4:   for x ∈ tc do
5:     // Reassign x to another prototype if a conflict occurs
6:     if M(x) ∈ conflict then
7:       xˆ ← get the representation of x from Xˆ
8:       M(x) ← argmin_{j ∉ conflict} d(xˆ, pj)
9:     end if
10:   end for
11: end for
12: // Update P based on the new M
13: for pi ∈ P do
14:   Pi ← {x ∈ Xˆ | M(x) = pi}
15:   pi ← (1/|Pi|) Σ_{x∈Pi} x
16: end for
17: return M, P

## A.3 Re-Division Algorithm

As mentioned in Section 3.2.2, after we find the category-related words T = {tc} nc c=1 for each category using TF-IDF, we re-divide the word assignment M and the prototype initialization P. The algorithm is inspired by constrained K-means clustering (Wagstaff et al., 2001), but we only apply it once after clustering. The process of re-division is shown in Algorithm 1.

## A.4 Implementation Details

In this subsection, we describe the implementation details and the replication process of both the attack and defence methods. All methods are based on Roberta*base* with 125 million parameters. Datasets and models are all loaded from Hugging Face2. Our experiments are conducted on an NVIDIA GeForce RTX 3090.

Details for Defence Methods. We implement DPNR3, CAPE4 and SanText+5 ourselves, referring to their publicly available code. We also conduct a grid search on the hyperparameters to reproduce the baselines in our setting. For each defence method, we train for 50 epochs on SST-2, CoNLL2003 and OntoNotes5 and for 30 epochs on AGNEWS to guarantee convergence; the AdamW optimizer and a linear learning rate scheduler are used during training. The default learning rate is 5e-5; we do not adjust the learning rate unless we encounter non-convergence. For **DPNR**, we search the noise scale ϵ over [0.05, 0.1, 0.5, 1, 5] and the word dropout rate over [0, 0.1, 0.3]. For **CAPE**, we search the adversarial training weight λ over [0.01, 0.05, 0.1, 0.5, 1, 5] and the noise scale ϵ over [0.05, 0.1, 0.5, 1, 5]. For **SanText+**, we follow the authors' setting and use GloVe (Pennington et al., 2014) to guide the word replacement; the probability of non-sensitive words being sanitized p defaults to 0.3, and the sensitive word percentage w defaults to 0.9. We search the privacy parameter ϵ over [1, 2, 3]. For **TextObfuscator**, we use K-Means to cluster representations for sentence-level tasks, and the number of clusters defaults to 100. We search the close loss weight γ1 over [0.1, 0.5, 1] and the away loss weight γ2 over [0.1, 0.3, 0.5]. Although the noise scale can also be adjusted, we found that the most commonly used parameter (ϵ=1) is sufficient, so we kept the noise scale constant in all experiments. We report the results with the best performance and privacy (preferring privacy). The best hyperparameters we tuned are shown in Table 5.
| Dataset   | Method         | lr   | bsz | λadv | ϵn  | ϵw  | ϵp | γ1  | γ2  |
|-----------|----------------|------|-----|------|-----|-----|----|-----|-----|
| CoNLL2003 | Finetune       | 2e-5 | 32  | -    | -   | -   | -  | -   | -   |
| CoNLL2003 | DPNR           | 1e-5 | 64  | -    | 5   | 0.1 | -  | -   | -   |
| CoNLL2003 | CAPE           | 1e-5 | 32  | 0.1  | 5   | -   | -  | -   | -   |
| CoNLL2003 | Santext+       | 1e-5 | 64  | -    | -   | -   | 3  | -   | -   |
| CoNLL2003 | TextObfuscator | 5e-5 | 128 | -    | 1   | -   | -  | 0.5 | 0.3 |
| OntoNotes | Finetune       | 2e-5 | 32  | -    | -   | -   | -  | -   | -   |
| OntoNotes | DPNR           | 1e-5 | 64  | -    | 5   | 0.1 | -  | -   | -   |
| OntoNotes | CAPE           | 1e-5 | 32  | 0.05 | 5   | -   | -  | -   | -   |
| OntoNotes | Santext+       | 1e-5 | 64  | -    | -   | -   | 3  | -   | -   |
| OntoNotes | TextObfuscator | 5e-5 | 128 | -    | -   | -   | -  | 0.5 | 0.3 |
| SST-2     | Finetune       | 2e-5 | 32  | -    | -   | -   | -  | -   | -   |
| SST-2     | DPNR           | 1e-5 | 64  | -    | 0.5 | 0.1 | -  | -   | -   |
| SST-2     | CAPE           | 1e-5 | 32  | 0.1  | 0.5 | -   | -  | -   | -   |
| SST-2     | Santext+       | 1e-5 | 64  | -    | -   | -   | 3  | -   | -   |
| SST-2     | TextObfuscator | 1e-5 | 256 | -    | -   | -   | -  | 0.5 | 0.1 |
| AGNEWS    | Finetune       | 2e-5 | 32  | -    | -   | -   | -  | -   | -   |
| AGNEWS    | DPNR           | 1e-5 | 64  | -    | 1   | 0.1 | -  | -   | -   |
| AGNEWS    | CAPE           | 1e-5 | 32  | 0.1  | 0.5 | -   | -  | -   | -   |
| AGNEWS    | Santext+       | 1e-5 | 64  | -    | -   | -   | 1  | -   | -   |
| AGNEWS    | TextObfuscator | 5e-5 | 168 | -    | -   | -   | -  | 0.5 | 0.1 |

Table 5: The best hyperparameters for each defence method on each dataset.

Details for Attack Methods. In our implementation of the **KNN-Attack**, we use the embedding matrix of RoBERTa*base* to calculate the Euclidean distance to the client representations. When attacking the CAPE and DPNR methods, which apply max-min normalization to the representations, we also apply the same normalization to the embedding matrix before calculating distances. For the **Inversion-Attack** and MLC-Attack, the training data is generated by running the client model under attack on the training set of the target task. We use the RoBERTa*base* model as the backbone of the inversion model, search the learning rate over [1e-4, 1e-5, 1e-6], and train for 10 epochs to guarantee convergence. We take the words with a probability higher than 0.5 as the prediction result of MLC.

## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? section 8 ✓ A2. Did you discuss any potential risks of your work? section 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? abstract and section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 And Section 4 ✓ B1. Did you cite the creators of artifacts you used? section 4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix A.1 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 4 and section 5 ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The datasets we use are publicly available. And we need to perform NER tasks that involve identifying the names of people, which are usually not anonymized. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? section 4 and Appendix A.1 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results.
For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix A.1 ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix A.4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Appendix A.4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix A.4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
marchisio-etal-2023-mini
Mini-Model Adaptation: Efficiently Extending Pretrained Models to New Languages via Aligned Shallow Training
https://aclanthology.org/2023.findings-acl.338
Prior work shows that it is possible to expand pretrained Masked Language Models (MLMs) to new languages by learning a new set of embeddings, while keeping the transformer body frozen. Despite learning a small subset of parameters, this approach is not compute-efficient, as training the new embeddings requires a full forward and backward pass over the entire model. We propose mini-model adaptation, a compute-efficient alternative that builds a shallow mini-model from a fraction of a large model's parameters. New language-specific embeddings can then be efficiently trained over the mini-model and plugged into the aligned large model for rapid cross-lingual transfer. We explore two approaches to learn mini-models: MINIJOINT, which jointly pretrains the primary model and the mini-model using a single transformer with a secondary MLM head at a middle layer; and MINIPOST, where we start from a regular pretrained model, build a mini-model by extracting and freezing a few layers, and learn a small number of parameters on top. Experiments on XNLI, MLQA and PAWS-X show that mini-model adaptation matches the performance of the standard approach using up to 2.3x less compute on average.
# Mini-Model Adaptation: Efficiently Extending Pretrained Models To New Languages Via Aligned Shallow Training Kelly Marchisio1,2 ∗ Patrick Lewis2 † **Yihong Chen**3,4 **Mikel Artetxe**5 † 1Johns Hopkins University 2Cohere AI 3Meta AI 4University College London 5Reka AI [email protected], [email protected] ## Abstract Prior work shows that it is possible to expand pretrained Masked Language Models (MLMs) to new languages by learning a new set of embeddings, while keeping the transformer body frozen. Despite learning a small subset of parameters, this approach is not computeefficient, as training the new embeddings requires a full forward and backward pass over the entire model. We propose *mini-model* adaptation, a compute-efficient alternative that builds a shallow *mini-model* from a fraction of a large model's parameters. New languagespecific embeddings can then be efficiently trained over the mini-model and plugged into the aligned large model for rapid cross-lingual transfer. We explore two approaches to learn mini-models: MINIJOINT, which jointly pretrains the primary model and the mini-model using a single transformer with a secondary MLM head at a middle layer; and MINIPOST, where we start from a regular pretrained model, build a mini-model by extracting and freezing a few layers, and learn a small number of parameters on top. Experiments on XNLI, MLQA and PAWS-X show that mini-model adaptation matches the performance of the standard approach using 2.3x less compute on average. ## 1 Introduction Recent work on multilingual NLP has focused on pretraining (masked) language models on unlabeled corpora in multiple languages (Pires et al., 2019; Conneau et al., 2020; Xue et al., 2021). The resulting models can then be finetuned using labeled downstream data in a single language (typically English), and zero-shot transferred to the rest of the languages. While effective, existing models rarely cover more than a few dozen languages, and pretraining new models from scratch to support additional languages can be prohibitively expensive. ∗Work done during an internship at Meta AI †Work done at Meta AI ![0_image_0.png](0_image_0.png) Motivated by this, a recent line of work has explored pretraining an initial model in a few languages, and expanding it to new languages posthoc in a continual learning fashion (M'hamdi et al., 2022). More concretely, Artetxe et al. (2020) showed that it is possible to expand an English masked language model (MLM) to new languages by freezing the transformer body and learning a new embedding layer using the original MLM objective. Recent work has reported improved results by using a better initialization scheme (Pfeiffer et al., 2021), or learning additional languagespecific parameters through adapters (Pfeiffer et al., 2022). All these approaches are parameter-efficient, as they only learn a small number of parameters for each language, while the rest remain frozen. However, learning such parameters is not computeefficient, as it requires a full forward and backward pass over the entire model, including the frozen transformer body. We introduce *mini-model adaptation*, a new approach to extend MLMs to new languages that is both parameter- and compute-efficient. *Minimodels* are shallow models that are aligned with a ![1_image_0.png](1_image_0.png) larger parent model. Thanks to this, one can efficiently train a new embedding layer for a new language over the mini-model, and plug it directly into the parent for strong cross-lingual performance. 
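In both the standard recipe and ours, the adaptation step itself boils down to relearning the tied embedding layer with the MLM objective while the transformer body stays frozen. The following is a minimal sketch of that step; it assumes the Hugging Face RobertaForMaskedLM interface purely for illustration (our experiments use *fairseq*, §3), and the exact subset of head parameters kept trainable is a simplification.

```python
import torch
from transformers import RobertaForMaskedLM

# Illustrative sketch: freeze the transformer body and train only the
# (tied) input embeddings / output projection with the MLM objective.
model = RobertaForMaskedLM.from_pretrained("roberta-base")

# swap in a target-language vocabulary (50k, matching the SentencePiece setting in §3)
model.resize_token_embeddings(50_000)

for name, param in model.named_parameters():
    # embedding-related and LM-head parameters stay trainable; all
    # transformer layers (self-attention + feed-forward) are frozen
    param.requires_grad = "embeddings" in name or "lm_head" in name

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=7e-4)
# ... run standard MLM training on unlabeled target-language text ...
```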
As shown in Figure 2, we explore two approaches to learn mini-models, depending on whether we start from an existing primary model and learn a mini-model posthoc (MINIPOST), or we jointly learn the primary model and the mini-model from scratch (MINIJOINT). In MINIPOST, we extract the bottom layers from the existing MLM, and learn a small number of parameters on top to make it a usable small MLM itself. In the MINIJOINT variant, we pretrain an MLM from scratch including a secondary head at a middle layer. Both heads are optimized jointly, creating a complete, wellaligned MLM contained within a larger MLM. We evaluate our approach on natural language inference (XNLI), question answering (MLQA) and paraphrase identification (PAWS-X). As shown in Figure 1, mini-model adaptation can match the performance of the standard method from Artetxe et al. (2020) using 1.6x and 2.3x less compute for MINIPOST and MINIJOINT, respectively (averaged over tasks), and retains >98% of performance when trained to completion. All in all, our work shows that it is possible to adapt language models to new tasks (in this case, new languages) using smaller aligned models for training. While we focus on the problem of crosslingual lifelong learning to validate this idea, we believe that this new paradigm opens exciting opportunities to make finetuning large language models more affordable. ## 2 Proposed Method 2.1 Standard Adaptation Artetxe et al. (2020) develop a four-step pipeline for cross-lingual transfer from a monolingual model, visualized in Figure 2 (top). First, one trains a monolingual MLM in the source language (Lsrc, usually English). Second, the transformer body is frozen, embeddings are re-initialized,1and the model is trained with MLM in the target language (Ltrg). The trainable embeddings are tied with the output projection layer in the MLM head. Third, the Lsrc embeddings are swapped back into the model and frozen, and the transformer body is finetuned on the downstream data in Lsrc. Finally, the Ltrg embeddings are swapped back into the finetuned model for zero-shot transfer into Ltrg. We build two baselines based on this framework: a standard 12-layer (BL_BASE), and a smaller 4layer version (BL_SMALL). ## 2.2 Mini-Model Adaptation Our proposed approach follows a similar four-step training paradigm. However, we learn two aligned models in Step 1: the primary model and a shallow mini-model. In Step 2, the Ltrg embeddings are learned over the mini-model, saving compute with respect to standard adaptation. Steps 3 and 4 are run as usual over the primary model, resulting in a full-size Ltrg model. For Step 1, we explore the following two alternatives depending on whether we start from an existing Lsrc model, or we are training one from scratch: MINIJ**OINT**. In this variant, we pretrain a dualhead 12-layer Lsrc transformer from scratch, attaching a secondary head to an intermediary Nth layer (Figure 2, center). The model is trained to minimize the average MLM loss over the two heads. As such, the whole model receives gradient updates from the primary head, and the bottom layers also get updates from the secondary head. Having done that, we extract the bottom N layers and the secondary head to create the mini-model for Step 2. Unless otherwise indicated, we use N = 4. MINIPOST. Here, we start with a regular 12layer MLM in Lsrc (same as BL_BASE), and build an aligned mini-model in Step 1b (Figure 2, bottom). 
To that end, we first copy the bottom N layers into a new, shallower model, along with the embeddings and the MLM head. However, this does not work out of the box, as we must bridge the gap between the output of the N bottom layers and the input of the MLM head, which goes through 12 − N additional layers in the original model. To that end, we add 2 randomly-initialized layers between the N bottom layers and the MLM head, and train them with the MLM objective in Lsrc while keeping the rest of the parameters frozen. Because the new layers are unfrozen, they update to "complete" the MLM—bridging representations from the bottom layers' output to the MLM head's input, and resulting in a mini-model with N + 2 layers that is fully functional and aligned with the primary model.

1Following Pfeiffer et al. (2021), we initialize the Ltrg embeddings with overlapping tokens from Lsrc for all methods throughout. Non-overlapping tokens are randomly initialized using the normal distribution with µ = 0.0, σ = 0.02.

## 3 Experimental Settings

Languages and Data. Following common practice, we use English as the source language (Lsrc), and experiment with 14 other languages as the target (Ltrg). We use CC-100 (Conneau et al., 2020) as our training corpus, which is a filtered version of CommonCrawl. We report the full list of languages along with their corpus size and linguistic details in Table 1. Each language is preprocessed individually using SentencePiece (Kudo and Richardson, 2018) with a vocabulary size of 50,000.

|    | GB    | Language Family            | Word Order | Syn. Dist. | Phylo. Dist. |
|----|-------|----------------------------|------------|------------|--------------|
| ar | 28.0  | Afro-Asiatic: Semitic      | SVO/VSO    | 0.57       | 1.00         |
| bg | 58.0  | Indo-European: Slavic      | SVO        | 0.48       | 0.86         |
| de | 67.0  | Indo-European: Germanic    | SVO/SOV    | 0.42       | 0.43         |
| el | 47.0  | Indo-European: Greek       | SVO/VSO    | 0.52       | 0.83         |
| en | 301.0 | Indo-European: Germanic    | SVO        | 0.00       | 0.00         |
| es | 54.0  | Indo-European: Romance     | SVO        | 0.40       | 0.90         |
| fr | 57.0  | Indo-European: Romance     | SVO        | 0.46       | 0.90         |
| hi | 21.0  | Indo-European: Indic       | SOV        | 0.59       | 0.90         |
| ru | 279.0 | Indo-European: Slavic      | SVO        | 0.49       | 0.90         |
| sw | 1.7   | Niger-Congo: Bantu         | SVO        | 0.57       | 1.00         |
| th | 72.0  | Tai-Kadai: Kam-Tai         | SVO        | 0.56       | 1.00         |
| tr | 21.0  | Altaic: Turkic             | SOV        | 0.70       | 1.00         |
| ur | 5.7   | Indo-European: Indic       | SOV        | 0.67       | 0.90         |
| vi | 138.0 | Austro-Asiatic: Viet-Muong | SVO        | 0.57       | 1.00         |
| zh | 47.0  | Sino-Tibetan: Chinese      | SVO        | 0.57       | 1.00         |

Table 1: Languages in our experiments, with CC-100 corpus size (GB), language family, word order, and syntactic/phylogenetic distance from English.

Models. We use the RoBERTaBASE (Liu et al., 2019) architecture throughout from *fairseq* (Ott et al., 2019). Embeddings are tied. As said in §2, we compare 4 systems: 2 variants of Artetxe et al. (2020) (BL_BASE with 12 layers and BL_SMALL with 4 layers), and 2 variants of our proposed approach where we set N = 4 (MINIJOINT, which jointly trains a 12-layer primary model and a 4-layer mini-model from scratch, and MINIPOST, which starts from a regular 12-layer model and constructs a 6-layer mini-model post-hoc). BL_BASE is a performance upper-bound, as it is the original 12-layer model that is used for adaptation. BL_SMALL is a lower-bound, demonstrating performance of the standard approach using an adaptation model of similar size as ours. Models are trained for 125,000 steps with a global batch size of 2048, sequence length of 512, and learning rate of 7e-4 with 10,000 warmup updates and linear decay, both for the original pretraining (Step 1), and cross-lingual extension into each language (Step 2). As such, models see 131.1 billion training tokens per language. Step 1b in MINIPOST uses the same training hyperparameters.

Evaluation.
We evaluate on 3 tasks: natural language inference in XNLI (Conneau et al., 2018), question answering in MLQA (Lewis et al., 2020), and adversarial paraphrase identification in PAWSX (Yang et al., 2019). We also report XQuAD (Artetxe et al., 2020) results in §A.2. In all cases, the model is finetuned using the corresponding training data in English (Step 3), and zero-shot transferred into the rest of languages (Step 4). We perform 5 independent finetuning runs with different random seeds, and report average results. During finetuning, we use a peak learning rate of 1e-5 for XNLI and PAWS-X, and 3e-5 for MLQA and XQuAD. Each uses a warmup ratio of 0.06 and linear decay, and is finetuned for 3 epochs. Estimating FLOPs. We compare training efficiency of different approaches using floating point operations (FLOPs). To calculate FLOPs, we estimate analytically using an adaptation of the formula from Narayanan et al. (2021), detailed in §A.1. When doing so, we exclusively consider the cost of expanding the model to a new language (Step 2), which is the most significant in the crosslingual lifelong learning setup that our work addresses.2 We also report NVIDIA V100 GPU training days as a more interpretable number, which we estimate analytically using an estimated throughput of 30 TFLOP/s, or 1 V100 day = 2.592 EFLOPs. In some of our experiments, we are interested in estimating the training FLOPs required to achieve 2While Step 1 can also be expensive, it is amortized over time: the initial model is trained only once, but extended to new languages many times. The cost of Step 1 is similar for BL_BASE and MINIJOINT, as the overhead of the second head is small (∼30.4 vs. ∼32.2 V100 days for a 12-layer model). MINIPOST incurs extra cost from Step 1b, but this is relatively small compared to the cost of pretraining (see §A.1). certain performance. However, this cannot be computed precisely, as we only have a limited number of intermediate checkpoints.3 For that reason, we identify the checkpoints immediately before and after which the model first scores the desired performance, and use linear interpolation to estimate the step at which the exact score would have been hit. For instance, if MINIPOST obtains an accuracy of 48% at the 5,000 update checkpoint (∼1.17 EFLOPs) and 52% at the 10,000 update checkpoint (∼2.34 EFLOPs), we estimate that the accuracy of 50% was achieved at 7,500 steps (∼1.76 EFLOPs). ## 4 Main Results 4.1 Performance At Training Completion Table 2 reports performance at training completion (i.e., after 125,000 updates in Step 2). As expected, BL_BASE obtains the best results, but its training cost is also the highest. In contrast, MINIJOINT requires nearly one third of the compute, while obtaining similar results. More concretely, it is marginally better on PAWS-X, while moderately (1-2 points) worse on MLQA and XNLI. Averaged over tasks, MINIJOINT retains 98.7% of BL_BASE's performance4at 39% of its cost. This validates the core hypothesis of our work—learning target language embeddings over the mini-model is almost as effective as learning them over the original model, while being significantly cheaper. MINIPOST follows a similar trend, retaining 99.3% of BL_BASE's performance at nearly half of its cost. This shows that mini-models do not need to be trained from scratch, but one can take any existing English model and build it's corresponding mini-model post-hoc. BL_SMALL performs substantially worse than our proposed approach. 
BL_SMALL has the same training cost as MINIJOINT, but is 4.0 points worse on XNLI, 4.8 points worse on MLQA, and 9.0 points worse on PAWS-X. This shows that our idea of having two aligned models—a shallow one for efficient adaptation and a deep one for best performance at test time—is critical, as using a shallow model both for adaptation and inference performs considerably worse. 3We checkpoint every 5000 updates for BL_SMALL, MINIJOINT, MINIPOST. As each step of BL_BASE is more expensive, we checkpoint every 1000 updates for more finegrained estimates. We save extra checkpoints for MINIJOINT and MINIPOST at steps 1000, 2000, 3000 and 4000 for De, Fr, Es, and Zh, as these adapt rapidly for certain tasks. 4( 70.3 72.0 + 56.0 56.9 + 83.5 83.4 )/3 ≈ 0.987 5477 | Train cost | XNLI (acc) | | | | | | | | | | | | | | | |--------------|--------------|------|---------------------------------------------------------------------------------------------|----------------------------------------------------------------------------|----|----|----|----|----|----|----|----|----|----|-----| | EFLOPs days | ar | bg | de | el | es | fr | hi | ru | sw | th | tr | ur | vi | zh | avg | | Standard | BL_BASE | 54.1 | 20.9 | 70.2 78.4 76.2 76.1 79.2 78.9 65.6 72.5 68.2 70.1 67.1 60.9 72.1 72.4 72.0 | | | | | | | | | | | | | BL_SMALL | 21.1 | 8.1 | 65.4 71.1 68.0 69.1 71.3 71.0 61.8 66.2 63.8 64.9 64.4 57.9 65.7 67.6 66.3 | | | | | | | | | | | | | | Proposed | MINIPOST | 29.3 | 11.3 | 70.1 77.8 75.7 75.5 78.5 78.2 63.9 72.2 67.5 70.2 64.8 59.2 70.8 72.0 71.2 | | | | | | | | | | | | | MINIJOINT | 21.1 | 8.1 | 69.3 77.6 75.0 74.7 78.4 77.7 62.4 71.7 66.9 68.8 63.7 58.1 69.2 70.8 70.3 (a) XNLI results | | | | | | | | | | | | | | Train cost | MLQA (F1) | PAWS-X (acc) | | | | | | | | | | | |-----------------------------|-------------|----------------|------------------------------------|------------------------------------|--------------------------|----|-----|----|----|----|----|-----| | EFLOPs days | ar | de | es | hi | vi | zh | avg | de | fr | es | zh | avg | | Standard | BL_BASE | 54.1 | 20.9 | 51.2 61.0 66.6 48.5 57.3 56.8 56.9 | 84.7 85.8 86.0 77.3 83.4 | | | | | | | | | BL_SMALL | 21.1 | 8.1 | 46.0 54.6 59.8 43.1 53.2 50.8 51.2 | 74.2 76.4 76.6 70.8 74.5 | | | | | | | | | | Proposed | MINIPOST | 29.3 | 11.3 | 50.8 60.4 66.7 48.9 56.6 56.7 56.7 | 83.8 86.2 86.3 76.0 83.1 | | | | | | | | | MINIJOINT | 21.1 | 8.1 | 50.8 60.1 66.0 46.4 57.2 55.6 56.0 | 84.4 85.1 86.7 77.9 83.5 | | | | | | | | | | (b) MLQA and PAWS-X results | | | | | | | | | | | | | ## 4.2 Gpu Days To Near-Maximal Performance While we previously compared approaches at training completion, one can also apply early stopping, sacrificing some performance to gain on efficiency. This also allows to compare different approaches head-to-head according to the compute they require to achieve a given score—assuming we stop training as soon as the desired performance is hit. To that end, we fix our target score as 95% of the performance obtained by BL_BASE at the end of training, which we call *near-maximal performance*. 5 Results are in Table 3, and average speedup of our approach over standard adaptation is in Figure 1. 6 Overall, MINIJOINT does best: when perlanguage speedup is averaged across languages, we see that it requires about half to one-third the compute of BL_BASE to achieve the same performance in all tasks. MINIPOST has more modest speedups, but is still substantially faster than standard adaptation to hit the desired performance. 
This shows that, if possible, it is preferable to pretrain mini-models jointly with the primary model, but our approach can also bring substantial speedups when starting with an existing pretrained model. It is also remarkable that there is a considerable variance across tasks. In particular, all approaches require substantially less compute to achieve the target performance in PAWS-X when compared to XNLI and MLQA. The relative speedup of mini-model adaptation is also considerably higher on PAWS-X. We also observe a high variance across languages, which we analyze in more detail in §5.4.

| XNLI      | ar  | bg  | de  | el  | es  | fr  | hi  | ru  | sw  | th  | tr  | ur  | vi  | zh  | avg |
|-----------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| BL_BASE   | 2.5 | 1.2 | 1.1 | 1.5 | 0.8 | 0.8 | 3.2 | 1.8 | 1.4 | 1.8 | 2.3 | 8.3 | 1.0 | 1.7 | 2.1 |
| MINIPOST  | 1.7 | 0.9 | 0.8 | 0.9 | 0.4 | 0.4 | 2.5 | 1.3 | 1.1 | 1.3 | 3.0 | 6.5 | 0.8 | 1.1 | 1.6 |
| MINIJOINT | 1.0 | 0.5 | 0.5 | 0.6 | 0.3 | 0.3 | 5.9 | 0.6 | 0.6 | 0.7 | 5.3 | 5.4 | 0.6 | 0.8 | 1.6 |

| MLQA      | ar  | de  | es  | hi  | vi  | zh  | avg |
|-----------|-----|-----|-----|-----|-----|-----|-----|
| BL_BASE   | 2.6 | 1.1 | 0.8 | 3.4 | 1.2 | 1.8 | 1.8 |
| MINIPOST  | 1.7 | 0.8 | 0.5 | 1.7 | 0.9 | 1.2 | 1.1 |
| MINIJOINT | 1.0 | 0.6 | 0.4 | 5.3 | 0.5 | 0.9 | 1.5 |

| PAWS-X    | de  | fr  | es  | zh  | avg |
|-----------|-----|-----|-----|-----|-----|
| BL_BASE   | 0.7 | 0.6 | 0.6 | 0.7 | 0.6 |
| MINIPOST  | 0.3 | 0.3 | 0.3 | 0.4 | 0.3 |
| MINIJOINT | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 |

Table 3: **Estimated V100 training days to achieve near-maximal performance.** Near-maximal performance is defined as 95% of the score of BL_BASE at training completion. ↓ is better. BL_SMALL never achieves near-maximal performance, except on XNLI for Turkish (1.7 days) and Urdu (6.4 days).

## 5 Analysis

## 5.1 Training Curves

We visualize the training curves of the different approaches in Figure 3. Consistent with our previous findings, we observe that MINIJOINT is usually the leftmost curve—signifying the most rapid adaptation—at the cost of a slightly lower final score. In contrast, BL_BASE is by far the slowest system in approaching its peak performance, while BL_SMALL gets stuck at a poor performance compared to other approaches. Finally, we find that all methods adapt rapidly in PAWS-X, which suggests that this task might be easier than the others.

![5_image_0.png](5_image_0.png)

## 5.2 Mini-Model Depth

We recall that the mini-model in MINIJOINT has 4 layers, whereas the one in MINIPOST has 6 (the bottom 4 taken from the primary model + 2 additional ones trained on top). We made this decision early in the development process based on preliminary experiments. In this section, we more systematically study the effect of mini-model depth on efficiency and performance. To that end, we build models with the same architecture as MINIJOINT, but placing the secondary MLM head after layers 2, 6, 10, or 12. We experiment with Arabic, German and Turkish due to compute constraints. Figure 4 shows the XNLI training curve averaged over 3 languages. We see more rapid adaptation with shallower attachment of the second head, at a cost to final performance. §A.3 shows curves for PAWS-X, MLQA, and XQuAD. For PAWS-X, high performance was rapidly achieved by all models. End-of-training results are in Table A3. Table 4 reports estimated V100 days to achieve near-maximal performance as defined in §4.2, and upper and lower estimates are in §A.3.
We find that the optimal depth of the mini-model is largely language-dependent. Specifically, Arabic and Turkish never hit the target performance with 2 layers, whereas German does so quickly. For Arabic, 4 layers provides the most rapid adaptation, while Turkish requires at least 6. This suggests that it is critical to have some minimum number of layers to achieve good performance, which varies from language to language. But, as long as this minimum is met, shallower mini-models are generally more efficient. ![5_image_1.png](5_image_1.png) Figure 4: XNLI training curve for MINIJOINT **with** secondary head attached at varying layers. Results are averaged over Arabic, German and Turkish. Final performance is in Table A3. ## 5.3 English Performance While all of our results so far correspond to the target languages, we next look into the source language performance. As described in §2.2, MINIPOST uses BL_BASE as the primary model, so their English performance is exactly the same. However, MINIJOINT jointly pretrains the primary model and its aligned mini-model. To understand the effect of the joint pretraining on the monolingual quality of the model, we compare the full MINIJOINT model and its corresponding minimodel with BL_BASE and BL_SMALL. As shown in Table 5, we find that dual-head training does not | Layer: | 2 | 4 | 6 | 8 | 10 | 12 | | |----------|------|-----|-----|-----|------|------|-----| | XNLI | 0.4 | 0.5 | 0.7 | 1.0 | 1.1 | 1.4 | | | de | MLQA | 0.7 | 0.6 | 0.6 | 1.1 | 1.0 | 1.4 | | PAWS | 0.2 | 0.2 | 0.4 | 0.5 | 0.7 | 1.0 | | | XNLI | ∞ | 1.0 | 0.9 | 1.3 | 1.8 | 2.4 | | | ar | MLQA | ∞ | 1.0 | 1.1 | 2.2 | 2.1 | 2.5 | | tr | XNLI | ∞ | 5.3 | 1.5 | 1.5 | 1.5 | 1.6 | | BL_BASE | 86.4 | |------------------------|--------| | BL_SMALL | 79.6 | | MINIJOINT (full) | 86.2 | | MINIJOINT (mini-model) | 79.2 | Table 5: **English XNLI accuracy.** §5.3 for details. damage performance: the full MINIJOINT model performs on-par with the 12-layer baseline, and the 4-layer extracted mini-model performs on-par with the 4-layer baseline. ## 5.4 Variance Across Languages While we obtain strong results across the board, there are 3 languages that prove challenging: Hindi, Turkish and Urdu. As shown in Table 3, MINIJOINT takes more than 5 V100 days to achieve near-maximal performance on XNLI for these languages, whereas the rest of the languages require at most 1 day. As seen in §5.2 this can be mitigated by using a deeper mini-model in the case of Turkish. However, we observe that even BL_BASE struggles with Urdu and, to a lesser extent, Hindi. This suggests that there is something making these languages particularly challenging for cross-lingual adaptation, affecting not only our method but also the standard approach from Artetxe et al. (2020). One hypothesis is that this is due to the high linguistic distance between these languages and English. In Table 1, these are the languages that are the most syntactically distant from English according to *lang2vec*, 8and the only ones with a pure SOV word order. This is also consistent with German, Spanish and French—the 3 languages that are the closest to English—generally obtaining the fastest adaptation times. In the future, we would like to explore starting with a multilingual model covering a few diverse languages akin to Pfeiffer 8https://github.com/antonisa/lang2vec et al. (2022), which could facilitate adapting to languages that are distant from English but might share features with some of the other languages. 
Another potential factor is that Hindi, Turkish and Urdu, along with Swahili, have the smallest training corpora. However, despite having the smallest training corpus with only 1.7GB—∼1/3 the size of Urdu and ∼1/12 of Hindi and Turkish— Swahili exceeds the aforementioned three on both adaptation speed and raw performance on XNLI. Exploring the impact of corpus size was outside of the scope of this work, but we believe that this is an interesting question to address in future work. ## 6 Related Work Multilinguality in NLP. One way to create a LM for a particular language is to collect enough data and train from scratch (e.g. Martin et al., 2020; de Vries et al., 2019; Chan et al., 2020). For the majority of languages, however, not enough data exists to train a high-quality model from scratch. Alternatively, one may pretrain a multilingual model on unlabeled data from many languages, which can then be finetuned on labeled data for zero-shot cross-lingual transfer (e.g. Devlin et al., 2019; Conneau and Lample, 2019; Conneau et al., 2020). Multilingual LMs are not without challenges; they are large and expensive to train, suffer from the curse of multilinguality, low-resource language performance can lag due to underrepresentation in the training corpus, and they cannot benefit from language-specific tokenization (Conneau and Lample, 2019; Wu and Dredze, 2020; Rust et al., 2021; Doddapaneni et al., 2021, for a survey). Furthermore, not all languages are created alike in multilingual models; Muller et al. (2021) find that some "easy" languages perform well in mBERT out-ofthe-box and others are successfully after finetuning with monolingual data, some "hard" languages perform poorly in mBERT even after tuning. Alternatively, one may adapt a pretrained model by finetuning, adding language- or domain-specific adapters (e.g. Rebuffi et al., 2017; Houlsby et al., 2019; Pfeiffer et al., 2022), retraining the lexical embedding layer (Tran, 2020; Artetxe et al., 2020; de Vries and Nissim, 2021), or translating the train, finetuning, or test set (e.g. Wang et al., 2022). Efficient Adaptation of Language Models. Adapters are a **parameter-efficient** way to extend LMs by training a small number of parameters that can be swapped-in for on-the-fly adaptation at test time as opposed to needing to store full separate models per task or language. Pfeiffer et al. (2020) train small stackable language- and taskspecific adapters with respect to a frozen transformer body that is shared between all languages and tasks, allowing simple and quick cross-lingual transfer at test-time. Bapna and Firat (2019) inject adapter layers into a neural machine translation (NMT) model for domain adaptation to obviate the need for full-model finetuning, and use languagespecific adapters for high-resource languages to recover from catastrophic forgetting during multilingual NMT training. Alabi et al. (2022) argue that their finetuned mBERT for 17 African languages is parameter efficient because they maintain highperformance with a single model rather than requiring separate models per language. Like Abdaoui et al. (2020), they reduce model size by removing vocabulary tokens not needed for target languages. LoRa adds small trainable matrices corresponding to low-rank decompositions of a weight updates within transformer attention, allowing rapid updates during finetuning (Hu et al., 2022). Prefixtuning methods are also parameter-efficient (Li and Liang, 2021; Liu et al., 2021). 
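To illustrate the low-rank update idea mentioned above, the sketch below wraps a frozen linear layer with a trainable LoRA-style term; the rank, scaling and initialization here are illustrative choices for this sketch rather than a faithful reproduction of Hu et al. (2022).

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA-style layer: a frozen dense weight plus a
    trainable low-rank update, y = W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # original weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # frozen projection plus the trainable low-rank correction
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T
```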
Compute-efficient methods aim reduce the computation (FLOPs or wall-time) required to train a model. Several authors developed vocabulary adaptation methods which reduce the need to extensively finetune a model or train from scratch (e.g. Chronopoulou et al., 2020; Sachidananda et al., 2021). Though Wang et al. (2020) continued-train mBERT with an extended vocabulary for a new language, convergence is faster than with a bilingual BERT model trained from scratch. Kocmi and Bojar (2020)'s vocabulary adaptation method improves time-to-convergence of a NMT system adapted to a new language. While de Vries and Nissim (2021) learn a new lexical embedding layer on top of GPT-2, which is computationally expensive, they employ engineering strategies to decrease training time, such as 16-bit mixed precision training, reduced window size, and maximum batch size with gradient accumulation. Though they must backpropogate through the entire model during embedding layer relearning, training stabilizes quickly. They adapt larger models by initializing the embedding layer using transformations of embeddings developed on smaller models, noting that the better initialization speeds training. Variance across languages. Prior work observes similar variation between languages in LM adaptation. When adapting BERT, Tran (2020) see that Hindi showed the slowest growth and lowest final XNLI score of six assessed languages, acknowledging word-order differences. Several authors see performance lags on NLP benchmarks for SOV languages when probing large multilingual models (Doddapaneni et al., 2021, for a review). Pires et al. (2019) find that zero-shot part-of-speech tagging is best when the model has been finetuned on a language that shares word order with the target language. Limisiewicz et al. (2020) attribute the disparity to underrepresentation of SOV languages in the training corpus. ## 7 Conclusion And Future Work Our work shows that it is possible to extend pretrained models to new languages using only a fraction of their parameters. We achieve this by learning a new embedding layer over a shallow *minimodel* aligned with the primary model. We explore two approaches to learn mini-models: MINIJOINT augments a transformer with a second MLM head during pretraining, adapting with an average 2.3x speedup over the standard method from Artetxe et al. (2020), and MINIPOST builds a mini-model by extracting a small number of layers from a pretrained model, providing an average 1.6x speedup. Our analysis reveals that shallower mini-models converge faster but plateau at lower performance. As such, one might explore combining multiple mini-models of different depths, using the shallowest at the beginning of cross-lingual adaptation, and then deeper ones as training progresses. One could add multiple MLM heads to a MINIJOINT model and train all simultaneously to facilitate this. We would also like to explore applications of mini-model adaptation beyond the multilingual scenario. In particular, by adapting rapidly on models significantly smaller than the base model used for inference, MINIJOINT/MINIPOST might be used to finetune large LMs on modest hardware. This could allow for a new paradigm whereby one shares a small model for adaptation while keeping a large aligned model private behind an API. Clients could then learn parameters for their task on the small model, which are later plugged into the large model for better performance. Shortly after us, Xiao et al. 
(2023) proposed *Offsite-Tuning*, an adaptation method similar to ours but motivated by privacy. ## Limitations Our study is limited to the adaptation of MLMs to new languages. While we believe that our proposed approach could also be applied more broadly (e.g., autoregressive models instead of MLMs, or adapting to new downstream tasks instead of new languages), further experiments are necessary to empirically verify this. In addition, we observe a considerable variance across languages (§5.4), the reasons for which are not entirely clear. Ideally, we would have a broader set of languages to better study this, as our language set is limited and skewed towards the Indo-European family. Finally, we average results over 5 finetuning runs, but computational restrictions prevented us from also averaging over multiple pretraining runs. As discussed in §A.5, we observed a non-negligible variance over pretraining runs in a preliminary experiment, but a more systematic exploration is necessary to better understand its impact. ## Acknowledgements The authors would like to thank Patrick Littell for helpful discussions about *lang2vec*, along with Philipp Koehn, Elina Baral, Sophia Hager, and Mathias Unberath for helpful discussions and feedback. ## References Amine Abdaoui, Camille Pradel, and Grégoire Sigel. 2020. Load what you need: Smaller versions of mutililingual BERT. In *Proceedings of SustaiNLP:* Workshop on Simple and Efficient Natural Language Processing, pages 119–123, Online. Association for Computational Linguistics. Jesujoba O. Alabi, David Ifeoluwa Adelani, Marius Mosbach, and Dietrich Klakow. 2022. Adapting pretrained language models to African languages via multilingual adaptive fine-tuning. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4336–4349, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623–4637, Online. Association for Computational Linguistics. Ankur Bapna and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1538– 1548, Hong Kong, China. Association for Computational Linguistics. Branden Chan, Stefan Schweter, and Timo Möller. 2020. German's next language model. In *Proceedings of* the 28th International Conference on Computational Linguistics, pages 6788–6796, Barcelona, Spain (Online). International Committee on Computational Linguistics. Alexandra Chronopoulou, Dario Stojanovski, and Alexander Fraser. 2020. Reusing a Pretrained Language Model on Languages with Limited Corpora for Unsupervised NMT. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2703–2711, Online. Association for Computational Linguistics. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440– 8451, Online. Association for Computational Linguistics. 
Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. *Advances in* neural information processing systems, 32. Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating crosslingual sentence representations. In *Proceedings of* the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475–2485, Brussels, Belgium. Association for Computational Linguistics. Wietse de Vries and Malvina Nissim. 2021. As good as new. how to successfully recycle English GPT-2 to make models for other languages. In Findings of the Association for Computational Linguistics: ACLIJCNLP 2021, pages 836–846, Online. Association for Computational Linguistics. Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli, Gertjan van Noord, and Malvina Nissim. 2019. Bertje: A dutch bert model. arXiv preprint arXiv:1912.09582. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Sumanth Doddapaneni, Gowtham Ramesh, Anoop Kunchukuttan, Pratyush Kumar, and Mitesh M Khapra. 2021. A primer on pretrained multilingual language models. *arXiv preprint arXiv:2107.00676*. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In *International Conference on Machine Learning*, pages 2790–2799. PMLR. Edward J Hu, yelong shen, Phillip Wallis, Zeyuan AllenZhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations. Tom Kocmi and Ondˇrej Bojar. 2020. Efficiently reusing old models across languages via transfer learning. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 19–28, Lisboa, Portugal. European Association for Machine Translation. Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics. Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2020. MLQA: Evaluating cross-lingual extractive question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7315– 7330, Online. Association for Computational Linguistics. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582– 4597, Online. Association for Computational Linguistics. Tomasz Limisiewicz, David Marecek, and Rudolf Rosa. ˇ 2020. Universal Dependencies According to BERT: Both More Specific and More General. 
In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 2710–2722, Online. Association for Computational Linguistics. Patrick Littell, David R Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. Uriel and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 8–14. Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021. Gpt understands, too. *arXiv preprint arXiv:2103.10385*. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric de la Clergerie, Djamé Seddah, and Benoît Sagot. 2020. CamemBERT: a tasty French language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7203– 7219, Online. Association for Computational Linguistics. Meryem M'hamdi, Xiang Ren, and Jonathan May. 2022. Cross-lingual lifelong learning. Benjamin Muller, Antonios Anastasopoulos, Benoît Sagot, and Djamé Seddah. 2021. When being unseen from mBERT is just the beginning: Handling new languages with multilingual language models. In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 448–462, Online. Association for Computational Linguistics. Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, Amar Phanishayee, and Matei Zaharia. 2021. Efficient large-scale language model training on gpu clusters using megatronlm. In *Proceedings of the International Conference* for High Performance Computing, Networking, Storage and Analysis, SC '21, New York, NY, USA. Association for Computing Machinery. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*, pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe. 2022. Lifting the curse of multilinguality by pre-training modular transformers. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3479–3495, Seattle, United States. Association for Computational Linguistics. Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Se- ´ bastian Ruder. 2020. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7654–7673, Online. Association for Computational Linguistics. Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Sebas- ´ tian Ruder. 2021. UNKs everywhere: Adapting multilingual language models to new scripts. 
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10186–10203, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996–5001, Florence, Italy. Association for Computational Linguistics. Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2017. Learning multiple visual domains with residual adapters. *Advances in neural information processing systems*, 30. Phillip Rust, Jonas Pfeiffer, Ivan Vulic, Sebastian Ruder, ´ and Iryna Gurevych. 2021. How good is your tokenizer? on the monolingual performance of multilingual language models. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3118–3135, Online. Association for Computational Linguistics. Vin Sachidananda, Jason Kessler, and Yi-An Lai. 2021. Efficient domain adaptation of language models via adaptive tokenization. In *Proceedings of the Second* Workshop on Simple and Efficient Natural Language Processing, pages 155–165, Virtual. Association for Computational Linguistics. Ke Tran. 2020. From english to foreign languages: Transferring pre-trained language models. *arXiv* preprint arXiv:2002.07306. Xinyi Wang, Sebastian Ruder, and Graham Neubig. 2022. Expanding pretrained models to thousands more languages via lexicon-based adaptation. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 863–877, Dublin, Ireland. Association for Computational Linguistics. Zihan Wang, Karthikeyan K, Stephen Mayhew, and Dan Roth. 2020. Extending multilingual BERT to lowresource languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2649–2656, Online. Association for Computational Linguistics. Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual BERT? In *Proceedings* of the 5th Workshop on Representation Learning for NLP, pages 120–130, Online. Association for Computational Linguistics. Guangxuan Xiao, Ji Lin, and Song Han. 2023. Offsitetuning: Transfer learning without full model. arXiv preprint arXiv:2302.04870. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics. Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A cross-lingual adversarial dataset for paraphrase identification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3687–3692, Hong Kong, China. Association for Computational Linguistics. ## A Appendix A.1 Floating Point Operations (Flops) We estimate total FLOPs for training using the formula from Narayanan et al. (2021), amended for RoBERTa without activation recomputation. Like the authors, we omit calculations over biases, activation functions, softmax, and other minor costs. 
Assume hidden size h, vocabulary size V , number of layers l, token mask probability p, sequence length s, batch size B, and total training updates U, the total FLOPs during training are: $$72U B s l h^{2}(1+\frac{s}{6h}+\frac{p}{12l}+\frac{p V}{12h l})\qquad\mathrm{(A1)}$$ Derivation Recall that multiplying A ∈ R m×n by B ∈ R n×prequires 2mnp FLOPs. Each transformer layer consists of a multi-head self-attention block and a linear projection. The attention block has four weight matrices Wq, Wk, Wv, Wo ∈ R h×h. 9 The input x ∈ R s×his projected with Wq, Wk and Wv, requiring 2sh2 FLOPs each: $$Q=x W_{q}\qquad K=x W_{k}\qquad V=x W_{v}$$ Self-attention followed by output projection is: $$(\mathrm{softmax}(\frac{Q K^{T}}{\sqrt{h}})V)W_{O}$$ Multiplying QKTand multiplying the result by V both require 2hs2 FLOPs. Multiplying with WO costs 2sh2 FLOPs. In sum, there are 8sh2 + 4hs2 FLOPs to compute the forward pass of the attention block. The output of the attention block (x ∈ R s×h) is then passed through two linear layers: F0 ∈ R h×4hand F1 ∈ R 4h×h. These multiplications cost 8sh2 FLOPs each, so total FLOPs per layer is: $$\mathrm{FLOP_{layer}}=24s h^{2}+4h s^{2}$$ The output x ∈ R s×h passes through the MLM head: a dense layer of size R h×hfor 2sh2 FLOPs, and an output projection of size R h×Vthat costs: $$\mathrm{FLO}_{\mathrm{output}}=2s h V$$ Only masked tokens are passed through MLM head, so the total flops in the LM head is FLOPlm = p(2sh2 + 2shV ) In sum, the total estimated FLOPs for a forward pass of RoBERTa with a batch size of 1 is: $$\begin{array}{c}{{l(\mathrm{FLOP_{layer}})+\mathrm{FLOP_{lm}}}}\\ {{=l(24s h^{2}+4h s^{2})+p(2s h^{2}+2s h V)}}\end{array}$$ (A2) To account for the backward pass, one typically triples the forward pass FLOPs. This is because (1) to backpropogate the error, one calculates the partial derivatives of the loss with respect to the input (activations): ∂δ ∂a , and (2) to make a weight update, one first must calculate the partial derivatives with respect to the weights: ∂δ ∂w . Calculating each partial derivative requires the same number of FLOPs as the forward pass, meaning that the backward pass is doubly as expensive.10 Tripling Equation A2 to account for the backward pass, multiplying by batch size and total updates, and reducing gives Equation A1 for full pretraining. Adaptation requires an amended equation for the backward pass because layers are frozen (Step 2: Ltrg embedding training). The trainable embeddings are tied to the output projection layer in the MLM head: thus, trainable input embeddings are passed through frozen layers, which passes through the MLM head consisting of a frozen dense layer and *trainable* output projection. To backpropogate the error to the embeddings, we must (1) calculate ∂δ ∂a for the entire model, requiring the same number of FLOPs as the forward pass.11 Because the MLM head's output projection layer is also trainable, we also calculate ∂δ ∂w here on the backward pass. In total, this gives the below equation for Step 2, after multiplying for batch size and total updates: $$U B(2l\mathrm{FLOP}_{\mathrm{layer}}+2\mathrm{FLOP}_{\mathrm{lm}}+p\mathrm{FLOP}_{\mathrm{outputj}})$$ $$=48U B s l h^{2}(1+{\frac{s}{6h}}+{\frac{p}{12l}}+{\frac{p V}{8h l}})$$ (A3) Thus, adaptation with 4 layers requires ∼21.1 EFLOPs versus ∼29.3 EFLOPs during pretraining. For 12 layers, adaptation requires ∼54.1 EFLOPs versus ∼78.8 in pretraining. 
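The arithmetic above can be checked with a short script. The following sketch evaluates Equations A1 and A3 under the Section 3 settings (125,000 updates, batch size 2048, sequence length 512, vocabulary size 50,000); the RoBERTa-base hidden size of 768 and a mask probability of 0.15 are assumptions on our part, and with them the script reproduces the ∼21.1, ∼29.3, ∼54.1 and ∼78.8 EFLOPs figures quoted above. V100 days use the 1 day = 2.592 EFLOPs rate from §3.

```python
def pretraining_eflops(U, B, s, l, h, V, p=0.15):
    """Eq. A1: total training FLOPs for MLM pretraining, in exaFLOPs."""
    return 72 * U * B * s * l * h**2 * (
        1 + s / (6 * h) + p / (12 * l) + p * V / (12 * h * l)) / 1e18

def adaptation_eflops(U, B, s, l, h, V, p=0.15):
    """Eq. A3: FLOPs for Step 2 (frozen body, trainable tied embeddings)."""
    return 48 * U * B * s * l * h**2 * (
        1 + s / (6 * h) + p / (12 * l) + p * V / (8 * h * l)) / 1e18

def v100_days(eflops):
    """1 V100 day ≈ 2.592 EFLOPs at the assumed 30 TFLOP/s throughput."""
    return eflops / 2.592

# Section 3 settings; h=768 and p=0.15 are assumptions (RoBERTa-base defaults).
cfg = dict(U=125_000, B=2048, s=512, h=768, V=50_000)
print(adaptation_eflops(l=4, **cfg))    # ~21.1 EFLOPs (4-layer mini-model)
print(pretraining_eflops(l=4, **cfg))   # ~29.3 EFLOPs
print(adaptation_eflops(l=12, **cfg))   # ~54.1 EFLOPs (standard adaptation)
print(pretraining_eflops(l=12, **cfg))  # ~78.8 EFLOPs (12-layer pretraining)
print(v100_days(adaptation_eflops(l=12, **cfg)))  # ~20.9 V100 days
```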
MINIPOST **FLOPs in Step 1b** Step 1b of MINIPOST builds small mini-model with embeddings and first lf layers frozen. These frozen layers do not require the backward pass. Furthermore, the frozen LM head does not require calculating ∂δ ∂w , only ∂δ ∂a . Of the trainable layers, each require both ∂δ ∂a and ∂δ ∂w , except the first trainable layer which only needs ∂δ ∂w (because it does not pass back the error). Given trainable layers lt, the total cost for creating the mini-model in MINIPOST is: = *UB(l*(FLOPlayer) + 2FLOPlm + (2lt − 1)FLOPlayer) $$=UB((l+2l_{t}-1)\text{FLOP}_{\text{layer}}+2\text{FLOP}_{\text{lm}})$$ $$=UB((l+2l_{t}-1)(24sh^{2}+4hs^{2})+4psh^{2}+4pshV)$$ (A4) Concretely, the cost of training a 6-layer minimodel in this work is ∼21.6 EFLOPs. In comparison, pretraining the vanilla 12-layer RoBERTa base model requires ∼78.8 EFLOPs. | XQuAD | | | | | | | | | | | |-----------|---------------------------------------------|----|----|----|----|----|----|----|----|-----| | ar | de | el | es | hi | ru | th | tr | vi | zh | avg | | BL_BASE | 2.3 1.1 1.7 0.8 3.3 1.9 2.1 2.2 1.2 1.5 1.8 | | | | | | | | | | | MINIPOST | 1.4 0.8 1.0 0.4 1.3 1.3 1.4 1.2 0.8 1.0 1.1 | | | | | | | | | | | MINIJOINT | 0.8 0.4 0.6 0.5 3.1 0.6 0.6 ∞ 0.5 0.6 0.9* | | | | | | | | | | ## A.2 Xquad The Cross-lingual Question Answering Dataset (XQuAD; Artetxe et al., 2020) covers a more extensive set of languages than MLQA. We evaluate the same models tuned for QA in the main body of the paper on XQuAD. Final F1 and V100 days to achieve near-maximal performance are in Tables A1 and A2. We also show the growth curve for F1 through the first V100-week in Figure A1. Table A1: **Estimated V100 training days to achieve** near-maximal performance (see §4.2) on XQuAD. ∞: never hit target performance. BL_SMALL never achieves near-maximal performance. ∗excludes Turkish, which never hit near-maximal performance. ![12_image_0.png](12_image_0.png) ## A.3 Mini-Model Depth: Mlqa, Paws-X, And Xquad We extend the results of §5.2 to MLQA, PAWSX, and XQuAD, shown in Figure A2. Figure A3 shows training curves for the particularly challenging language of Turkish on XNLI and XQuAD. Table A3 shows performance at training completion. Table A2: **XQuAD performance at training completion.** Both variants of our approach nearly match the performance of BL_BASE at a substantially lower cost, while BL_SMALL significantly lags behind. | Train cost | XQuAD (acc) | | | | | | | | | | | | | |--------------|---------------|------|------|------|------|------|------|------|------|------|------|-----------|-----------| | EFLOPs | days | ar | de | el | es | hi | ru | th | tr | vi | zh | avg | | | Standard | BL_BASE | 54.1 | 20.9 | 52.4 | 69.2 | 70.0 | 74.8 | 53.8 | 68.7 | 57.1 | 57.2 | 65.9 | 53.7 62.3 | | BL_SMALL | 21.1 | 8.1 | 48.2 | 61.2 | 63.3 | 65.8 | 46.2 | 61.0 | 50.4 | 52.1 | 60.3 | 46.2 55.5 | | | Proposed | MINIPOST | 29.3 | 11.3 | 52.4 | 68.8 | 69.9 | 75.1 | 55.3 | 68.2 | 56.9 | 58.6 | 65.7 | 53.5 62.4 | | MINIJOINT | 21.1 | 8.1 | 53.8 | 70.1 | 69.9 | 73.7 | 51.8 | 68.6 | 56.5 | 53.2 | 64.9 | 52.5 61.5 | | | Figure: | Fig. 
| N = | Fig. 4 | A2(a) | A2(b) | A2(c) | A3(a) | A3(b) |
|---------|--------|-------|-------|-------|-------|-------|
| 2 | 66.2 | 51.5 | 83.8 | 54.1 | 58.3 | 46.8 |
| 4 | 69.3 | 55.4 | 84.4 | 59.0 | 63.7 | 53.2 |
| 6 | 70.3 | 56.2 | 83.7 | 59.8 | 65.6 | 56.0 |
| 8 | 70.6 | 54.8 | 83.3 | 58.6 | 66.7 | 55.0 |
| 10 | 71.4 | 55.8 | 82.9 | 58.8 | 67.9 | 53.2 |
| 12 | 71.1 | 56.0 | 81.5 | 60.3 | 68.5 | 57.4 |
| BL_Base | 71.2 | 56.1 | 84.7 | 59.6 | 67.1 | 57.2 |

[Figure A2 (mini-model depth results on MLQA, PAWS-X, and XQuAD)]

[Figure A3 (training curves for Turkish on XNLI and XQuAD)]

| Language, Task | Layers: 2 | 4 | 6 | 8 | 10 | 12 |
|----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| de, XNLI | 0.4 (0.2 - 0.4) | 0.5 (0.3 - 0.7) | 0.7 (0.5 - 0.9) | 1.0 (0.6 - 1.2) | 1.1 (0.7 - 1.4) | 1.4 (0.8 - 1.7) |
| de, XQUAD | 0.4 (0.2 - 0.4) | 0.4 (0.3 - 0.7) | 0.6 (0.5 - 0.9) | 0.9 (0.6 - 1.2) | 0.7 (0.7 - 1.4) | 1.3 (0.8 - 1.7) |
| de, MLQA | 0.7 (0.6 - 0.8) | 0.6 (0.3 - 0.7) | 0.6 (0.5 - 0.9) | 1.1 (0.6 - 1.2) | 1.0 (0.7 - 1.4) | 1.4 (0.8 - 1.7) |
| de, PAWS | 0.2 (0.0 - 0.2) | 0.2 (0.2 - 0.3) | 0.4 (0.0 - 0.5) | 0.5 (0.0 - 0.6) | 0.7 (0.0 - 0.7) | 1.0 (0.8 - 1.7) |
| ar, XNLI | ∞ | 1.0 (0.7 - 1.0) | 0.9 (0.9 - 1.4) | 1.3 (1.2 - 1.7) | 1.8 (1.4 - 2.1) | 2.4 (1.7 - 2.5) |
| ar, XQUAD | ∞ | 0.8 (0.7 - 1.0) | 0.9 (0.5 - 0.9) | 1.6 (1.2 - 1.7) | 1.8 (1.4 - 2.1) | 2.4 (1.7 - 2.5) |
| ar, MLQA | ∞ | 1.0 (1.0 - 1.3) | 1.1 (0.9 - 1.4) | 2.2 (1.7 - 2.3) | 2.1 (1.4 - 2.1) | 2.5 (1.7 - 2.5) |
| tr, XNLI | ∞ | 5.3 (5.2 - 5.5) | 1.5 (1.4 - 1.8) | 1.5 (1.2 - 1.7) | 1.5 (1.4 - 2.1) | 1.6 (0.8 - 1.7) |
| tr, XQUAD | ∞ | ∞ | 1.3 (0.9 - 1.4) | 3.0 (2.9 - 3.5) | ∞ | 2.7 (2.5 - 3.3) |

## A.4 Upper/Lower Estimates On Time To Near-Maximal Performance

In §4.2, we use linear interpolation to estimate GPU days to near-maximal performance if target performance occurred between checkpoints. In Table A4, we show the upper and lower estimates. Models are checkpointed every 5000 updates, so a lower estimate of 0.0 implies that the target score was achieved before the first checkpoint. Because MINIJOINT with the secondary head attached at layer 4 was part of the main experiments, it was also checkpointed on steps 1000, 2000, 3000, and 4000. As such, estimates lower than 0.3 from this model imply that the target score was hit before step 5000 (the first checkpoint for other models).

## A.5 Variance Across Pretraining Runs

While we average results over 5 finetuning runs, we always use the same pretrained model. Early in development, we noticed that there could be a difference between different pretraining runs. While it was not feasible to repeat all experiments with different pretraining seeds due to computational cost, we performed 3 additional runs of BL_BASE for Arabic. We see a difference of up to 3 points across runs in Table A5. This is task dependent, as the best run on XNLI is the worst on MLQA.

| | XNLI | MLQA |
|----------|------|------|
| Run #1 | 67.7 | 51.4 |
| Run #2 | 69.3 | 51.1 |
| Run #3 | 68.7 | 52.7 |
| Main run | 69.6 | 49.6 |

Table A5: **Arabic development performance for** BL_BASE **with different pretraining seeds.** Results averaged over 5 finetuning runs.

## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations ✓ A2. Did you discuss any potential risks of your work? Limitations ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract/Intro ✗ A4.
Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✓ **Did You Run Computational Experiments?** Sections 4, 5, Appendix ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? V100 GPU days are reported throughout. We cite the base architecture and report architectural changes. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 3 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Sections 4 & 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 3 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. 
Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
lv-etal-2023-dsp
{DSP}: Discriminative Soft Prompts for Zero-Shot Entity and Relation Extraction
https://aclanthology.org/2023.findings-acl.339
Prompt-based methods have shown their efficacy in transferring general knowledge within pre-trained language models (PLMs) for low-resource scenarios. Typically, prompt-based methods convert downstream tasks to cloze-style problems and map all labels to verbalizers. However, when applied to zero-shot entity and relation extraction, vanilla prompt-based methods may struggle with the limited coverage of verbalizers to labels and the slow inference speed. In this work, we propose a novel Discriminative Soft Prompts (DSP) approach to take advantage of the prompt-based methods to strengthen the transmission of general knowledge. Specifically, we develop a discriminative prompt method, which reformulates zero-shot tasks into token discrimination tasks without having to construct verbalizers. Furthermore, to improve the inference speed of the prompt-based methods, we design a soft prompt co-reference strategy, which leverages soft prompts to approximately refer to the vector representation of text tokens. The experimental results show that our model outperforms baselines on two zero-shot entity recognition datasets with higher inference speed, and obtains a 7.5% average relation F1-score improvement over previous state-of-the-art models on Wiki-ZSL and FewRel.
# Dsp: Discriminative Soft Prompts For Zero-Shot Entity And Relation Extraction Bo Lv1,2,3**, Xin Liu**2∗ , Shaojie Dai1,2,3 Nayu Liu1,2, Fan Yang2,3, Ping Luo 1,2,3∗ **and Yue Yu**2∗ 1Key Laboratory of Intelligent Information Processing Institute of Computing Technology, Chinese Academy of Sciences 2Peng Cheng Laboratory, 3University of Chinese Academy of Sciences {lvbo19,daishaojie22,liunayu18,yang22}@mails.ucas.ac.cn [email protected], [email protected], [email protected] ## Abstract Prompt-based methods have shown their efficacy in transferring general knowledge within pre-trained language models (PLMs) for lowresource scenarios. Typically, prompt-based methods convert downstream tasks to clozestyle problems and map all labels to verbalizers. However, when applied to zero-shot entity and relation extraction, vanilla promptbased methods may struggle with the limited coverage of verbalizers to labels and the slow inference speed. In this work, we propose a novel Discriminative Soft Prompts (DSP) approach to take advantage of the prompt-based methods to strengthen the transmission of general knowledge. Specifically, we develop a discriminative prompt method, which reformulates zero-shot tasks into token discrimination tasks without having to construct verbalizers. Furthermore, to improve the inference speed of the prompt-based methods, we design a soft prompt co-reference strategy, which leverages soft prompts to approximately refer to the vector representation of text tokens. The experimental results demonstrate that, our model outperforms baselines on two zero-shot entity recognition datasets with higher inference speed, and obtains a 7.5% average relation F1score improvement over previous state-of-theart models on Wiki-ZSL and FewRel. ## 1 Introduction Zero-shot entity and relation extraction (Levy et al., 2017; Chen and Li, 2021) aim to extract novel entities and their relations by transferring semantic knowledge from seen classes to unseen ones. It is a fundamental problem in information extraction, which can be decomposed into two subtasks: zero-shot named entity recognition (ZSNER) (Li et al., 2020, 2022) and zero-shot relation extraction (ZSRE) (Sainz et al., 2021). Recent works (Li et al., 2020; Chen and Li, 2021) focus on fine-tuning ∗Corresponding author. ![0_image_0.png](0_image_0.png) PLMs with extra classifiers to leverage the rich lexical, syntactic, and factual knowledge (Petroni et al., 2019) within PLMs to compensate for the lack of domain knowledge in the task training. However, the significant objective gap between pre-training and fine-tuning may drive the parameters of the PLMs away from their initial values, resulting in a substantial loss of general knowledge. Recent efforts (Ding et al., 2021) on probing knowledge have demonstrated that formalizing downstream tasks in the same form as pre-training is an efficient way to enhance the transmission of general knowledge. Inspired by this, prompt-based learning (Schick and Schütze, 2021) that reformulates downstream tasks as cloze questions has been introduced. Typically, for the entity type classification task, a template is used to convert [X] into a cloze task (e.g.,"[X] E is a [MASK] entity."), where [X] is the placeholder for input sentences, and E is a candidate entity to be classified. 
The PLMs, such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), are asked to infer the words to fill in [MASK], and the words are further mapped to corresponding labels through a verbalizer (e.g., "people" for label "PERSON"). However, two issues impede the application of cloze-style prompts to the zero-shot entity and relation extraction task as follows: (1) we can see from Figure 1(a) that multiple words or phrases can represent the same class. Building a verbalizer that can cover all candidate words comprehensively is challenging in a zero-shot setting, and a poorly designed verbalizer can limit the accuracy of predictions. (2) for the entity recognition task, the n-grams method needs to be employed for enumeration when generating candidate entities, resulting in a serious efficiency issue. Figure 1(b) shows that prompt methods need to run n(n + 1)/2 times to recognize all entities in the sentence of length n during inference, which is unacceptable in realworld applications. We argue that the primary reason for these limitations is that the existing prompt-based methods imitate masked language model (MLM), which needs to map the labels to verbalizers. Unlike MLM, the token discrimination task of discriminative pre-trained models (DLM) appears to be more compatible with zero-shot entity and relation extraction. In this work, we introduce a Discriminative Soft Prompts approach, which utilizes prompt discriminative language models (e.g., ELECTRA (Clark et al., 2020) ) to address the general knowledge forgetting problem caused by modifying the structure of PLMs. Specifically, we present a discriminative prompt strategy, which leverages the label information to construct a template to convert input sentences into a discriminative language modeling problem. As shown in Figure 1(b), our discriminative prompt method recognizes candidate entities by classifying entity type into binary categories (i.e., original, replaced), thereby bridging the gap between pre-training and task-training without the need for verbalizers. Furthermore, we design a soft prompt co-reference strategy, which leverages soft prompts to approximately refer to the vector representation of text tokens. By classifying soft prompts into binary categories, the inference speed of our model has significant improvement. Especially for entity recognition task, it only needs to run the model once to extract all entities of the same type in the sentence. Extensive experiments are conducted on two zero-shot tasks, ZSNER and ZSRE. Specifically, our method advances the state-of-the-art SMXM(Aly et al., 2021) model on two ZSNER datasets and gains a 7.5% average relation F1-score improvement over the previous best model on WikiZSL and FewRel. Moreover, the inference speed of the DSP is up to 120 times faster than the clozestyle prompt method on ZSNER. Our main contributions are summarized as follows: - We reformulate ZSNER and ZSRE as token discrimination tasks, taking advantage of the prompt method to strengthen the transmission of general knowledge without having to construct a verbalizer. - We propose a soft prompt co-reference strategy, which significantly improves the inference efficiency of the discriminative prompt method for zero-shot entity and relation extraction tasks. - Experiments on four datasets demonstrate the effectiveness of our model in both ZSNER and ZSRE tasks. Moreover, the inference speed of the DSP is up to 120 times faster than the cloze-style prompt method on ZSNER. 
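As a concrete illustration of the inference-cost gap discussed above, the short sketch below enumerates the candidate spans of a toy sentence (the quadratic blow-up behind the n(n+1)/2 figure) and contrasts schematic per-sentence forward-pass counts. The counts are illustrative assumptions, not measurements reported in this paper.

```python
def enumerate_spans(tokens):
    """All contiguous candidate spans of a length-n sentence: n(n+1)/2 of them.
    A cloze-style prompt scores one template per candidate span."""
    n = len(tokens)
    return [tokens[i:j + 1] for i in range(n) for j in range(i, n)]


sentence = "Jordan is a basketball star .".split()
spans = enumerate_spans(sentence)
n = len(sentence)
assert len(spans) == n * (n + 1) // 2  # 6 tokens -> 21 candidate spans

# Schematic forward-pass counts per sentence:
cloze_passes = len(spans)  # one cloze template per candidate span
dsp_passes = 18            # one pass per candidate entity type, e.g. the 18 types of OntoNotes-ZS
print(f"cloze-style: {cloze_passes} passes, DSP: {dsp_passes} passes")
```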
## 2 Method

In this section, we first formally define the problem of zero-shot entity and relation extraction. Then we introduce our Discriminative Soft Prompts (DSP) method. The following is a detailed description.

## 2.1 Problem Definition

In zero-shot entity and relation extraction, the goal is to learn from the seen dataset and generalize to the unseen dataset. The seen and unseen label sets are denoted as $C_s = \{c_1, c_2, ..., c_n\}$ and $C_t = \{\hat{c}_1, \hat{c}_2, ..., \hat{c}_m\}$, where $n = |C_s|$ and $m = |C_t|$ are the sizes of the seen and unseen label sets, and $C_s \cap C_t = \emptyset$. Let $S = \{s_1, s_2, ..., s_{\frac{L(L+1)}{2}}\}$ be all the possible spans in sentence $X$ up to length $L$. The problem can be decomposed into two subtasks:

Zero-Shot Named Entity Recognition The task is to predict an entity type $y_e(s_i) \in C_t$, or $y_e(s_i) = \emptyset$ for each span $s_i \in S$, indicating that span $s_i$ is not an entity. The output of the task is $Y_{zsner} = \{(s_i, e) : s_i \in S, e \in C_t\}$, where $Y_{zsner}$ is the set of all $(s_i, e)$ pairs such that span $s_i$ is associated with entity type $e$, with $s_i \in S$ and $e \in C_t$.

Zero-Shot Relation Extraction The task is, for every pair of spans $s_i \in S$ and $s_j \in S$, to predict a relation type $y_r(s_i, s_j) \in C_t$, or to indicate that there is no relation between them with $y_r(s_i, s_j) = \emptyset$. The output of the task is $Y_{zsre} = \{(s_i, s_j, r) : s_i \in S, s_j \in S, r \in C_t\}$.

## 2.2 Discriminative Prompt Strategy

Discriminative pre-trained language models (DLMs) (Yao et al., 2022; Xia et al., 2022) are a compelling alternative to masked pre-trained language models (MLMs) and have shown potential for low-resource scenarios. By casting NLP tasks as a discriminative language modeling problem, discriminative prompt strategies can help bridge the gap between pre-training and task-specific tuning. The input of the DLM is formulated from an input sentence $X$ and a template $T$. As shown in Figure 1(b), for a ZSNER example with the entity type set $C = \{c_1, c_2, ..., c_n\}$, we define a template $T(\cdot, e, c)$ that contains the candidate entity. Given an input text $X$ (e.g., "Jordan is a basketball star."), the DLM fills the input text into the template as follows:

$$T(X,e,c_{i})=X,\ \text{The entity type of}\ e\ \text{is}\ c_{i}\qquad\mathrm{(1)}$$

where $e$ refers to the candidate entity and $c_i$ is the $i$-th entity type belonging to $C$. After template filling, $T$ is fed into the DLM to obtain the hidden representations $h_{[CLS]}, h_1, ..., h_n, ..., h_e, ..., h_{c_i}, h_{[SEP]}$. The model then discriminates whether the entity type $c_i$ is accurate, and the score of $c_i$ is calculated as follows:

$${\mathcal{D}}(T([c_{i}]))=1-\sigma(h_{DLM}^{\top}h_{[c_{i}]})\qquad\mathrm{(2)}$$

where $h_{DLM}$ is the reused classifier of the DLM, and $\sigma(\cdot)$ is the sigmoid activation. The DLM then rounds the output scores into binary categories, i.e., $\{0, 1\}$ corresponding to {replaced, original}. If an entity type name consists of multiple tokens, such as "WORK OF ART", we perform an OR operation on the binary classification results of all tokens. Since the pre-training task of the DLM is similar to this task, it bridges the gap between pre-training and downstream tasks without requiring a verbalizer design.

## 2.2.1 Problems of Prompt-Based Methods for ZSNER

Nevertheless, prompt-based methods have high complexity in solving the ZSNER task.
During the process of inference, the candidate entities $s^i_j$ that denote the span starting from $x_i$ and ending with $x_j$ need to be enumerated in order to obtain all possible spans:

$$s_{j}^{i}=\mathrm{Enumerate}(\{x_{i},...,x_{j}\},\ i,j\in\{1..n\})\qquad\mathrm{(3)}$$

For example, the template of MLM can take the form "X, The entity type of $s^i_j$ is [MASK].", where the cloze-style prompt method predicts an entity label word at [MASK] (e.g., people) corresponding to an entity label (e.g., PERSON). As the sequence length increases, the decoding time also increases, rendering this decoding method time-consuming.

## 2.3 Discriminative Soft Prompts Co-Reference for ZSNER

We propose a soft prompts co-reference strategy to address the slow inference speed of the DLM on ZSNER tasks. The main idea of our method is to perform binary classification twice for each token in the text, to identify whether it is the head or tail token of an entity. For example, given the input $X$ as "Badaling Great Wall is located...", the model performs binary classification twice for each token. This yields a sequence [1,1,0], representing whether the token is the head of an entity, and a sequence [0,0,1], representing whether the token is the tail of an entity. Finally, the head and tail tokens are combined as nearest neighbors to obtain two entity spans, $s^1_3$ ("Badaling Great Wall") and $s^2_3$ ("Great Wall"), respectively. However, the DLM has only one classification layer, and adding a new layer could disrupt the model's structure and potentially harm its performance. To address this, we design two soft prompts ([s], [e]), which can be easily incorporated into the input with minimal modification. It is worth noting that we only use two soft prompts, which are copied to refer to all tokens. Figure 2 illustrates that we first link the position ID of each token with two soft prompts and assign them identical position embeddings. Through multi-layer Transformer calculations, the embedding of the soft prompts will be closest in proximity to the token that has the same position embedding. To avoid damaging the fluency of sentences, we next modify the attention mask matrix, which is shown in Appendix B. Specifically, each soft prompt is only visible to partnering soft prompts that refer to the same span and is invisible to text tokens. At the same time, the soft prompts attend upon the text tokens to aggregate information from their corresponding spans. By classifying these soft prompts, we obtain a matrix of span positions to identify all entities. Formally, we form a new sequence $\widehat{X}$ consisting of the soft prompts, original text, and template:

$$\widehat{X}=x_{1},...,x_{l},[s_{1}],...,[s_{l}],[e_{1}],...,[e_{l}],t_{1},...,t_{m}\qquad\mathrm{(4)}$$

where $X$ is a sequence of $l$ text tokens, and $[s_l]$, $[e_l]$ represent the soft prompts that have the same position embedding as the $l$-th token $x_l$. As illustrated in Figure 2, we input $\widehat{X}$ to the DLM and obtain:

$${\mathcal{D}}({\widehat{X}}([s_{1}]))=1-\sigma(h_{DLM}^{\top}h_{[s_{1}]})\qquad\mathrm{(5)}$$

The DLM outputs the "original" label corresponding to the positions $[s_1]$, $[s_2]$, $[e_3]$, indicating that $([s_1], [e_3])$ (*Badaling Great Wall*) and $([s_2], [e_3])$ (*Great Wall*) belong to the entity type of "Work Of Art".

## 2.4 Discriminative Soft Prompts Co-Reference for ZSRE

We adopt a pipeline approach to tackle the Zero-Shot Relation Extraction (ZSRE) task.
Specifically, we employ DSP-ZSNER to identify entity mentions, and then perform the Zero-Shot Relation Classification (ZSRC) task to classify the relations between all pairs of mentions. The discriminative prompt method can only determine whether two entities belong to one particular relation in the prompt template at a time. Therefore, if there are $K$ preset relations, the model needs to run $K$ times to determine the relationship between the two entities. To improve inference speed on the ZSRC task, we employ a co-reference strategy similar to the approach used in the ZSNER task. Given an input sequence $X$ and two entity spans $e_s$ and $e_o$, we utilize a soft prompt $[r]$ to represent the relation between the two entities in the template. All labels in the relation label set share the same position embedding as $[r]$, enabling each label to approximately obtain the contextual representation of $[r]$. Furthermore, to maintain semantic integrity, labels in the relation label set are not visible to each other in the mask matrix. We form a new sequence $\widehat{X}$ consisting of the soft prompt, relation label set, original text, and template, given by:

$$\widehat{X}=x_{1},...,x_{n},e_{o},...,[r],...,e_{s},r_{1},...,r_{i},...,r_{K}\qquad\mathrm{(6)}$$

We input $\widehat{X}$ into the model and obtain:

$$\mathcal{D}(\widehat{X}(r_{i}))=1-\sigma(h_{DLM}^{\top}h_{[r_{i}]})\qquad\mathrm{(7)}$$

If $\mathcal{D}(\widehat{X}(r_{i}))$ outputs *original*, it indicates that the relation between $e_s$ and $e_o$ is $r_i$.

## 2.5 Training Loss Function

To facilitate optimization and prevent overfitting, the final training loss combines the cross-entropy (CE) loss with a parameter regularization loss ($\lambda$ is a hyper-parameter):

$${\mathcal{L}}={\mathcal{L}}(ce)+\lambda{\mathcal{L}}(w)\qquad\mathrm{(8)}$$

The cross-entropy loss is as follows:

$${\mathcal{L}}(ce)=\sum_{i}\big(-y_{i}\log{\mathcal{D}}(\widehat{X}(c_{i}))-(1-y_{i})\log(1-{\mathcal{D}}(\widehat{X}(c_{i})))\big)\qquad\mathrm{(9)}$$

The parametric regularization is defined as:

$${\mathcal{L}}(w)=\frac{1}{2}\sum_{j\in S}(w_{j}-w_{j}^{0})^{2}\qquad\mathrm{(10)}$$

where $w_{j}^{0}$ represents the initial parameters of the $j$-th layer of the pre-trained language model, and $w_{j}$ represents the parameters of the $j$-th layer of the discriminative prompt model during task-training.

## 3 Experiments

## 3.1 Setup

Datasets For the ZSNER task, we evaluate our approach on two popular zero-shot NER datasets: OntoNotes 5.0 (https://catalog.ldc.upenn.edu/LDC2013T19; Pradhan et al., 2013) and MedMentions (Mohan and Li, 2019). To assess the model's performance in recognizing nested entities, we follow Aly et al. (2021) to gather all entities of each type from the dataset and map them into sentences based on their respective types. As shown in Table 8, the datasets are divided into a training set, development set, and test set according to the entity type.

| Dataset | #Sents | #Ents (#Types) | #Rels (#Types) |
|----------------|--------|----------------|----------------|
| OntoNotes-ZS | 76.7k | 58.1k (18) | - |
| MedMentions-ZS | 46.9k | 116.2k (21) | - |
| Wiki-ZSL | 94.3k | - | 77.6k (113) |
| FewRel | 56.0k | - | 72.9k (80) |

Table 1: The statistics of the adopted datasets.

For the ZSRE task, we utilize the datasets released by Chia et al. (2022) and adhere to their prescribed splitting method (consisting of five folds) for both training and evaluation. Additionally, we make use of their recommended data pre-processing methods.
Table 1 shows the statistics of each dataset. For each dataset, we set the unseen label size to m ∈ {5, 10, 15}, while treating the remaining labels as seen labels during training in the experiments. Details of the datasets are described in Appendix C. Evaluation metrics We follow the standard evaluation protocol and use F1-score as the evaluation metric. For ZSNER task, the unbalanced number of samples per class necessitates employing evaluation metrics that focus on per-class averaged scores to properly account for the imbalance. Therefore, like Aly et al. (2021), we evaluate our model using the macro average F1-score indicator. We evaluate ZSRC using the Macro F1 metric to be consistent with Chia et al. (2022). ZSRE first identifies entities, then predicts the relation between each pair of entities, which will predict a large number of negative samples. Thus we use the micro F1-score, which is standard in structured prediction tasks (Zhong and Chen, 2020) and report the precision (P.) and recall (R.). The details for evaluation metrics are in Appendix D. Implementation details We adopt ELECTRABase as the backbone of our model and initialize with the corresponding pre-trained cased weights. Models are implemented using Pytorch framework3and Huggingface transformers4. DSPZSNER and DSP-ZSRC are optimized by AdamW (Loshchilov and Hutter, 2017) with the learning rate of 2e-5. The training batch size used is 16 for all models. For ZSNER task, the soft prompts and entity-type descriptions take up part of the input length, so the maximum length of the DSP is limited to 150 tokens. For the ZSRE task, the maximum length of input token is 256. For both tasks, we employ an early stopping scheduler to stop training when there is no improvement on the validation F1 score. We then conduct three runs of experiments to mitigate instability issues for all experiments5. Ontonotes-ZS MedMentions-ZS Model Dev Test Dev Test BEM 18.0 11.0 19.0 22.0 MRC 15.0 18.0 21.0 26.0 SMXM(base) 19.0 20.0 20.0 21.0 SMXM 23.0 25.0 23.0 27.0 DLM-Point 18.3 28.4 26.1 28.6 DSP-ZSNER **27.0 31.6 29.8 32.7** ## 3.2 Zero-Shot Named Entity Recognition 3.2.1 Baselines We compare our DSP-ZSNER with current stateof-the-art models in both NER and related zeroshot tasks. **Binary Entailment Model(BEM)** is a ZSNER model obtained by modifying the stateof-the-art zero-shot text classification model (Yin et al., 2019) by Aly et al. (2021). They add a binary output layer based on BERT to generate binary output for each class, and the negative prediction of all classes predicts negative classes. MRC is an approach by Li et al. (2020) who construct queries for entity classes and modifie the model structure to transform NER into fully supervised machine reading comprehension tasks for flat and nested entities. Similar to MRC, **SMXM** (Aly et al., 2021) uses entity type descriptions to aid encoding, and subsequently feeds the entity encoding into a linearly transformed layer for classification. **DLM-Point** is a DLM-based sequence labeling method proposed by us, which is introduced in Appendix A. ## 3.2.2 Results We show the ZSNER result in Table 2. Some observations are summarized from the experimental results: (1) Our approach outperforms the baselines based on fine-tuning by modifying the structure of PLMs on both Ontonotes-ZS and MedMentionsZS datasets, and obtains a +6.3% F1-score improvement on MedMentions-ZS. MedMentions-ZS contains twice as many entities as Ontonotes-ZS and has a low correlation between training and testing data. 
It shows that the DSP-ZSNER can well preserve the initial general knowledge of the PLMs to better model the interrelation between entities and entity types. (2) With the same PLM (Electra-Base), DSP-ZSNER achieves an absolute ![5_image_0.png](5_image_0.png) F1 improvement of +8.7% over DLM-Point on Ontonotes-ZS dev, which shows the advantage of soft prompts co-reference strategy in identifying nested entities. In addition, the soft prompt, which explicitly represents the boundary of the span, is also a key factor for the improvement. ## 3.3 Zero-Shot Relation Classification 3.3.1 Baselines There are four main categories of competing methods for the ZSRC task. **R-BERT** (Wu and He, 2019) is a relation classification model, but it can adapt to the zero-shot setting by designing a matching module based on BERT to perform the nearest neighbor search over the label embeddings. CIM (Rocktäschel et al., 2015) is an entailment-based method that takes sentences and each possible relation as input to determine whether the relation matches the sentence semantically. **ZS-BERT** (Chen and Li, 2021) learns the independent projection function to align input sentences with their candidate relations in the embedded space and to judge the relation between pairs of entities by measuring their distances in a new space. **RelationPrompt** (Chia et al., 2022) prompts GPT2 (Radford et al., 2019) to generate synthetic data, and modifies the Bart (Lewis et al., 2020) generation decoder to learn the ability to generate relation triplets from these data. **NoGen** indicates that it does not use generated synthetic samples for training and the other settings are the same as RelationPrompt. ## 3.3.2 Results By providing entity-pair information in the prompt template, DSP can convert ZSRC task to the exact same task format as ELECTRA pre-training. | Labels | Model | Pre-trained Model | Wiki-ZSL | FewRel | | | | | |-----------------------------------|------------------------------------|---------------------|------------|----------|-------|------|------|------| | Unseen | P. | R. | F1 | P. | R. | F1 | | | | TableSequence (Wang and Lu, 2020) | GPT-2 | 43.7 | 3.5 | 6.3 | 15.23 | 1.9 | 3.4 | | | NoGen (Chia et al., 2022) | BART | 15.6 | 43.2 | 22.3 | 9.5 | 36.7 | 14.6 | | | m=5 | RelationPrompt (Chia et al., 2022) | GPT-2& BART | 29.1 | 31.0 | 30.0 | 20.8 | 24.3 | 22.3 | | DSP-ZSNER & DSP-ZSRC | ELECTRA | 42.7 | 43.4 | 43.0 | 40.1 | 27.0 | 32.3 | | | TableSequence (Wang and Lu, 2020) | GPT-2 | 45.3 | 3.6 | 6.4 | 28.9 | 3.6 | 6.4 | | | NoGen (Chia et al., 2022) | BART | 9.6 | 45.0 | 15.7 | 6.4 | 41.7 | 11.0 | | | m=10 | RelationPrompt (Chia et al., 2022) | GPT-2& BART | 30.2 | 32.3 | 31.2 | 21.6 | 28.7 | 24.6 | | DSP-ZSNER & DSP-ZSRC | ELECTRA | 26.3 | 48.0 | 34.0 | 35.9 | 27.1 | 30.9 | | | TableSequence (Wang and Lu, 2020) | GPT-2 | 44.4 | 3.5 | 6.4 | 19.0 | 2.0 | 3.5 | | | NoGen (Chia et al., 2022) | BART | 7.3 | 43.7 | 12.3 | 4.6 | 36.4 | 8.1 | | | m=15 | RelationPrompt (Chia et al., 2022) | GPT-2& BART | 26.2 | 32.1 | 28.9 | 17.7 | 23.2 | 20.1 | | DSP-ZSNER & DSP-ZSRC | ELECTRA | 27.7 | 32.4 | 29.9 | 27.9 | 25.4 | 26.6 | | Ontonotes-ZS MedMentions-ZS F1Speed (sent/s) F1Speed Model (sent/s) MLM (BERT) 6.3 0.3 3.7 0.3 SMXM 24.0 28.0 25.0 27.2 DLM 30.5 0.1 32.0 0.1 DLM-Point 23.4 86.4 27.4 78.4 DSP-ZSNER 29.3 41.6 31.3 36.0 As shown in Table 3, our approach outperforms previous methods by strict F1-score of +6.1% on Wiki-ZSL and +4.7% on FewRel. 
It is worth noting that our prompt-based method retains more general knowledge of PLM. When the invisible label set size m increases and the training data decreases, our prompt-based method can utilize this knowledge to maintain relatively high classification F1 performance. This trend suggests that our promptbased method can be better extended to a larger set of invisible tags, which is more critical for realworld open domain applications. ## 3.4 Zero-Shot Relation Extraction 3.4.1 Baselines For the ZSRE, we use several baseline methods provided by Chia et al. (2022) for comparison with our DSP method. **TableSequence** (Wang and Lu, 2020) is a table-based method that extracts entity relations by encoding different types of information in the learning process. Since it cannot directly solve ZSRE, Chia et al. (2022) used the composite samples from the relation generator to provide supervision data for it. Other methods have been described in Section 3.3.1. 3.4.2 Results For ZSRE, we use a pipeline approach to train DSPZSNER and DSP-ZSRC models, respectively. During the inference phase, the DSP-ZSNER model extracts entities from the text and then classifies the pairs of entities using the DSP-ZSRC model. We compare DSP with the baselines on ZSRE for Wiki-ZSL and FewRel datasets in Table 4, our approach consistently outperforms the previous best methods in F1-score metrics. Compared to the previous state-of-the-art model, RelationPrompt, our approach achieves an absolute F1 improvement of +13% and +10.0% on Wiki-ZSL and FewRel, respectively, with fewer parameters, under the setting of m = 5. Such improvement from RelationPrompt indicates the effectiveness of modeling through pre-training tasks to limit excessive changes in model parameters during task-tuning. ## 3.5 Inference Speed In this section, we compare the model's inference speed on an V100 GPU with a batch size of 32. Speed of ZSNER We conduct an evaluation of the inference speed of BERT, SMXM, DLM, DLM-Point, and DSP on the Ontonotes-ZS and MedMentions-ZS datasets. The results are presented in Table 5, which indicate that our DSPZSNER model achieved a higher F1-score and faster inference speed than SMXM. Despite sacrificing 1.2% and 0.7% F1-score on Ontonotes ZS | Model | F1 | LM | Parameter | |--------------|-----------|------|-------------| | Loss | Variation | | | | BERT | - | 5.5 | - | | BERT+FFN | 19.5 | 38.0 | 2609.5 | | ELECTRA | - | 3.7 | - | | DSP-ZSNER | 29.3 | 4.6 | 8.7 | | DSP(w/o PR ) | 27.1 | 7.9 | 70.8 | | ElECTRA+FFN | 23.9 | 12.1 | 163.2 | and MedMentions-ZS, respectively, DSP-ZSNER obtained 416x and 360x speedup compared to the DLM model. Moreover, DSP-ZSNER achieved a speedup of up to 138.7x and 120x compared to MLM on the two datasets, with an F1-score increase of +23% and +27.6%. These results indicate that it is appropriate to utilize the soft prompts coreference strategy to identify entities is an effective way to solve the problem of slow inference speed in prompt methods. Speed of ZSRC We compare our methods to the best previous method, RelationPrompt. Table 7 shows that the inference speed of our DSPZSRC model is faster than RelationPrompt on both datasets. RelationPrompt needs to be trained at the inference stage using pseudo data generated by the GPT-2, which reduces its inference efficiency. Under the setting of an unseen label of 10, the DLM needs to run ten times to predict the relation between entities. 
DSP-ZSRC with soft prompts co-reference strategy can discriminate all candidate relations in one run, obtaining a 5.9× speedup on Ontonotes-ZS and a 6.5× speedup on MedMentions-ZS. On the other hand, this strategy only leads to a small performance drop and the F1score decreases by only 0.2% and 0.3% on the two datasets. ## 3.6 Analysis Of Parameter Variation And Lm Loss To investigate the impact of retaining the knowledge acquired during pre-training phase of the PLMs on zero-shot tasks, we conduct a ZS- | Wiki-ZSL | FewRel | | | | |----------------|----------------|------|-------|------| | F1 | Speed (sent/s) | F1 | Speed | | | Model | (sent/s) | | | | | RelationPrompt | 71.5 | 63.1 | 80.0 | 59.8 | | DLM | 77.1 | 13.8 | 84.5 | 11.6 | | DSP-ZSRC | 76.9 | 81.6 | 84.2 | 76.0 | NER task-tuning experiment on the Ontonotes-ZS dataset. BERT+FFN denotes the addition of two fully connected layers based on BERT to perform ZSNER tasks. Similarly, ElECTRA+FFN employs the hidden vector output by the transformer as input to a new fully connected layer for task tuning. BERT+FFN refers to adding two full connection layers based on BERT to implement ZSNER tasks. We randomly select 1000 pieces of training data and segregate them into 100 groups, each comprising ten pieces of data. For BERT and BERT+FFN, we replace words with [MASK] randomly with a a probability of 10% in each data group, calculate the loss value of the predicted [MASK] token, and finally average the loss values of 100 groups to obtain the LM loss. For models based on ELECTRA, we scramble the text order, replace the phrases randomly, and calculate the loss of whether the tokens in the text should be replaced. Table 6 illustrates that the performance of PLMs on the pre-training task worsens as the parameters change, suggesting that the model tends to forget some of the general knowledge acquired during the pre-training stage while learning new tasks. Additionally, Figure 4 shows that the LM Loss is 6.9x larger than the initial BERT model, and we observe a decline in the F1-score of the model on the ZSNER task with the increase of LM Loss. This suggests that the knowledge acquired by PLMs during the pre-training stage is beneficial for zero-shot tasks. ## 4 Related Work 4.1 Zero-Shot Entity And Relation Extraction In recent years, zero-shot entity and relation extraction (Ma et al., 2016; Ye et al., 2022; Wang et al., 2021a) has attracted great attention from academia. Fine-tuning PLMs for ZSNER and ZSRE tasks has achieved promising performance. SMXM (Aly et al., 2021) achieves state-of-the-art results in ZSNER by incorporating entity-type description into entity encoding. There are other works converting ZSNER to machine reading comprehension framework (Li et al., 2020; Wang et al., 2021b). ZS-BERT (Chen and Li, 2021) learns the independent projection functions to predict relations and obtains a good performance on ZSRE task. However, ZS-BERT can only infer relations and assumes that the ground-truth entity pairs are readily available, which is unrealistic in real scenarios. RelationPrompt (Chia et al., 2022) is the first approach to extract the whole relation triplet under the zero-shot setting by modifying the BART (Lewis et al., 2020) generation decoder to generate relation triplets. Unlike them, DSP converts input sentences into a discriminative language modeling problem, which bridges the gap between pre-training and fine-tuning. 
## 4.2 Prompt-Based Learning Stemming from the GPT models (Radford et al., 2018, 2019), the prompt-based learning has been widely discussed. The core idea of cloze-style prompt methods (Tam et al., 2021) is to transform a classification problem into a cloze-style task with textual templates, and then map label words to the verbalizer. Schick and Schütze (2021) use manually defined templates and verbalizers for prompting text classification tasks. To alleviate manual efforts, Jiang et al. (2020) propose a mining approach for automatically searching for templates. Meanwhile, several approaches have explored the designation of verbalizers. Cui et al. (2022) train prototype vectors as verbalizers by contrastive learning. Hu et al. (2021) expand the label word space of the verbalizer using external knowledge, and refine the verbalizer space with the training data. However, there is no training data to refine the verbalizer under the zero-shot setting, and it is still very difficult to map the label words to the verbalizer. In addition, the cloze-style prompt methods can predict only one token label per template, which is extremely slow for inference in token-level tasks, such as ZSNER. To solve the above problems, we propose DSP to formulate ZSNER and ZSRE tasks into label discrimination tasks without build verbalizers, while all entities of the same type in a sentence are extracted using only one inference. ## 5 Conclusion This paper presents a novel Discriminative Soft Prompts method for zero-shot entity and relation extraction. Unlike the cloze-style prompt method that converts a specific task into an MLM problem, we reformulate ZSNER and ZSRC as a discriminative language modeling problem, which takes advantage of the prompt learning to strengthen the transmission of general knowledge without having to construct a verbalizer. Furthermore, we propose a soft prompt co-reference strategy, which significantly improves the inference efficiency of the discriminative prompt method. Experiments on four datasets demonstrate the effectiveness of our model in both ZSNER and ZSRE tasks. Also, the inference speed of the DSP is up to 120 times faster than the cloze-style prompt method on ZSNER. ## Limitations The main limitation of our work is that we can not use a unified model to complete the zero-shot entity and relationship extraction tasks. Specifically, our method trains two models, DSP-ZSNER and DSPZSRC, to extract the entities in the text first and then classify the relation of each pair of entities. This method needs to train and store two models, which is troublesome to maintain in practical applications. In addition, although our method has dramatically improved the inference speed of the previous prompt method, the method still affects the reasoning speed of the model. In the followup works, we will be committed to solving this problem. ## Acknowledgements We thank all the reviewers for their efforts to make the paper comprehensive and solid. This work is supported in part by the fund of National Key Research and Development Program of China (Grants No. 2021ZD0112905), National Natural Science Foundation of China (Grants No. 62206140, 62076231), and China Postdoctoral Science Foundation (Grants No. 2022M711726). ## References Chih-Yao Chen and Cheng-Te Li. 2021. Zs-bert: Towards zero-shot relation extraction with attribute representation learning. *arXiv preprint* arXiv:2104.04697. Yew Ken Chia, Lidong Bing, Soujanya Poria, and Luo Si. 2022. 
Relationprompt: Leveraging prompts to generate synthetic data for zero-shot relation triplet extraction. *arXiv preprint arXiv:2203.09101*. Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555. Ganqu Cui, Shengding Hu, Ning Ding, Longtao Huang, and Zhiyuan Liu. 2022. Prototypical verbalizer for prompt-based few-shot tuning. arXiv preprint arXiv:2203.09770. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Ning Ding, Yulin Chen, Xu Han, Guangwei Xu, Pengjun Xie, Hai-Tao Zheng, Zhiyuan Liu, Juanzi Li, and Hong-Gee Kim. 2021. Prompt-learning for fine-grained entity typing. *arXiv preprint* arXiv:2108.10604. Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4803–4809, Brussels, Belgium. Association for Computational Linguistics. Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Juanzi Li, and Maosong Sun. 2021. Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. arXiv: Computation and Language. Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How Can We Know What Language Models Know? *Transactions of the Association for* Computational Linguistics, 8:423–438. Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. *arXiv preprint* arXiv:1706.04115. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Bangzheng Li, Wenpeng Yin, and Muhao Chen. 2022. Ultra-fine entity typing with indirect supervision from natural language inference. *Transactions of the* Association for Computational Linguistics, 10:607– 622. Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020. A unified MRC framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5849–5859, Online. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. *Learning*. Yukun Ma, Erik Cambria, and Sa Gao. 2016. Label embedding for zero-shot fine-grained named entity typing. In *Proceedings of COLING 2016, the 26th* International Conference on Computational Linguistics: Technical Papers, pages 171–180. Sunil Mohan and Donghui Li. 2019. 
Medmentions: A large biomedical corpus annotated with umls concepts. *arXiv preprint arXiv:1902.09476*. Fabio Petroni, Tim Rocktäschel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2019. Language models as knowledge bases. *Empirical Methods in Natural* Language Processing. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Björkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using OntoNotes. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 143–152, Sofia, Bulgaria. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9. Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kocisk ˇ y, and Phil Blunsom. 2015. ` Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664. Oscar Sainz, Oier Lopez de Lacalle, Gorka Labaka, Ander Barrena, and Eneko Agirre. 2021. Label verbalization and entailment for effective zero and fewshot relation extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1199–1212, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269, Online. Association for Computational Linguistics. Derek Tam, Rakesh R. Menon, Mohit Bansal, Shashank Srivastava, and Colin Raffel. 2021. Improving and simplifying pattern exploiting training. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4980–4991, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Chenguang Wang, Xiao Liu, Zui Chen, Haoyun Hong, Jie Tang, and Dawn Song. 2021a. Zero-shot information extraction as a unified text-to-triple translation. arXiv preprint arXiv:2109.11171. Jue Wang and Wei Lu. 2020. Two are better than one: Joint entity and relation extraction with tablesequence encoders. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1706–1721, Online. Association for Computational Linguistics. Yaqing Wang, Haoda Chu, Chao Zhang, and Jing Gao. 2021b. Learning from language description: Lowshot named entity recognition via decomposed framework. *arXiv preprint arXiv:2109.05357*. Shanchan Wu and Yifan He. 2019. Enriching pretrained language model with entity information for relation classification. conference on information and knowledge management. Mengzhou Xia, Mikel Artetxe, Jingfei Du, Danqi Chen, and Veselin Stoyanov. 2022. Prompting ELECTRA: Few-shot learning with discriminative pre-trained models. In *Proceedings of the 2022 Conference on* Empirical Methods in Natural Language Processing, pages 11351–11361, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Yuan Yao, Bowen Dong, Ao Zhang, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Leyu Lin, Maosong Sun, and Jianyong Wang. 2022. 
Prompt tuning for discriminative pre-trained language models. *arXiv preprint* arXiv:2205.11166. Deming Ye, Yankai Lin, Peng Li, and Maosong Sun. 2022. Packed levitated marker for entity and relation extraction. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 4904–4917. Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. *arXiv preprint* arXiv:1909.00161. Zexuan Zhong and Danqi Chen. 2020. A frustratingly easy approach for entity and relation extraction. north american chapter of the association for computational linguistics. ## A Dlm-Point Method For Zsner ![11_image_2.png](11_image_2.png) We try to combine sequence labeling with a discriminative prompt method, and the template is "Replace the 'work of art' type entities in next sentence.". The model outputs "replace" of the tokens belonging to this entity type, and then decodes the entity span based on the output. However, this method does not recognize **nested entities**. For instance *Badaling Great Wall* is a "work of art" entity and *Great Wall* is also a "work of art" entity, but model can not recognize it. ![11_image_3.png](11_image_3.png) ## B Examples Of The Attention Mask Matrixes Figure 5 shows examples of the attention mask matrixes of soft prompts co-reference. The token marked with "1" in the matrix participates in the attention calculation, and the token marked with "0" is masked out and does not participate in the calculation. ![11_image_0.png](11_image_0.png) ![11_image_1.png](11_image_1.png) (a) Attention Mask Matrix of Soft Prompts Co-reference For ZSNER (b) Attention Mask Matrix of Soft Prompts Co-reference For ZSRC ## C Datasets OntoNotes 5.0 (Pradhan et al., 2013) is a large corpus comprising various genres of text (news, conversational telephone speech, weblogs, newsgroups, broadcast, talk shows) with structural information (syntax and predicate argument structure) and shallow semantics (word sense linked to an ontology and co-reference). MedMentions (Mohan and Li, 2019) corpus consists of 4,392 papers (titles and abstracts) randomly selected from among papers released on PubMed in 2016 that were in the biomedical field, published in the English language, and had both a title and an abstract. FewRel (Han et al., 2018) was hand annotated for few-shot relation extraction, and Chia et al. (2022) made it suitable for the zero-shot setting after data splitting into disjoint relation label sets for training, validation and testing. Wiki-ZSL (Chen and Li, 2021) is constructed through distant supervision using Wikipedia articles and the Wikidata knowledge base. ## D Evaluation Metrics We follow the standard evaluation protocol and use F1-score as the evaluation metric. For ZSNER task, the unbalanced number of samples per class necessitates the use of evaluation metrics that focus on per-class averaged scores to properly account for the imbalance. Therefore, we use the macro average F1-score to evaluate our model. We evaluate on ZSRC using the Macro F1-score to be consistent with Chia et al. (2022). In the ZSRE task, the model first identifies entities and then predicts the relation between each pair of entities, resulting in a large number of negative samples. 
Therefore, we use the micro F1-score which is standard in struc- | Biologic Function, Chemical, Healthcare Activity, Anotomical Structure, Finding, Spatial | | | |--------------------------------------------------------------------------------------------|-----------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------| | Train | PERSON, GPE, ORG, DATE | Concept, Intellectual Product, Research Activity, Eukaryote, Population Group, Medical Device Organization, Injury or Poisoning, Clinical | | Dev | NORP, MONEY, ORDINAL, PERCENT, | Attribute, Virus, Biomedical Occupation or | | EVENT, PRODUCT, LAW | Discipline | | | Test | CARDINAL, TIME, LOC, WORK OF ART, | Bacterium, Professional or Occupational Group, | | FAC, QUANTITY, LANGUAGE | Food, Body Substance, Body System | | tured prediction tasks (Zhong and Chen, 2020) and report the precision (P.) and recall (R.). ## E Hyperparameters Choice We select the learning rate with the best validation accuracy by conducting a grid search from the values of 1e-5, 2e-5, and 5e-5. The batch size is chosen based on the available GPU VRAM. For the weight λ in the regulation loss of Equation 10, we conduct a grid search experiment to determine the optimal value (λ = 0.1) from a set of values {10, 1, 0.1, 0.01}, based on the performance on the validation set for all models. For all other experiments, we follow the default settings of the ELECTRA (Clark et al., 2020). ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section "Limitations" ✓ A2. Did you discuss any potential risks of your work? Section "Limitations" ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section "1 Introduction" ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** Section "3.1 Setup" ✓ C1. 
Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section "3.1 Setup" The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section "3.1 Setup" ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section "3.2.2 Results",Section "3.3.2 Results",Section "3.4.2 Results" ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section "3.1 Setup" D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
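To make the metric choices in Appendix D concrete, the following is a minimal sketch of the span-level macro F1 used for ZSNER and the triple-level micro precision/recall/F1 used for ZSRE. The tuple representations are illustrative assumptions, not the format of the authors' released evaluation code.

```python
# Hypothetical evaluation sketch for the metrics in Appendix D; the tuple
# formats below are assumptions, not the authors' data structures.

def macro_f1(gold_spans, pred_spans):
    # gold_spans / pred_spans: sets of (sent_id, start, end, entity_type).
    # Per-type F1 is computed first and then averaged with equal weight, so
    # rare (unseen) entity types count as much as frequent ones.
    types = {s[-1] for s in gold_spans} | {s[-1] for s in pred_spans}
    f1s = []
    for t in types:
        g = {s for s in gold_spans if s[-1] == t}
        p = {s for s in pred_spans if s[-1] == t}
        tp = len(g & p)
        prec = tp / len(p) if p else 0.0
        rec = tp / len(g) if g else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s) if f1s else 0.0

def micro_prf(gold_triples, pred_triples):
    # gold_triples / pred_triples: sets of (sent_id, head_span, tail_span, relation).
    # Micro-averaging pools all decisions, the convention in structured
    # prediction where negative entity pairs vastly outnumber positives.
    gold, pred = set(gold_triples), set(pred_triples)
    tp = len(gold & pred)
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gold) if gold else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```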
zhu-rao-2023-exploring
Exploring Robust Overfitting for Pre-trained Language Models
https://aclanthology.org/2023.findings-acl.340
We identify the robust overfitting issue for pre-trained language models by showing that the robust test loss increases as the epoch grows. Through comprehensive exploration of the robust loss on the training set, we attribute robust overfitting to the model's memorization of the adversarial training data. We attempt to mitigate robust overfitting by combining regularization methods with adversarial training. Following the philosophy that prevents the model from memorizing the adversarial data, we find that flooding, a regularization method with loss scaling, can mitigate robust overfitting for pre-trained language models. Eventually, we investigate the effect of flooding levels and evaluate the models' adversarial robustness under textual attacks. Extensive experiments demonstrate that our methods can mitigate robust overfitting upon three top adversarial training methods and further promote adversarial robustness.
# Exploring Robust Overfitting For Pre-Trained Language Models Bin Zhu and **Yanghui Rao**∗ School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China [email protected], [email protected] ## Abstract We identify the robust overfitting issue for pretrained language models by showing that the robust test loss increases as the epoch grows. Through comprehensive exploration of the robust loss on the training set, we attribute robust overfitting to the model's memorization of the adversarial training data. We attempt to mitigate robust overfitting by combining regularization methods with adversarial training. Following the philosophy to prevent the model from memorizing the adversarial data, we find that flooding, a regularization method with loss scaling, can mitigate robust overfitting for pretrained language models. Eventually, we investigate the effect of flooding levels and evaluate the models' adversarial robustness under textual adversarial attacks. Extensive experiments demonstrate that our method can mitigate robust overfitting upon three top adversarial training methods and further promote adversarial robustness. ## 1 Introduction Deep neural networks (DNNs) suffer from adversarial robustness issues (Goodfellow et al., 2015; Szegedy et al., 2014; Papernot et al., 2016a). Recent literature has revealed their vulnerability to crafted adversarial examples on a wide range of natural language processing (NLP) tasks (Papernot et al., 2016b; Ren et al., 2019; Jin et al., 2020; Li et al., 2020). Among the corresponding defensive methods, gradient-based adversarial training (AT) is often considered as the most effective one. Building upon standard training, AT additionally solves a max-min optimization problem to learn an adversarially robust model (Goodfellow et al., 2015; Kurakin et al., 2017; Madry et al., 2018; Zhang et al., 2020). Surprisingly, a widely observed fact is that AT, which is challenging to optimize, can also converge quickly on pre-trained ∗The corresponding author. ![0_image_0.png](0_image_0.png) language models (PrLMs) (Li and Qiu, 2021; Li et al., 2021b). That is, due to their overparameterization, PrLMs can achieve zero robust training error within a few epochs. It is common in practice to achieve zero training error without harming the generalization performance when sufficient data is available, which indicates that overfitting does not occur in the standard training of many modern deep learning tasks (Zhang et al., 2017; Neyshabur et al., 2017; Belkin et al., 2019). Nevertheless, whether PrLMs will overfit when trained to zero robust training error is yet to be explored. As revealed by a recent work (Rice et al., 2020), robust overfitting dominates the training procedure of the image classification task, in which the robust test loss increases as the learning rate decays. In contrast, the robust training loss continues to decrease. This motivates us to identify the robust overfitting issue in adversarially robust learning for NLP models. We first visualize the robust test loss of various effective AT methods developed for NLP tasks. We adopt the simple yet effective Projected Gradient Descent (PGD) attack (Madry et al., 2018) rather than any other textual adversarial attacks to get universal results. This is because textual adversarial attacks integrate too many strategies, and the results under a particular textual adversarial attack may not be generalizable. 
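Concretely, the robust test loss curves in Figure 1 are obtained by attacking the held-out set after every training epoch and recording the loss on the perturbed inputs. A minimal sketch of this bookkeeping is given below; it assumes a HuggingFace-style sequence classifier and a `pgd_attack` helper (an embedding-space PGD adversary, itself sketched in Section 3.1), and is an illustration rather than the released implementation.

```python
# Hypothetical per-epoch robust evaluation; `pgd_attack` is an assumed helper
# that returns adversarially perturbed input embeddings (see the sketch in
# Section 3.1), not part of the paper's released code.
import torch

def robust_eval(model, loader, pgd_attack, device="cuda"):
    model.eval()
    total_loss, correct, n = 0.0, 0, 0
    for batch in loader:
        input_ids = batch["input_ids"].to(device)
        mask = batch["attention_mask"].to(device)
        labels = batch["labels"].to(device)
        # The attack needs gradients w.r.t. the embeddings, so it runs with
        # grad enabled; only the final scoring pass is grad-free.
        adv_embeds = pgd_attack(model, input_ids, mask, labels)
        with torch.no_grad():
            out = model(inputs_embeds=adv_embeds, attention_mask=mask, labels=labels)
        total_loss += out.loss.item() * labels.size(0)
        correct += (out.logits.argmax(dim=-1) == labels).sum().item()
        n += labels.size(0)
    return total_loss / n, correct / n  # robust test loss, robust accuracy

# Logging this after every training epoch yields curves like Figure 1:
#   robust_loss, robust_acc = robust_eval(model, test_loader, pgd_attack)
```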
We can observe from Figure 1 that the robust test loss only increases as the training epochs grow, which is counterintuitive. It also violates the common practice of taking the last checkpoint as an adversarially robust model. In contrast, the robust training loss converges to zero quickly. We refer to the difference between the two robust losses as an adversarial generalization gap. What is worse, the generalization gap appears in the early stage of AT and grows during the whole training phase. This initial finding inspires us to explore the convergence and generalization of AT in-depth and to ask the following question: - *Why does the robust test loss continue to increase as adversarial training goes?* We further explore the robust loss and accuracy curves on the training set. More specifically, we re-perform a PGD-10 attack on the training set to check the robust learning curves. We surprisingly observe that on the training set, both the robust loss and robust error under PGD-10 converge to small values. We also evaluate the adversarial robustness under different settings, such as datasets, model architectures, etc., and similar results are observed. With extensive empirical results, we argue that the model overfits the threat model used in AT and loses the adversarial generalization ability. We hypothesise that the model simply memorizes the adversarial data during training and fails to generalize to robust testing. Thus a poor adversarial generalization performance is observed on the test set. We make several attempts to mitigate robust overfitting issues in AT using a series of regularization methods. The underlying philosophy is to prevent the model from memorizing adversarial data. In this way, we prevent the adversarially trained model from robust overfitting. Eventually, we evaluate our methods against textual adversarial attacks and obtain improvements upon the existing AT methods. Our contributions can be summarized as follows: - We identify the robust overfitting issue in AT for PrLMs. Through in-depth explorations, we attribute the robust overfitting to memorizing the adversarial training data. - We make empirical attempts to mitigate robust overfitting using a series of regularization methods. We propose calibrating the model's overconfident prediction in AT1. Extensive experimental results demonstrate that our methods can mitigate robust overfitting and improve the adversarial robustness of models upon three top AT methods. ## 2 Related Work In this section, we briefly review the relevant work on AT and robust overfitting, especially for NLP tasks. ## 2.1 Adversarial Training Let D = {(xi, yi)} n i=1 be the training set, in which xi ∈ X is an input sample with its corresponding true label yi ∈ Y. AT aims to learn adversarially robust models by expanding the training set with adversarial data, which can be formulated as the following max-min optimization problem: $$\operatorname*{min}_{\theta}\mathbb{E}_{({\mathcal{X}},{\mathcal{Y}})\sim{\mathcal{D}}}\left[\operatorname*{max}_{\|\delta\|\leq\epsilon}{\mathcal{L}}(f_{\theta}({\mathcal{X}}+\delta),{\mathcal{Y}})\right],\quad(1)$$ where fθ is a neural network parameterized by θ, L(·) is the loss function, δ is the adversarial perturbation, and ϵ is the allowed perturbation size. To tackle the intractable problem, Goodfellow et al. (2015) first proposed to use a one-step gradient-based method to generate adversarial examples, also known as the Fast Gradient Sign Method (FGSM). Madry et al. 
(2018) extended it to a multi-step method with random starts known as the PGD method. Unfortunately, AT always leads to a drop in the standard accuracy. Zhang et al. (2019b) theoretically identified a trade-off between robustness and accuracy and proposed TRADES to trade adversarial robustness off against accuracy. Considering that PGD-based AT is time-consuming, there is another line of work focused on accelerating AT (Shafahi et al., 2019; Zhang et al., 2019a; Wong et al., 2020). For NLP tasks, Miyato et al. (2017) first found that AT could help generalization in a semisupervised manner. To make AT more reasonable, Sato et al. (2018) proposed to generate interpretable adversarial perturbations in the embedding space to improve standard accuracy. Zhu et al. 1Our code is available in public at https://github.com/ zedzx1uv/GAT. (2020) proposed a model named FreeLB to understand natural languages better. To exploit the implicit information in the text, Li and Qiu (2021) crafted fine-grained perturbations for tokens in their model named TAVAT and obtained improvements on both the standard and robust accuracy. Wang et al. (2021a) improved AT from an information theoretic perspective termed InfoBERT. Dong et al. (2021b) proposed RIFT to encourage the model to retain the information from the original pre-trained model. To benchmark the existing defensive methods, Li et al. (2021b) gave a systematic analysis of them under the same attack settings. They also found that removing the norm-bounded projection and increasing adversarial steps could improve adversarial robustness. To defend against the widely used adversarial word substitutions, Jia et al. (2019) captured the perturbation in a hyper-rectangle and obtained certified robustness. Dong et al. (2021a) further modelled the word substitution attack space as a convex hull to enhance adversarial robustness. Wang et al. (2021c) proposed to project the perturbed word embedding to a valid one so that the crafted adversarial examples are reasonable. By learning a robust word embedding space where synonyms have similar representations, Yang et al. (2022) promoted models' robustness and maintained competitive standard accuracy. Discrete adversarial data augmentation (Ren et al., 2019; Jin et al., 2020; Zang et al., 2020; Li et al., 2020; Si et al., 2021; Li et al., 2021a) can also significantly improve adversarial robustness by generating valid adversarial examples to expand the training set. However, the adversarially trained model suffers from degraded generalization performance. Another disadvantage is that it only helps defend against the same attacking method with adversarial data augmentation. To this end, Zhu et al. (2022) developed friendly adversarial data augmentation to improve adversarial robustness without hurting standard accuracy. AT empirically boosts the adversarial robustness of models, but no guarantees can be given for the robustness. Therefore, another series of work devotes to obtaining certified robustness under given adversarial strengths by using randomized smoothing (Ye et al., 2020), interval bound propagation (Jia et al., 2019; Huang et al., 2019; Shi et al., 2020), differential privacy (Wang et al., 2021b), etc. ## 2.2 Robust Overfitting Robust overfitting occurs immediately after the learning rate decays in AT across datasets, model architectures, and AT methods in computer vision (Rice et al., 2020; Rebuffi et al., 2021; Dong et al., 2022a). 
The robust training loss continues to decrease while the robust test loss begins to increase. They also found that only the combination of early stopping and semi-supervised data augmentation works better than early stopping alone. Since it is common in practice to train deep models as long as possible in computer vision, robust overfitting counteracts the gains of robustness by recent variants of AT. From the perspective of the weight loss landscape, Wu et al. (2020) proposed adversarial weight perturbation to improve robust generalization. Chen et al. (2021) empirically injected learned smoothing into AT to avoid overfitting in AT. Dong et al. (2022a) introduced a new insight into the relationships between noisy labels and robust overfitting. Rebuffi et al. (2021) found that data augmentation with model weight averaging could also mitigate robust overfitting. Similarly, Dong et al. (2022b) integrated temporal ensemble into AT frameworks, which could be seen as another form of weight averaging. Yu et al. (2022) explored robust overfitting from data loss distributions. They attributed robust overfitting to the small-loss data under a large perturbation size. In this paper, we mainly focus on the convergence and robust overfitting of gradient-based AT methods, which have rarely been studied in the NLP field. ## 3 Robust Overfitting For Prlms In this section, we explore the robust learning curves for PrLMs. By comparing the data loss distributions between training and testing, we identify the robust overfitting issue and attribute it to the model's memorization of adversarial data. ## 3.1 Identifying Robust Overfitting Motivated by the findings from Figure 1, we make a further study on the training set to see whether the model simply memorizes the adversarial training data. We re-perform PGD-10 attacks on the training set. Since the PGD adversary randomly initializes the starting point x (0) at a ϵ−ball centred by the input x, **we expect a decrease in the** robustness of the model on the training set. ![3_image_0.png](3_image_0.png) We adopt three AT methods, FreeLB (Zhu et al., 2020), TAVAT (Li and Qiu, 2021), and InfoBERT (Wang et al., 2021a), to provide comprehensive results. "-10" refers to the number of attack iterations used in AT is set to 10. As can be seen in Figure 2, the robust losses of the three methods decrease at early epochs, which indicates that the model memorizes the adversarial data quickly. In subsequent epochs, "TAVAT-10" and "FreeLB-10" maintain small losses. The robust loss of "InfoBERT" gradually increases but is still less than 0.8. For robust accuracy, similar results are observed. It is not surprising that "InfoBERT-10" has a slightly large robust loss and a degraded robust accuracy since we have observed that its robust test loss is abnormally large compared to others in Figure 1. Comparing the robust loss on the training set with that on the test set, we can conclude that the model can not generalize to the adversarial test set, although it achieves about 100% robust accuracy during training. We next vary the attack iterations in the reperformed PGD attack to show the model's robustness against unseen attacks with larger perturbations. As shown in Figure 3, when the perturbation size exceeds that used for adversarial training (10 iterations), the robust losses and accuracies begin to sharply increase and decrease, respectively. 
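For reference, the following is a minimal sketch of the kind of K-step embedding-space PGD adversary with a random start inside the ϵ-ball that is re-performed here, in the spirit of Madry et al. (2018). The L2 projection, step size, and initialization magnitude are illustrative assumptions rather than the exact settings used by the compared AT methods.

```python
# Minimal embedding-space PGD sketch (L2 projection); hyperparameters are
# illustrative, not the paper's exact configuration.
import torch

def pgd_attack(model, input_ids, attention_mask, labels,
               eps=1.0, alpha=0.1, steps=10, init=0.02):
    embeds = model.get_input_embeddings()(input_ids).detach()
    # Random start x^(0): a small perturbation inside the eps-ball.
    delta = torch.empty_like(embeds).uniform_(-init, init)
    delta.requires_grad_(True)
    for _ in range(steps):
        out = model(inputs_embeds=embeds + delta,
                    attention_mask=attention_mask, labels=labels)
        grad, = torch.autograd.grad(out.loss, delta)
        with torch.no_grad():
            # Gradient-ascent step on the loss, normalized per example.
            g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
            delta += alpha * grad / g_norm.view(-1, 1, 1)
            # Project the accumulated perturbation back onto the eps-ball.
            d_norm = delta.flatten(1).norm(dim=1)
            delta *= (eps / d_norm).clamp(max=1.0).view(-1, 1, 1)
    return (embeds + delta).detach()
```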
Our findings indicate that the model overfits the threat model seen during AT, which has also been shown in (Stutz et al., 2020; Chen et al., 2021). We answer the question raised in Section 1 that due to the overparameterization of PrLMs, they can easily memorize the adversarial data generated during AT, resulting in robust overfitting. Thus the adversarially learned model cannot generalize well ![3_image_1.png](3_image_1.png) on the adversarial test set and the robust test loss continues to increase during robust testing. ## 3.2 More Empirical Evidence To better support our hypothesis, we provide more empirical evidence across different datasets and model architectures, which can be found in Appendix A. ## 4 Mitigating Robust Overfitting In this section, we make several attempts to prevent PrLMs from getting overfitting in AT. In standard training, regularization methods can mitigate overfitting and promote test performance. Thus, it is intuitive to use regularization methods in AT to avoid robust overfitting. ![4_image_0.png](4_image_0.png) ## 4.1 Ensemble Methods Dropout (Srivastava et al., 2014) randomly drops units from the model during training, which can be recognized as sampling from an exponential number of models. At test time, the model uses all the units to make predictions, which can be seen as an ensemble model. Dropout is widely used in modern deep learning as a regularizer. We vary the dropout ratio for the attention probabilities and all the fully connected layers in the embeddings, encoder, and pooler for PrLMs. In this way, we aim to see whether dropout can mitigate robust overfitting and whether a large dropout ratio helps. Figure 4(a) and Figure 4(b) show the robust test loss and accuracy when the dropout ratio is in [0.1, 0.4]. For different dropout ratios, the robust loss decreases as the ratio increases. However, the robust loss still increases as the epoch grows. The robust accuracy also decreases as the dropout ratio increases. In Figure 4(c), when the dropout ratio is in [0.7, 0.9], the robust test loss begins to decrease rather than increase. Nevertheless, we can observe in Figure 4(d) that the corresponding robust test accuracy maintains low because the robust loss is still large. It indicates that a large dropout ratio can hurt the robust test performance, though it ostensibly alleviates robust overfitting. This finding also suggests that a proper regularization technique may address robust overfitting. ## 4.2 Weight Decay Weight decay (Krogh and Hertz, 1991) aims to adjust the effect of model complexity on the loss function, also known as L2 regularization. It forces the parameters to converge to smaller values and avoids overfitting. Formally, weight decay adds a ![4_image_1.png](4_image_1.png) regularization term in the loss function as follows: $$J=J_{0}+\frac{\lambda}{2N}\sum_{w}w^{2},\qquad\qquad(2)$$ where J0 is the original loss function, λ is the coefficient of the regularization term, N is the number of samples in the training set, and w is the set of model parameters. To assess the effect of weight decay in mitigating robust overfitting, we vary the coefficient λ in a wide range and report the robust loss during testing. From Figure 5(a), we can observe that weight decay can not avoid robust overfitting in AT since the robust test loss continues to increase. Although a slightly larger λ (5 and 10) can make the robust test loss smaller in later epochs, a too-large λ increases the robust test loss overall. 
Figure 5(b) shows the robust test accuracy of different weight decay coefficients. Similarly, large coefficients hurt the robust accuracy, while small coefficients have little effect on robust accuracy.

## 4.3 Flooding

Conventional regularization methods contribute little to alleviating robust overfitting. However, we have shown that proper regularization may help mitigate robust overfitting in Section 4.1. Recall our hypothesis that the model memorizes all the adversarial training data and fails to generalize to robust testing. Following the philosophy to prevent the model from memorizing all the adversarial data, it is intuitive and reasonable to calibrate the model's prediction when it gets zero robust training loss. Thus, making the model less confident in some small-loss data is significant. To this end, we propose to combine "flooding" with AT methods. Ishida et al. (2020) have found that flooding could help generalization. Flooding intentionally prevents further reduction of the training loss when it reaches a reasonably small value b as follows:

$$J=\mathrm{abs}(J_{0}-b)+b,\qquad(3)$$

where J0 is the original loss function, b is the flooding level, and abs() is the absolute value function. Therefore, we expect that flooding can help mitigate robust overfitting in AT. It is worth noting that Liu et al. (2022) have claimed that flooding could improve adversarial robustness without AT. However, through empirical experiments, we find that flooding, as a regularizer, can not promote adversarial robustness only by itself, which contradicts their results. We first show the adversarial robustness of models using flooding only. Then we combine flooding with AT methods, exploring its effect in avoiding robust overfitting and improving adversarial robustness.

Table 1 reports the models' adversarial robustness regularized by flooding against TextFooler (Jin et al., 2020). Although it is claimed that flooding can boost adversarial robustness without AT, our results indicate that flooding contributes little to adversarial robustness.

| Methods | Flooding level | Clean % | RA % |
|---------|----------------|---------|------|
| BERT-base (Devlin et al., 2019) | 0 | 91.97 | 7.00 |
| | 0.0125 | 91.82 | 4.93 |
| | 0.025 | 91.76 | 7.49 |
| | 0.05 | 91.97 | 4.13 |

We then combine flooding with AT to exploit its effect in avoiding overfitting (Ishida et al., 2020). Figure 6 shows the robust test loss and accuracy against PGD attacks across datasets and model architectures. With flooding, the robust test loss of three AT methods maintains a low level and no longer increases as the epoch grows, indicating that flooding can mitigate robust overfitting and bridge the adversarial generalization gap. The robust accuracy also improves, verifying the regularization effect of flooding in AT.

## 5 Discussion

In this section, we discuss the effect of flooding in AT and investigate why flooding can help adversarial generalization.

## 5.1 Effect Of Flooding Levels

We investigate the effect of flooding levels in mitigating robust overfitting. We vary the flooding level b from 0 to 0.5, and the corresponding robust test loss and accuracy are shown in Figure 7. When the flooding level is set to 0, the robust test loss continues to increase, as we have shown in previous sections. As the flooding level grows, the overall robust loss decreases and reaches the minimum when the flooding level is 0.2.
Larger flooding levels increase the robust loss. However, the robust loss curve no longer rises, which verifies that a reasonable flooding level not only helps alleviate robust overfitting issues but also promotes adversarial robustness. Regarding robust accuracy, similarly, we observe that a proper flooding level can boost the adversarial robustness against PGD attacks. ## 5.2 Memorization We first give an intuitive explanation of the memorization in AT from the perspective of loss magnitude. Figure 8 demonstrates that the adversarial loss without flooding dominates the training. Therefore the adversarially trained model gets robust overfitting. The learning curves of adversarial loss with flooding is not shown because its value can predictably fluctuate around the flooding level. We vary the adversarial search steps and report the results in Appendix B. To verify our hypothesis that flooding can prevent the model from memorizing adversarial training data, we investigate if models can achieve zero training error when their training loss are scaling with flooding. We show in Figure 9(a) the learning curves of training accuracy with BERT-base on the SST2 dataset. We conclude that the model gives up on memorizing all the adversarial training data as the flooding level gets higher. In Figure 9(b) we report the learning curves of training accuracy with DeBERTa-v3-base on the AGNEWS dataset. Similarly, the model gives up on memorizing all the adversarial training data. To conclude, we demonstrate that flooding can mitigate memorization in AT with several model architectures and datasets. ## 5.3 Robustness Against Textual Adversarial Attacks We have shown that flooding can mitigate robust overfitting against PGD attacks. To provide comprehensive evidence that flooding helps adversarial generalization, we evaluate the model's robustness against textual adversarial attacks. ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) ## 5.3.1 Experimental Setup Datasets We conduct experiments on two widely used text classification datasets, SST2 (Socher et al., 2013) 2and AGNEWS (Zhang et al., 2015) 3. SST2 is a sentiment analysis dataset which contains 67349 training samples and 872 validation samples. We use the GLUE (Wang et al., 2019) version of the SST2 dataset. The average text length is 17. AGNEWS is a category classification dataset with four news topics: World, Sports, Business, and Science/Technology. It contains 12000 training samples and 7600 test samples. The average text length is 43. The maximum sentence length kept for the two datasets is 40. For training, we split 10% of the training set as the validation set. Adversarial Training Methods We adopt three AT methods, FreeLB (Zhu et al., 2020), TAVAT (Li and Qiu, 2021), and InfoBERT (Wang et al., 2021a), as our AT baselines. The three AT methods help boost models' generalization ability and adversarial robustness. The adversarial settings are set consistently. The number of adversarial steps is 10; the step size is 0.01; the adversarial maximum norm is 1; the magnitude of initial adversarial perturbation is 0.02; and all the other settings follow their original papers. Attacking Methods We adopt TextFooler (Jin et al., 2020), a word-level textual adversarial attacking method, as our attacking baseline. 
TextFooler is widely used in related literature on adversarial attacks and robustness. We use TextAttack's (Morris et al., 2020) implementation of TextFooler to provide fair results.

**Model Architectures** We use BERT-base (Devlin et al., 2019) and DeBERTa-v3-base (He et al., 2021b,a) as our baseline models and load their weights from HuggingFace Transformers. For these models, BERT-base has achieved great performance on NLP tasks as the first pre-trained language model. DeBERTa-v3-base is an advanced variant among the BERT family.

## 5.3.2 Attacking Results

Table 2 and Table 3 show the standard accuracy (**Clean %**) and robust accuracy (**RA %**) across datasets and model architectures.

Table 2 (BERT-base):

| Methods | Clean % | RA % |
|---------|---------|------|
| BERT-base (Devlin et al., 2019) | 91.97 | 7.00 |
| +FreeLB (Zhu et al., 2020) | 92.32 | 8.94 |
| +FreeLB & flooding | 92.66 | 11.93 |
| +TAVAT (Li and Qiu, 2021) | 92.66 | 14.56 |
| +TAVAT & flooding | 93.58 | 11.24 |
| +InfoBERT (Wang et al., 2021a) | 92.32 | 6.31 |
| +InfoBERT & flooding | 93.35 | 6.77 |

Table 3 (DeBERTa-v3-base):

| Methods | Clean % | RA % |
|---------|---------|------|
| DeBERTa-v3-base (He et al., 2021a) | 93.10 | 12.60 |
| +FreeLB (Zhu et al., 2020) | 95.20 | 26.00 |
| +FreeLB & flooding | 94.70 | 26.60 |
| +TAVAT (Li and Qiu, 2021) | 91.90 | 25.60 |
| +TAVAT & flooding | 93.58 | 21.10 |
| +InfoBERT (Wang et al., 2021a) | 93.90 | 27.50 |
| +InfoBERT & flooding | 94.90 | 29.50 |

On the SST2 dataset, all three AT methods obtain improvements in the standard accuracy. FreeLB and TAVAT boost the adversarial robustness compared with BERT-base, while InfoBERT has degraded robustness. Furthermore, flooding can promote robustness upon FreeLB and InfoBERT, but the combination of TAVAT and flooding has a relatively low robust accuracy compared with TAVAT. For the AGNEWS dataset, all the combinations can promote standard accuracy except for TAVAT. Like the robust accuracy on the SST2 dataset, flooding can improve adversarial robustness upon FreeLB and InfoBERT while having a degraded robust accuracy compared with TAVAT.

It is an interesting observation that flooding can not boost adversarial robustness upon TAVAT against TextFooler. However, this work mainly focuses on mitigating robust overfitting issues against PGD attacks. This observation indicates that adversarial generalization gaps exist when the model defends against different attacks (e.g., PGD-based attacks and word-level textual adversarial attacks). It may be a generalization gap introduced by TAVAT itself. Overall, we leave this question for another promising direction of future work. It is also surprising that InfoBERT can not promote robustness on the SST2 dataset with the BERT-base architecture. This may be because we fix the attack iterations to 10 instead of using the settings in the original paper.

## 6 Conclusion

Robust overfitting prevents further improvement of adversarial robustness on PrLMs. While we adopt strong regularizers in AT, weight decay and dropout contribute little to mitigating robust overfitting. To prevent the model from simply memorizing the adversarial training data, we combine flooding with AT. Experimental results on extensive datasets and model architectures demonstrate that a reasonable flooding level helps mitigate robust overfitting. As a preliminary study, this work identifies the robust overfitting issue for PrLMs.
We hope the community can take robust overfitting into account when performing AT to achieve adversarially robust models. ## Limitations In this work, we mainly identify robust overfitting for PrLMs using PGD attacks instead of textual adversarial attacks. The reasons are two folds. First, we aim to check the learning curves during AT. Second, the results of textual adversarial attacks may not be generalizable since they integrate different strategies. In practice, however, it is more inclined to use some textual adversarial attack methods (e.g., TextFooler, TextBugger (Li et al., 2019)) to evaluate the robustness of NLP models. As we have clarified in Section 5.3.2, there exists an adversarial generalization gap when the model defends against PGD-based gradient attacks and textual adversarial attacks. While it is difficult to check their robust loss and accuracy curves during AT, it is necessary and promising to explore robust overfitting under textual adversarial attacks and provide helpful insights for promoting the adversarial robustness of PrLMs. ## Acknowledgements The authors would like to thank the anonymous reviewers for their helpful suggestions and comments. This work has been supported by the National Natural Science Foundation of China (61972426). ## References Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. 2019. Reconciling modern machinelearning practice and the classical bias–variance trade-off. *Proceedings of the National Academy of* Sciences, pages 15849–15854. Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, and Zhangyang Wang. 2021. Robust overfitting may be mitigated by properly learned smoothening. In *International Conference on Learning Representations*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Chengyu Dong, Liyuan Liu, and Jingbo Shang. 2022a. Label noise in adversarial training: A novel perspective to study robust overfitting. In *Advances in Neural Information Processing Systems*, pages 17556– 17567. Xinshuai Dong, Anh Tuan Luu, Rongrong Ji, and Hong Liu. 2021a. Towards robustness against natural language word substitutions. In *International Conference on Learning Representations*. Xinshuai Dong, Anh Tuan Luu, Min Lin, Shuicheng Yan, and Hanwang Zhang. 2021b. How should pretrained language models be fine-tuned towards adversarial robustness? In *Advances in Neural Information Processing Systems*, pages 4356–4369. Yinpeng Dong, Ke Xu, Xiao Yang, Tianyu Pang, Zhijie Deng, Hang Su, and Jun Zhu. 2022b. Exploring memorization in adversarial training. In *International Conference on Learning Representations*. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In *International Conference on Learning Representations*. Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021a. Debertav3: Improving deberta using electra-style pretraining with gradient-disentangled embedding sharing. *CoRR*, abs/2111.09543. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021b. Deberta: Decoding-enhanced bert with disentangled attention. In *International* Conference on Learning Representations. 
Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, and Pushmeet Kohli. 2019. Achieving verified robustness to symbol substitutions via interval bound propagation. In *Proceedings of the* 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4083–4093, Hong Kong, China. Association for Computational Linguistics. Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, and Masashi Sugiyama. 2020. Do we need zero training loss after achieving zero training error? In Proceedings of the 37th International Conference on Machine Learning, pages 4604–4614. Robin Jia, Aditi Raghunathan, Kerem Göksel, and Percy Liang. 2019. Certified robustness to adversarial word substitutions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4129–4142, Hong Kong, China. Association for Computational Linguistics. Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8018–8025. Anders Krogh and John Hertz. 1991. A simple weight decay can improve generalization. In *Advances in* Neural Information Processing Systems, pages 950– 957. Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. 2017. Adversarial machine learning at scale. In *International Conference on Learning Representations*. Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, and Bill Dolan. 2021a. Contextualized perturbation for textual adversarial attack. In *Proceedings of the 2021 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5053–5069, Online. Association for Computational Linguistics. Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2019. Textbugger: Generating adversarial text against real-world applications. In *26th Annual* Network and Distributed System Security Symposium, NDSS 2019, San Diego, California, USA, February 24-27, 2019. Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. BERT-ATTACK: Adversarial attack against BERT using BERT. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 6193–6202, Online. Association for Computational Linguistics. Linyang Li and Xipeng Qiu. 2021. Token-aware virtual adversarial training in natural language understanding. In *Thirty-Fifth AAAI Conference on Artificial* Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 8410–8418. Zongyi Li, Jianhan Xu, Jiehang Zeng, Linyang Li, Xiaoqing Zheng, Qi Zhang, Kai-Wei Chang, and Cho-Jui Hsieh. 2021b. Searching for an effective defender: Benchmarking defense against adversarial word substitution. 
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3137–3147, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Qin Liu, Rui Zheng, Bao Rong, Jingyi Liu, ZhiHua Liu, Zhanzhan Cheng, Liang Qiao, Tao Gui, Qi Zhang, and Xuanjing Huang. 2022. Flooding-X: Improving BERT's resistance to adversarial attacks via lossrestricted fine-tuning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5634– 5644, Dublin, Ireland. Association for Computational Linguistics. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations. Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2017. Virtual adversarial training: a regularization method for supervised and semisupervised learning. *CoRR*, abs/1704.03976. John Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 119–126, Online. Association for Computational Linguistics. Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro. 2017. Exploring generalization in deep learning. In *Advances in Neural* Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5947–5956. Nicolas Papernot, Patrick D. McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. 2016a. The limitations of deep learning in adversarial settings. In *IEEE European Symposium on* Security and Privacy, EuroS&P 2016, Saarbrücken, Germany, March 21-24, 2016, pages 372–387. Nicolas Papernot, Patrick D. McDaniel, Ananthram Swami, and Richard E. Harang. 2016b. Crafting adversarial input sequences for recurrent neural networks. *CoRR*, abs/1604.08275. Sylvestre-Alvise Rebuffi, Sven Gowal, Dan Andrei Calian, Florian Stimberg, Olivia Wiles, and Timothy Mann. 2021. Data augmentation can improve robustness. In *Advances in Neural Information Processing Systems*, pages 29935–29948. Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085– 1097, Florence, Italy. Association for Computational Linguistics. Leslie Rice, Eric Wong, and J. Zico Kolter. 2020. Overfitting in adversarially robust deep learning. In *Proceedings of the 37th International Conference on* Machine Learning, pages 8093–8104. Motoki Sato, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto. 2018. Interpretable adversarial perturbation in input embedding space for text. In *Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July* 13-19, 2018, Stockholm, Sweden, pages 4323–4330. Ali Shafahi, Mahyar Najibi, Mohammad Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein. 2019. Adversarial training for free! In Advances in Neural Information Processing Systems, pages 3358–3369. Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, and Cho-Jui Hsieh. 2020. 
Robustness verification for transformers. In International Conference on Learning Representations. Chenglei Si, Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun. 2021. Better robustness by more coverage: Adversarial and mixup data augmentation for robust finetuning. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 1569–1576, Online. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. *Journal of Machine Learning Research*, 15(56):1929–1958. David Stutz, Matthias Hein, and Bernt Schiele. 2020. Confidence-calibrated adversarial training: Generalizing to unseen attacks. In *Proceedings of the 37th International Conference on Machine Learning*, pages 9155–9166. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In *International Conference on Learning Representations*. Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, and Jingjing Liu. 2021a. Info{bert}: Improving robustness of language models from an information theoretic perspective. In *International* Conference on Learning Representations. Wenjie Wang, Pengfei Tang, Jian Lou, and Li Xiong. 2021b. Certified robustness to word substitution attack with differential privacy. In *Proceedings of the* 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1102–1112, Online. Association for Computational Linguistics. Xiaosen Wang, Yichen Yang, Yihe Deng, and Kun He. 2021c. Adversarial training with fast gradient projection method against synonym substitution based text attacks. In *Thirty-Fifth AAAI Conference on Artificial* Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 13997–14005. Eric Wong, Leslie Rice, and J. Zico Kolter. 2020. Fast is better than free: Revisiting adversarial training. In International Conference on Learning Representations. Dongxian Wu, Shu-Tao Xia, and Yisen Wang. 2020. Adversarial weight perturbation helps robust generalization. In *Advances in Neural Information Processing Systems*, pages 2958–2969. Yichen Yang, Xiaosen Wang, and Kun He. 2022. Robust textual embedding against word-level adversarial attacks. In Uncertainty in Artificial Intelligence, Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, UAI 2022, 1-5 August 2022, Eindhoven, The Netherlands, Proceedings of Machine Learning Research, pages 2214–2224. Mao Ye, Chengyue Gong, and Qiang Liu. 2020. 
SAFER: A structure-free approach for certified robustness to adversarial word substitutions. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 3465– 3475, Online. Association for Computational Linguistics. Chaojian Yu, Bo Han, Li Shen, Jun Yu, Chen Gong, Mingming Gong, and Tongliang Liu. 2022. Understanding robust overfitting of adversarial training and beyond. In *Proceedings of the 39th International Conference on Machine Learning*, pages 25595–25610. Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6066–6080, Online. Association for Computational Linguistics. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2017. Understanding deep learning requires rethinking generalization. In *International Conference on Learning Representations*. Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, and Bin Dong. 2019a. You only propagate once: Accelerating adversarial training via maximal principle. In Advances in Neural Information Processing Systems, pages 227–238. Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. 2019b. Theoretically principled trade-off between robustness and accuracy. In *Proceedings of the 36th International Conference on Machine Learning*, pages 7472– 7482. Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, and Mohan S. Kankanhalli. 2020. Attacks which do not kill training make adversarial learning stronger. In Proceedings of the 37th International Conference on Machine Learning, pages 11278–11287. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In *Advances in Neural Information Processing Systems*, pages 649–657. Bin Zhu, Zhaoquan Gu, Le Wang, Jinyin Chen, and Qi Xuan. 2022. Improving robustness of language models from a geometry-aware perspective. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3115–3125, Dublin, Ireland. Association for Computational Linguistics. Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2020. Freelb: Enhanced adversarial training for natural language understanding. In *International Conference on Learning Representations*. ## A More Evidence For Robust Overfitting We provide more results across datasets and model architectures to identify robust overfitting for PrLMs in Figure 10, which also empirically verifies our hypothesis that the model's memorization of adversarial training data results in robust overfitting. ## B **Comparison Of The Clean Cross-Entropy** Loss And The Adversarial Loss We vary the adversarial search steps and report the clean cross-entropy loss and the adversarial loss during training. In Figure 11, the adversarial loss becomes higher as the number of search steps gets larger, which implies that the adversarial loss dominates the training, leading to robust overfitting. ![13_image_0.png](13_image_0.png) ![14_image_0.png](14_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7. ✗ A2. Did you discuss any potential risks of your work? There are no potential risks in this work. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1. ✓ A4. 
Have you used AI writing assistants when working on this paper? We use Grammarly to check this paper for all the sections. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 5. ✓ B1. Did you cite the creators of artifacts you used? Section 5. ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We would discuss the license in our codes. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 5. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Not applicable. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Not applicable. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 5. ## C ✓ **Did You Run Computational Experiments?** Sections 4 And 5. ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? In this work, we do not focus on these terms, and we mainly discuss the performance gained. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5. ✗ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Following previous work, we provide results with a single run. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 5. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? 
For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
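To make the flooding objective in Equation 3 and its combination with adversarial training concrete, the following is a minimal sketch of one flooded AT step. How the clean and adversarial losses are mixed and the flooding level are illustrative assumptions, not the paper's exact recipe (the released GAT code is the authoritative implementation); `pgd_attack` stands in for an embedding-space adversary such as the one sketched in Section 3.1.

```python
# Hypothetical sketch of flooding (Eq. 3) applied on top of an adversarial
# training step; the clean/adversarial loss mixing and the flood level b are
# illustrative assumptions. `pgd_attack` is an assumed embedding-space adversary.

def flood(loss, b=0.2):
    # J = |J0 - b| + b: descend while the loss is above b, ascend once it
    # drops below, so the model cannot drive the adversarial training loss
    # to zero and simply memorize the perturbed data.
    return (loss - b).abs() + b

def flooded_at_step(model, batch, optimizer, pgd_attack, b=0.2):
    model.train()
    ids, mask, labels = batch["input_ids"], batch["attention_mask"], batch["labels"]
    clean_loss = model(input_ids=ids, attention_mask=mask, labels=labels).loss
    adv_embeds = pgd_attack(model, ids, mask, labels)   # perturbed embeddings
    adv_loss = model(inputs_embeds=adv_embeds, attention_mask=mask, labels=labels).loss
    loss = flood(0.5 * (clean_loss + adv_loss), b=b)    # loss scaling of Eq. 3
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```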
chen-etal-2023-improving-cross
Improving Cross-task Generalization of Unified Table-to-text Models with Compositional Task Configurations
https://aclanthology.org/2023.findings-acl.341
There has been great progress in unifying various table-to-text tasks using a single encoder-decoder model trained via multi-task learning (Xie et al., 2022). However, existing methods typically encode task information with a simple dataset name as a prefix to the encoder. This not only limits the effectiveness of multi-task learning, but also hinders the model's ability to generalize to new domains or tasks that were not seen during training, which is crucial for real-world applications. In this paper, we propose compositional task configurations, a set of prompts prepended to the encoder to improve cross-task generalization of unified models. We design the task configurations to explicitly specify the task type, as well as its input and output types. We show that this not only allows the model to better learn shared knowledge across different tasks at training, but also allows us to control the model by composing new configurations that apply novel input-output combinations in a zero-shot manner. We demonstrate via experiments over ten table-to-text tasks that our method outperforms the UnifiedSKG baseline by noticeable margins in both in-domain and zero-shot settings, with average improvements of +0.5 and +12.6 from using a T5-large backbone, respectively.
# Improving Cross-Task Generalization Of Unified Table-To-Text Models With Compositional Task Configurations Jifan Chen1∗ Yuhao Zhang2 Lan Liu2 **Rui Dong**2 Xinchi Chen2 Patrick Ng2 William Yang Wang2 **Zhiheng Huang**2 1The University of Texas at Austin 2AWS AI Labs [email protected] {yhzhang, liuall, ruidong}@amazon.com {xcc, patricng, wyw, zhiheng}@amazon.com ## Abstract There has been great progress in unifying various table-to-text tasks using a single encoderdecoder model trained via multi-task learning (Xie et al., 2022). However, existing methods typically encode task information with a simple dataset name as a prefix to the encoder. This not only limits the effectiveness of multitask learning, but also hinders the model's ability to generalize to new domains or tasks that were not seen during training, which is crucial for real-world applications. In this paper, we propose *compositional task configurations*, a set of prompts prepended to the encoder to improve cross-task generalization of unified models. We design the task configurations to explicitly specify the task type, dataset name, as well as its input and output types. We show that this not only allows the model to better learn shared knowledge across different tasks at training, but also allows us to control the model by composing new configurations that apply novel input-output combinations in a zero-shot manner. We demonstrate via experiments over ten table-to-text tasks that our method outperforms the UnifiedSKG baseline by noticeable margins in both in-domain and zero-shot settings, with average improvements of +0.5 and +12.6 from using a T5-large backbone, respectively. ## 1 Introduction Table-to-text tasks, such as table-based question answering (Pasupat and Liang, 2015; Herzig et al., 2020), summarization (Parikh et al., 2020), or fact verification (Chen et al., 2019), are of high interest to the NLP community and have been applied in many real-world applications. Traditionally, these tasks have been studied individually, with methods commonly optimized for one or a few tasks (Liu et al., 2021; Shi et al., 2021). However, with the recent popularity of pre-trained transformer models (Raffel et al., 2020; Lewis et al., 2020; Xue et al., *Work done during an internship at AWS AI Labs. ![0_image_0.png](0_image_0.png) Figure 1: An example of unifying different tasks with a single encoder-decoder model with dataset name as a prefix. The model is trained on short-form table QA and table summarization tasks, and tested on a new longform table QA task. As there is a mismatch between the training and test tasks, the model is unable to generalize. 2021), there has been a paradigm towards unifying multiple NLP tasks with a single encoder-decoder model (Khashabi et al., 2020; Sanh et al., 2022). More recently, UnifiedSKG (Xie et al., 2022) extended this paradigm to table-to-text tasks by flattening the structured input (e.g., tables) into text format, and unifying all tasks with a T5 model (Raffel et al., 2020). By training the model over 21 datasets with structured input, it has established new state-of-the-art results for most of these tasks. Despite the success, existing work often rely on a simple trick to encode task information: the name of the dataset is often used as a prefix to the encoder at both training and test time. We argue that this overly simplified design has at least two major limitations. 
First, since no detailed information about the task is provided, any sharable knowledge between tasks is learned in a latent manner. Second, with this design, models are trained and evaluated on their abilities to solve specific *datasets*, rather than *tasks*. As a result, we may see substantial performance degradation when we apply the model to an unseen task at test time. Figure 1 illustrates the aforementioned limitations: a unified model (such as UnifiedSKG) is trained on *short-form table QA* (Zhong et al., 2017) and *table-based summarization* (Parikh et al., 2020), and we want to test the trained model on long-form table QA (Nan et al., 2022a), where the model should take a question and a table as input and output an abstractive sentence as the answer. As there is no way to instruct the model about the information of the new task, the model can only make an educated guess by generating the most plausible text "Best Vocals" according to the training datasets, which fails to serve as a good longform answer. We therefore argue that it is critical to test a table-to-text model's *cross-task generalizability*, which is captured in neither the training methods nor the evaluation setup in existing work. In this paper, we propose the use of *compositional task configurations*, a set of text prompts prepended to the encoder to improve the cross-task generalizability of unified table-to-text models. For a given task, we design its configuration prompt to be compositional, describing the task type, dataset name, input type, and output type. This design offers at least two key advantages. First, the task configurations explicitly inform the model what is shared between different tasks. For example, the model is able to learn from the configurations that table-based fact checking and table-based QA share the same inputs but different outputs. Second and more importantly, using task configurations allows us to have explicit control over the model's behaviors. For the example in Figure 1, we can now compose a new configuration for long-form table QA at test time to instruct the model to first produce a set of relevant cells and synthesize them to produce a long-form answer, which is within the capabilities of the two training tasks. We discuss this further in the next section. Our evaluation focuses model's cross-task generalizability. Specifically, we train our model on 5 table-to-text datasets and test it on an additional set of 5 new datasets that cover either a new domain of an existing task or a new task of which the capabilities can be composed by the ones learned through the 5 training datasets. Our main findings can be summarized as follows: - Our method not only outperforms the strong UnifiedSKG baseline consistently on the 5 indomain datasets, but also demonstrates much stronger cross-task generalization. - In zero-shot evaluation on the 5 test-only tasks, our model outperforms UnifiedSKG by a substantial margin of +6.5 and +12.6 average scores from using T5-base and T5-large, respectively. Notably, we find that in zero-shot evaluation on FETAQA (Nan et al., 2022a), a long-form table QA task, while the baseline completely fails with a 0.6 F1 score, our method leads to much better generalization, achieving a 21.2 F1 score. - We also show that using the compositional task configurations allows the model to output supporting table cells that supplement its final prediction in a zero-shot manner. 
Human evaluation of the generated supporting cells for the TABFACT dataset reveals that more than 80% of the generated cells have high relevance to the task. ## 2 Method & Tasks Prompting is a natural and feasible way to impose explicit control over the behaviors of pre-trained language models (Wei et al., 2021; Chung et al., 2022; Sanh et al., 2022). In this work, we implement the task configurations as prompts of an encoder-decoder model. Each task configuration contains the following four aspects: task type, input type, output type, and dataset name. The task type is the end goal of a task, e.g., QA and summarization, as shown in Figure 2. Input and output types specify the inputs of the encoder and the outputs of the decoder of table-to-text models, respectively. These types can be compositional, for example, both long-form and short-form table QA in Figure 2 require the decoder to output a set of relevant cells and the final answer. The dataset name specifies the dataset used for training. As different datasets can share the same task type, input and output types, we assume that the model is able to learn the shared and the unique knowledge across different datasets by adding the dataset names as configurations. When testing the model on a new dataset, we can simply omit the dataset name since it is not trained. One of the major advantages of having explicit task configurations is that it enables the model to learn the mapping between a configuration and its behavior. At test time, we can compose a new set of configurations which suits best for an unseen task using the trained configurations. Figure 2 demon- ![2_image_0.png](2_image_0.png) strates that by training on short-form table QA, the model learns the ability to generate a set of relevant table cells according to the question and then derive the answer based on those cells. By training on table summarization, the model learns to produce a summary based on a set of table cells.1 At test time, by reformulating the long-form table QA as a query-based summarization task, our model is able to first generate a set of relevant cells (learned through short-form table QA) and then synthesize those cells to yield a long-form answer (learned from table summarization). Note that our method is an efficient extension to the original UnifiedSKG model. It only requires a small input prefix, comprising less than 5% of the total sequence length, making it flexible for generalization to more tasks and datasets. ## 2.1 Datasets And Task Configurations A detailed list of our datasets with their task information is shown in Table 1. We consider 5 datasets as in-domain datasets for both training and testing: WIKISQL (Zhong et al., 2017), WIKITQ (Pasupat and Liang, 2015), SQUAD (Rajpurkar et al., 2016), TOTTO (Parikh et al., 2020) and TABFACT (Chen et al., 2019). For SQUAD and TOTTO, since no official test set is released, we follow UnifiedSKG (Xie et al., 2022) and report results on the official development sets. In addition, we consider 5 datasets for test only: NQ-TABLES (Kwiatkowski et al., 2019; Herzig et al., 2021), HYBRIDQA (Chen et al., 2020), TATQA (Zhu et al., 2021), FETAQA (Nan et al., 2022b) and FEVEROUS (Aly et al., 2021). The test-only evaluation setup aims to assess the effectiveness of our method in enabling the model to generalize to unseen tasks with new compositional configurations, as well as to a new dataset with existing configurations. 
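To make the composition concrete, the sketch below shows how a configuration prefix could be assembled for a training dataset and composed anew for an unseen test-only task. This is an illustrative example rather than the released implementation; the bracketed markup follows the templates listed in Appendix B, and the helper name is ours.

```python
def build_config_prefix(task_type, input_types, output_types, dataset=None):
    """Compose the configuration prompt in the fixed order: task type,
    dataset name (training only), input types, then output types."""
    parts = [f"[Task: {task_type}]"]
    if dataset is not None:  # omitted for unseen, test-only datasets
        parts.append(f"[Dataset: {dataset}]")
    parts += [f"[Input: {t}]" for t in input_types]
    parts += [f"[Output: {t}]" for t in output_types]
    return " ".join(parts)

# Seen at training time (short-form table QA):
build_config_prefix("QA", ["query", "table"], ["cells", "short answer"], dataset="WikiSQL")
# -> "[Task: QA] [Dataset: WikiSQL] [Input: query] [Input: table] [Output: cells] [Output: short answer]"

# Composed at test time for long-form table QA, reformulated as query-based summarization:
build_config_prefix("Summarization", ["query", "table"], ["cells", "long answer"])
# -> "[Task: Summarization] [Input: query] [Input: table] [Output: cells] [Output: long answer]"
```

The second call corresponds to the zero-shot FETAQA configuration discussed below, where no dataset name is available and the remaining components are reused from the training tasks.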
Specifically, we test if the model can benefit from a combination of input configurations for tasks such as HYBRIDQA, TAT-QA, and FEVEROUS, which involve both passages and tables as inputs, even though the model is trained on only one or the other during training. Similarly, we examine if the configuration for FETAQA, a combination of WIKISQL and TOTTO as shown in Figure 2, allows for explicit control over the model's behaviors, resulting in improved generalization. Finally, we assess the model's ability to generalize to a new dataset, NQ-TABLES, which has the same configurations as WIKISQL and WIKITQ.

For all of these datasets, we linearize the tables following the strategy used in UnifiedSKG (Xie et al., 2022). By inserting several special tokens like vertical bars to indicate the boundaries between cells and rows, a table can be linearized as: "Headers: $h_1|...|h_m$, Row 1: $c_{11}|...|c_{1m}$ ... Row n: $c_{n1}|...|c_{nm}$". Here, $h_i$ denotes the $i$-th header of a table and $c_{ij}$ denotes the cell content in the $i$-th row and the $j$-th column. For simplicity, we fix the order of the task configurations to be task type, dataset name, input type, and output type. We prepend the task configuration to the original input of a dataset and feed it to the model. To make our input and output better aligned with the configurations, we also introduce some special markups to separate different parts of inputs and outputs: Figure 3 illustrates the actual model's input and output of the short-form table QA example from Figure 2. See Appendix B for inputs and outputs constructed for all of the datasets and Appendix A for preprocessing details of each dataset.

![3_image_0.png](3_image_0.png)

| Dataset | Task Type | Input | Output | Unseen? |
|-----------|------------------|-------------------------|---------------------------|-----------|
| WIKISQL | QA (table) | query + table | cells + short-form answer | - |
| WIKITQ | QA (table) | query + table | short-form answer | - |
| SQUAD | QA (text) | query + passage | short-form answer | - |
| TOTTO | Summarization | table | cells + summary | - |
| TABFACT | Fact-check | query + table | binary answer | - |
| NQ-TABLES | QA (table) | query + table | short-form answer | - |
| HYBRIDQA | QA (hybrid) | query + passage + table | short-form answer | ✓ |
| TAT-QA | QA (hybrid) | query + passage + table | short-form answer | ✓ |
| FETAQA | QA (abstractive) | query + table | cells + long-form answer | ✓ |
| FEVEROUS | Fact-check | query + passage + table | binary answer | ✓ |

## 3 Experiments

Experimental settings We evaluate our method by following the experimental setup shown in Table 1. We follow the experimental settings of UnifiedSKG (Xie et al., 2022) and use T5 (Raffel et al., 2020) as the backbone of our table-to-text model. Our implementation is based on the publicly released code of UnifiedSKG, which is developed based on the transformers library (Wolf et al., 2019). To balance the size of different datasets during training, we use the temperature upsampling method proposed in the original T5 paper and set the temperature to 2. For all experiments, we use a batch size of 128 and AdamW (Loshchilov and Hutter, 2018) as the optimizer with the initial learning rate set to 5e-5. We limit the maximum length of the input, including task configuration and the actual inputs, to be 1080 sentence-piece tokens. We train both the T5-base and T5-large models on the training set for 20 epochs and we use early stopping with the patience set to 2.
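As a rough illustration of the temperature upsampling step just mentioned, the sketch below computes per-dataset sampling probabilities for T = 2. This is a simplification for exposition only; the original T5 recipe additionally caps the contribution of very large datasets.

```python
import numpy as np

def temperature_sampling_probs(dataset_sizes, temperature=2.0):
    """p_i proportional to n_i ** (1 / T); T > 1 upsamples smaller datasets
    relative to purely size-proportional mixing."""
    sizes = np.asarray(dataset_sizes, dtype=float)
    rates = sizes ** (1.0 / temperature)
    return rates / rates.sum()

# Training-set sizes of the five in-domain datasets (Table 6):
# WikiSQL, WikiTQ, SQuAD, ToTTo, TabFact
print(temperature_sampling_probs([56355, 11321, 87599, 120761, 92283]).round(3))
```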
We use deepspeed (Rasley et al., 2020) to reduce the GPU memory loads when training the T5-large model. The approximate GPU hours for T5-base and T5-large are 250 and 650 respectively on A100 GPUs with 40G memory. | Train + Test Test-only | |--------------------------| Baseline We mainly compare our method against UnifiedSKG (Xie et al., 2022), a strong baseline that was shown to achieve state-of-the-art results on many table-to-text tasks via multi-task training. In UnifiedSKG, for each task, a dataset name is prepended to the encoder during multi-task fine- | Zero-shot Test-only Tasks | | | | | | | | |-----------------------------|------------|-------------|-----------|--------|----------|------|------| | Models | NQ-TABLES | FETAQA | HYBRIDQA | TAT-QA | FEVEROUS | Avg. | | | BLEU | EM | EM | EM | Acc. | - | | | | Single Task | 51.6 | 29.9 | 54.3 | 34.5 | 81.3 | 50.3 | | | T5-base | UnifiedSKG | 37.8 | 0.6 | 22.5 | 18.2 | 67.5 | 29.3 | | Task Configs (Ours) | 39.4 | 21.0 | 28.9 | 20.8 | 68.9 | 35.8 | | | Single Task | 52.2 | 33.0 | 56.6 | 36.2 | 82.1 | 52.0 | | | T5-large | UnifiedSKG | 42.6 | 0.7 | 34.1 | 20.4 | 41.4 | 27.8 | | Task Configs (Ours) | 43.0 | 25.2 | 38.0 | 20.8 | 75.0 | 40.4 | | | In-domain Tasks | | | | | | | | | Models | WIKISQL | WIKITQ | TOTTO | SQUAD | TABFACT | Avg. | | | EM | EM | BLEU (dev.) | EM (dev.) | Acc. | - | | | | Single Task | 81.6 | 35.8 | 36.7 | 83.6 | 76.1 | 62.8 | | | T5-base | UnifiedSKG | 82.9 | 41.1 | 37.2 | 82.5 | 77.1 | 64.2 | | Task Configs (Ours) | 83.5 | 42.5 | 37.4 | 83.0 | 77.5 | 64.8 | | | Single Task | 85.5 | 43.4 | 37.8 | 86.0 | 81.0 | 66.7 | | | T5-large | UnifiedSKG | 86.0 | 48.5 | 38.7 | 86.1 | 83.0 | 68.5 | | Task Configs (Ours) | 86.7 | 50.0 | 38.7 | 86.2 | 83.3 | 69.0 | | tuning as a pseudo-task configuration. For fair comparisons, we re-trained UnifiedSKG models on the five in-domain datasets by using the authors' implementation.2 Evaluation For the in-domain tasks, we simply train on their training sets and evaluate on their test sets. For the test-only tasks, we evaluate our method in two settings: 1) a **zero-shot** setting, where we directly apply the model trained on indomain datasets and use a new set of task configs designed for each test dataset; 2) a **few-shot** setting, where for each test dataset, we further fine-tune the model using n randomly sampled training examples (where n is small). Since we observed that the few-shot training is unstable and heavily depends on the sampled examples, we report average performance from 5 different random seeds (each with a different set of few-shot examples). ## 4 Results 4.1 Main Results We present the in-domain and zero-shot evaluation results for all datasets in Table 2 and the few-shot evaluation results for OOD datasets in Figure 4. We have the following observations: First, **using compositional task configs shows** much stronger performance on zero-shot datasets unseen at training time (Table 2). For example, the UnifiedSKG baseline fails to generalize at test time to FETAQA, a long-form table QA task where the input is a question-table pair and the output is a long-form abstractive answer. This is due to the baseline model having no clue of what format of output should be produced and what knowledge learned through the training datasets should be leveraged for this task. 
In contrast, by reformulating the long-form table QA as a querybased summarization task and composing the input configurations to be *table* and *query* as well as the output task configs to be *relevant cell* and *summary*, our method notably improves the zero-shot performance and closes the gap between zero-shot and single-task finetuning results. Note that among the zero-shot datasets, NQ-TABLES represents a new dataset for an existing task (short-form table QA), whereas others represent new tasks unseen at train- ![5_image_0.png](5_image_0.png) ing. Nevertheless, we found the improvements to be consistent for all zero-shot datasets, with average improvements of +6.5 and +12.6 for base and large models, respectively. Second, in most cases, **using compositional** task configs consistently improves the in-domain performance over the UnifiedSKG baseline and single-task training (Table 2). The observation is consistent for both base and large model sizes, with average improvements of +0.6 and +0.5 over UnifiedSKG, respectively. The improvement over single-task fine-tuning is even greater for all datasets. One explanation to this improvement is that by adding task configs we explicitly encourage the model to learn the shared knowledge between different tasks and datasets. Last but not the least, in few-shot evaluation (as depicted in Figure 4), we find that **using task configurations has improved few-shot learning performance for most test-time tasks**. Overall the difference between our method and the UnifiedSKG baseline is particularly notable when the number of supervised examples (n) is small; and the performance gap diminishes for HYBRIDQA, TAT-QA, and FEVEROUS as n gets larger. One possible explanation is that the prior captured by the task configurations during training is not closely aligned with these three datasets, when n getting larger, the prior introduced by the task configurations is gradually overridden by knowledge learned from the supervised data. ## 4.2 Ablation Of Task Configs At Training Time The impact of individual configurations on model performance was evaluated by removing one configuration at a time during training. The results, presented in Table 3, indicate that the removal of the **output type** resulted in the largest performance drop, as the model was only able to guess the de- | Models | FETAQA | NQ | HYBRID TAT FEVR Avg. | | | | |--------------|----------|------|------------------------|------|------|------| | Full configs | 21.0 | 39.4 | 28.9 | 20.8 | 68.9 | 35.8 | | - dataset | 21.2 | 36.1 | 25.3 | 19.8 | 68.1 | 34.1 | | - task type | 0.4 | 38.4 | 28.3 | 21.3 | 68.5 | 31.4 | | - input | 20.3 | 39.5 | 29.1 | 19.4 | 68.3 | 35.3 | | - output | 17.3 | 34.8 | 17.9 | 16.1 | 68.2 | 30.9 | sired output type based on learned parameters. The removal of the **input type** had the least impact on performance. This is likely due to the fact that learning the representation of the two input types was not difficult for the model, and explicitly informing the model about the input type does not provide significant benefit, as observed in the previous section. The removal of the **dataset name** also results in a performance drop, particularly on the NQ-TABLES dataset, indicating that even when the task type, input, and output are the same, including the dataset name helps the model learn dataset-independent knowledge more effectively. 
The removal of the **task type** results in a complete failure on the FETAQA dataset, demonstrating that all configurations are needed to produce the correct form of output. A more detailed discussion of these findings can be found in Section 7. ## 4.3 Ablation Of Task Configs At Test Time While our method demonstrates much stronger zero-shot task performance, it is crucial to understand the extent to which input and output configurations contribute to this success, particularly for tasks involving hybrid input or output types that are not present during training. To examine the contributions of input configurations, we remove each configuration from the hybrid tasks (HYBRIDQA, TAT-QA and FEVEROUS) at test time, with results shown in Table 4. We found that deleting either of the input configurations results in a performance drop in most cases, and the drop is quite notable when the table and passage input configurations are removed together. This suggests that the input configuration captures useful priors about the input during training, and **different configurations** can be combined to yield better performance in the zero-shot transfer to hybrid tasks. We also observe a similar trend in Figure 5 where we test the model performance by removing the *cell* output configurations for FETAQA (thereby skipping cell generation). We see that in both zero-shot and fewshot settings, model performance drops by a large margin. This shows not only that the model can generate different outputs by combining the output configurations, but also that it can better utilize the prior captured by the configurations to improve task performance. ## 4.4 Human Evaluation Of Generated Cells In addition to the strong task generalizability, a key advantage of applying the proposed task configurations to table-to-text tasks is that we can modify the task configurations to output more results for improved explainability, even when such a configuration combination is never seen at training time. An example of this is for the table-based fact verification task, TABFACT, instead of only generating a binary label, we can extend the output configuration to include a *cell* component that can serve as supporting evidence of the binary prediction. We include two examples of this setting in Figure 6. To understand how well our model can generate supporting cells without ever being trained for it, we conduct a human evaluation over 50 randomly sampled outputs from the TABFACT dataset. We ask annotators to manually evaluate the generated cells based on their level of **relevance** and **completeness**. Relevance denotes the usefulness of the generated cells in verifying a claim (precision) and completeness refers to the extent to which all of the relevant cells are generated (recall). Detailed annotation instructions are shown in Appendix C. For each aspect, we ask the annotators to select between three labels that characterize its degree: "full", "partial" or "none". Three of the authors conduct the annotations, achieving 0.72 and 0.80 Fleiss Kappa (Fleiss, 1971) for relevance and completeness, respectively. We conduct majority vote to get | Models | HYBRIDQA | TAT-QA | FEVR | Avg. | |-----------------|------------|----------|--------|--------| | Full Configs | 28.9 | 20.8 | 68.9 | 39.5 | | - input:passage | 28.8 | 19.8 | 68.6 | 39.1 | | - input:table | 29.4 | 20.3 | 66.7 | 38.8 | | - input:all | 27.3 | 19.3 | 66.1 | 37.6 | ![6_image_0.png](6_image_0.png) the consensus label and the results are shown in Table 5. 
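For clarity, the consensus and agreement computations described above can be reproduced with a short script such as the following (an illustrative sketch rather than the authors' code; the label set mirrors the three-way annotation scheme).

```python
import numpy as np
from collections import Counter

LABELS = ["full", "partial", "none"]

def majority_vote(labels):
    """Consensus label for one example: the most frequent annotator label."""
    return Counter(labels).most_common(1)[0][0]

def fleiss_kappa(annotations):
    """annotations: list of per-example label lists, one entry per rater."""
    n_raters = len(annotations[0])
    counts = np.array([[labels.count(c) for c in LABELS] for labels in annotations], dtype=float)
    p_j = counts.sum(axis=0) / counts.sum()  # overall category proportions
    P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    return (P_i.mean() - np.square(p_j).sum()) / (1 - np.square(p_j).sum())

ratings = [["full", "full", "partial"], ["none", "none", "none"], ["full", "partial", "partial"]]
print([majority_vote(r) for r in ratings])  # ['full', 'none', 'partial']
print(round(fleiss_kappa(ratings), 3))      # 0.333
```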
Overall we found that the model is able to generate cells with high relevance (with 72% examples being fully relevant generations), but struggle with full completeness (with 34% fully complete). ## 5 Related Work Table-to-text tasks Table-based tasks, including table-based question answering (Pasupat and Liang, 2015; Zhong et al., 2017; Chen et al., 2020; Cheng et al., 2022; Zhao et al., 2022), tablebased fact-checking (Chen et al., 2019; Aly et al., 2021), table summarization (Parikh et al., 2020; Suadaa et al., 2021; Moosavi et al., 2021), have gained increasing attention in recent years. A flurry of work using transformer-based structure explored modeling table structure via pretraining, for example, TabTransformer (Huang et al., 2020), VIME (Yoon et al., 2020), TABBIE (Iida et al., 2021), TaBERT (Yin et al., 2020), TUTA (Wang et al., 2021), TabT5 (Andrejczuk et al., 2022), and TableFormer (Yang et al., 2022). Our work mainly focuses on table-to-text tasks but the ideal neural architecture for encoding table structures is not our focus. Instead, we emphasize multi-task knowledge-sharing (similar to UnifiedSKG (Xie et al., 2022)) and cross-task gen- ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) | Full | Partial | None | | |--------------|-----------|--------|-----| | Relevance | 72% | 10% | 18% | | Completeness | 34% | 48% | 18% | Table 5: Human evaluation results of the zero-shot cell generation quality for the TABFACT task. eralization in table-to-text tasks. Also, the proposed framework is capable of generalizing to a broader range of tasks and datasets. Task unification There have been a vein of work that tries to solve various NLP tasks using a single model. This includes encoder-decoder models like T5 (Raffel et al., 2020), UnifiedQA (Khashabi et al., 2020), UnifiedQA2 (Khashabi et al., 2022), UnifiedSKG (Xie et al., 2022); decoderonly models driven by prompts, for example, GPT3 (Brown et al., 2020), Codex (Chen et al., 2021), PaLM (Chowdhery et al., 2022). Our work extends UnifiedSKG by using an encoder-decoder model as the backbone and designing prompts to encourage better knowledge sharing between different tasks and enable control over the model's behaviors. fi Cross-task generalization with pretrained models Various efforts have been made to improve the ability of unified models to generalize to new tasks and datasets, including instruction-tuning using a wide range of natural language instructions (Chung et al., 2022; Sanh et al., 2022; Wei et al., 2021; Zhong et al., 2021), better design of prompts in zero-shot and few-shot setting (Wei et al., 2022; Zhou et al., 2022; Kojima et al., 2022). Our proposed method differs from instruction-tuning models like FLAN (Wei et al., 2021) in that we use a more symbolized prompt structure and it is possible to attribute cross-task generalizability to specific tasks and configurations. Also, instructiontuning models like FLAN achieve behavior control and cross-task generalizability through costly largescale instruction tuning. In contrast, our approach demonstrates that within a specific task domain with **limited datasets like table-to-text**, this can be achieved by utilizing a compositional prompt structure. Our method is also relevant to Macaw (Tafjord and Clark, 2021), ProQA (Zhong et al., 2022b), and SchemaPro (Zhong et al., 2022a), which also utilize explicit task descriptions to facilitate knowledge sharing between various NLP tasks. 
Our work differs in two main aspects: (1) Our work focuses on compositional generalization at test time, examining whether the model can combine different configurations from multiple tasks during training to generalize to unseen tasks at test time. (2) Our work focuses on table-to-text tasks. ## 6 Conclusion We introduced compositional task configurations for unified table-to-text models. Compared to existing unified encoder-decoder models that simply use dataset names as input prefix, compositional task configurations allow us to specify the task type, input, and output types at a finer level, which improve multi-task learning effectiveness and cross-task generalization. Further, we showed that our method allows fine-grained control over the model's generation at test time, enabling the model to generalize to unseen tasks and improving explainability via generating high-quality supporting table cells. ## 7 Limitations Task Configurations Are Entangled With The Full model parameters. In our ablation study of task configurations at training time (Table 3), we see that when training without **task type**, the model fails to generalize to FETAQA. Upon examining the model output, we find that although we change the output configuration to "long answer", the model still produces a short-form answer. This indicates that model behaviors are not always aligned with a single configuration, leading us to question the extent to which each individual configuration influences the model. In order to have better and more interpretable control over the models, one potential avenue for future research is to develop pluggable task configurations, where each configuration controls a more atomic function of the model and can be plugged, unplugged, and combined to yield different model behaviors. Our exploration scope is limited to table-to-text tasks. Due to the constraints of the computational resources, we haven't explored joint training with a broader range of other NLP tasks. We think with some modifications, such as the inclusion of dataset domains in the configuration set, it would be possible to extend our approach to additional datasets and tasks. ## 8 Ethics Statement The authors of this paper are committed to conducting research ethically. Data used in this work has been collected from public sources and used in accordance with all applicable laws and regulations. The only area of work that involves human annotation of data is described in Section 4.4, where authors of this paper annotated a group of samples for analyzing models' behaviors. We ensure that no external human subject was involved or harmed. In addition, this work uses language models, for which the risks and potential harms are discussed in numerous previous works (Bender et al., 2021; Weidinger et al., 2021). The authors strive to ensure that the research and its results do not cause harm. ## 9 Acknowledgement We would like to thanks the anonymous reviewers for their insightful comments and suggestions. ## References Rami Aly, Zhijiang Guo, Michael Sejr Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021. The fact extraction and VERification over unstructured and structured information (FEVEROUS) shared task. In *Proceedings of the* Fourth Workshop on Fact Extraction and VERification (FEVER), pages 1–13, Dominican Republic. Association for Computational Linguistics. Ewa Andrejczuk, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, and Yasemin Altun. 2022. 
Table-to-text generation and pre-training with tabt5. arXiv preprint arXiv:2210.09162. Emily M Bender, Timnit Gebru, Angelina McMillanMajor, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In *Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency*. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. *arXiv preprint* arXiv:2107.03374. Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2019. Tabfact: A large-scale dataset for table-based fact verification. In *International Conference on Learning Representations*. Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Yang Wang. 2020. HybridQA: A dataset of multi-hop question answering over tabular and textual data. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1026–1036, Online. Association for Computational Linguistics. Zhoujun Cheng, Haoyu Dong, Zhiruo Wang, Ran Jia, Jiaqi Guo, Yan Gao, Shi Han, Jian-Guang Lou, and Dongmei Zhang. 2022. HiTab: A hierarchical table dataset for question answering and natural language generation. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics* (Volume 1: Long Papers), pages 1094–1110, Dublin, Ireland. Association for Computational Linguistics. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. *arXiv preprint* arXiv:2204.02311. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. *Psychological bulletin*, 76(5):378. Jonathan Herzig, Thomas Mueller, Syrine Krichene, and Julian Eisenschlos. 2021. Open domain question answering over tables via dense retrieval. In *Proceedings of the 2021 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies. Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Martin Eisenschlos. 2020. TAPAS: Weakly supervised table parsing via pre-training. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Xin Huang, Ashish Khetan, Milan Cvitkovic, and Zohar Karnin. 2020. Tabtransformer: Tabular data modeling using contextual embeddings. *arXiv preprint* arXiv:2012.06678. 
Hiroshi Iida, Dung Thai, Varun Manjunatha, and Mohit Iyyer. 2021. TABBIE: Pretrained representations of tabular data. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3446–3456, Online. Association for Computational Linguistics. Daniel Khashabi, Yeganeh Kordi, and Hannaneh Hajishirzi. 2022. Unifiedqa-v2: Stronger generalization via broader cross-format training. *arXiv preprint* arXiv:2202.12359. Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UNIFIEDQA: Crossing format boundaries with a single QA system. In *Findings of the Association for Computational Linguistics:* EMNLP 2020, pages 1896–1907, Online. Association for Computational Linguistics. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. *arXiv preprint* arXiv:2205.11916. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. *Transactions of the Association for Computational Linguistics*, 7:452–466. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, and Jian-Guang Lou. 2021. Tapex: Table pre-training via learning a neural sql executor. In International Conference on Learning Representations. Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In *International Conference on Learning Representations*. Nafise Sadat Moosavi, Andreas Rücklé, Dan Roth, and Iryna Gurevych. 2021. Learning to reason for text generation from scientific tables. arXiv preprint arXiv:2104.08296. Linyong Nan, Chiachun Hsieh, Ziming Mao, Xi Lin, Neha Verma, Rui Zhang, Wojciech Krysci ´ nski, Nick ´ Schoelkopf, Riley Kong, Xiangru Tang, et al. 2022a. Fetaqa: Free-form table question answering. *Transactions of the Association for Computational Linguistics*, 10:35–49. Linyong Nan, Chiachun Hsieh, Ziming Mao, Xi Victoria Lin, Neha Verma, Rui Zhang, Wojciech Krysci ´ nski, ´ Hailey Schoelkopf, Riley Kong, Xiangru Tang, Mutethia Mutuma, Ben Rosand, Isabel Trindade, Renusree Bandaru, Jacob Cunningham, Caiming Xiong, Dragomir Radev, and Dragomir Radev. 2022b. FeTaQA: Free-form table question answering. *Transactions of the Association for Computational Linguistics*, 10:35–49. Ankur Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan Das. 2020. ToTTo: A controlled table-to-text generation dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1173–1186, Online. Association for Computational Linguistics. Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1470– 1480, Beijing, China. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In *Proceedings of the 26th* ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3505–3506. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2022. Multitask prompted training enables zeroshot task generalization. In *The Tenth International* Conference on Learning Representations. Peng Shi, Patrick Ng, Zhiguo Wang, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Cicero Nogueira dos Santos, and Bing Xiang. 2021. Learning contextual representations for semantic parsing with generation-augmented pre-training. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13806–13814. Lya Hulliyyatus Suadaa, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura, and Hiroya Takamura. 2021. Towards table-to-text generation with numerical reasoning. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1451–1465, Online. Association for Computational Linguistics. Oyvind Tafjord and Peter Clark. 2021. General-purpose question-answering with macaw. *arXiv preprint* arXiv:2109.02593. Zhiruo Wang, Haoyu Dong, Ran Jia, Jia Li, Zhiyi Fu, Shi Han, and Dongmei Zhang. 2021. Tuta: treebased transformers for generally structured table pretraining. In *Proceedings of the 27th ACM SIGKDD* Conference on Knowledge Discovery & Data Mining, pages 1780–1790. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. In *International Conference on Learning Representations*. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*. Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, et al. 2021. Ethical and social risks of harm from language models. *arXiv preprint arXiv:2112.04359*. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-ofthe-art natural language processing. arXiv preprint arXiv:1910.03771. 
Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I Wang, et al. 2022. Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models. *arXiv preprint arXiv:2201.05966*. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics. Jingfeng Yang, Aditya Gupta, Shyam Upadhyay, Luheng He, Rahul Goel, and Shachi Paul. 2022. TableFormer: Robust transformer modeling for tabletext encoding. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 528–537, Dublin, Ireland. Association for Computational Linguistics. Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel. 2020. TaBERT: Pretraining for joint understanding of textual and tabular data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8413–8426, Online. Association for Computational Linguistics. Jinsung Yoon, Yao Zhang, James Jordon, and Mihaela van der Schaar. 2020. Vime: Extending the success of self-and semi-supervised learning to tabular domain. Advances in Neural Information Processing Systems, 33:11033–11043. Yilun Zhao, Yunxiang Li, Chenying Li, and Rui Zhang. 2022. MultiHiertt: Numerical reasoning over multi hierarchical tabular and textual data. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6588–6600, Dublin, Ireland. Association for Computational Linguistics. Ruiqi Zhong, Kristy Lee, Zheng Zhang, and Dan Klein. 2021. Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections. In *Findings of the Association for Computational* Linguistics: EMNLP 2021, pages 2856–2878, Punta Cana, Dominican Republic. Association for Computational Linguistics. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103. Wanjun Zhong, Yifan Gao, Ning Ding, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, and Nan Duan. 2022a. Improving task generalization via unified schema prompt. *arXiv preprint arXiv:2208.03229*. Wanjun Zhong, Yifan Gao, Ning Ding, Yujia Qin, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, and Nan Duan. 2022b. ProQA: Structural promptbased pre-training for unified question answering. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4230–4243, Seattle, United States. Association for Computational Linguistics. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. 2022. Least-to-most prompting enables complex reasoning in large language models. *arXiv preprint* arXiv:2205.10625. Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and TatSeng Chua. 2021. TAT-QA: A question answering benchmark on a hybrid of tabular and textual content in finance. 
In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3277–3287, Online. Association for Computational Linguistics. ## A Dataset Statistics And Preprocessing The statistics and the licenses of the datasets we used in this paper are shown in Table 6. All datasets are English-based and all of them are based on Wikipedia except for TAT-QA which is based on financial reports. To fit the data into the encoder, for all datasets, we limit the max length of each cell to be 15 sentence-piece tokens. If the length (measured by sentence-piece tokens) of the linearized table is longer than 1024, we truncate random rows to reduce the table size. The original annotation of WIKISQL dataset does not include the relevant cells. We extract the relevant cells by executing the accompanied SQL query annotations. In most cases, the relevant cells equal to the final answer annotations; for the rest of the cases, aggregations or numerical operations need to be run to obtain the final answer. During training, we also create another version of the WIKISQL dataset, in which we exclude the relevant cells and only use the final answer as supervision to improve output diversity. We use both versions at training time. NQ-TABLES is a table-based QA dataset derived from the NaturalQuestions dataset (Kwiatkowski et al., 2019) and was originally released by Herzig et al. (2021). The original test set of NQ-TABLES contains 966 unique examples. In our experiments, to make the dataset more compatible with other table-based QA tasks, we evaluate on a customized version of NQ-TABLES where we only include an example if the answer is uniquely locatable as one or more table cells. This filtering step results in 549 unique triples of table, question and answers. To make TOTTO more compatible with other table-to-text tasks, we feed the selected cells as inputs to the decoder, as we mentioned in Section 2. Also, we find it helpful to create a reversed version of the TOTTO dataset, where we treat the annotated summary and the table as input and let the model predict the relevant cells. We add both versions of TOTTO to the training of all models, including the baseline. ## B Task Configurations Applied For Each Dataset Below we list the task configurations applied to all datasets. For each dataset, we present input to the encoder and output from the decoder separately. For encoder, we include the template of the full input, including task configurations as well as how the dataset input is structured (with actual data replaced by "..."). For decoder, we include how we structure the annotated output during training and how we parse the output during testing. 
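As an illustration of the parsing step mentioned above, the following sketch (an illustrative helper, not the released code) recovers the optional cell list and the final answer from generations that follow the markup used in the templates below.

```python
import re

def parse_decoder_output(text):
    """Extract supporting cells (split on '|') and the final answer from a
    generation such as "[cell] a | b [/cell] [answer] yes [/answer]"."""
    cells_match = re.search(r"\[cell\](.*?)\[/cell\]", text, flags=re.S)
    answer_match = re.search(r"\[answer\](.*?)\[/answer\]", text, flags=re.S)
    cells = [c.strip() for c in cells_match.group(1).split("|")] if cells_match else []
    answer = answer_match.group(1).strip() if answer_match else text.strip()
    return cells, answer

print(parse_decoder_output("[cell] 1 | canada [/cell] [answer] entailed [/answer]"))
# (['1', 'canada'], 'entailed')
```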
| Train+Test Test-only | |------------------------| | Dataset | Train | Dev | Test | License | Domain | Language | |-----------|---------|--------|--------|--------------------|-----------|------------| | WIKISQL | 56,355 | 8,421 | 15,878 | BSD 3-Clause | Wikipedia | English | | WIKITQ | 11,321 | 2,810 | 4,344 | CC BY-SA 4.0 | Wikipedia | English | | SQUAD | 87,599 | 10,570 | - | CC BY-SA 4.0 | Wikipedia | English | | TOTTO | 120,761 | 7,700 | - | CC BY-SA 3.0 | Wikipedia | English | | TABFACT | 92,283 | 12,792 | 9,750 | CC BY 4.0 | Wikipedia | English | | NQ-TABLES | - | - | 549 | Apache License 2.0 | Wikipedia | English | | HYBRIDQA | - | 3,466 | - | CC BY 4.0 | Wikipedia | English | | TAT-QA | - | - | 1,669 | MIT License | Finance | English | | FETAQA | - | - | 2,003 | CC BY-SA 4.0 | Wikipedia | English | | FEVEROUS | - | 7,890 | - | CC BY-SA 3.0 | Wikipedia | English | Table 6: Dataset statistics of the datasets we used in the paper. Note that for the test-only datasets, except for few-shot experiments, only the test splits of the original datasets are used. For SQUAD, HYBRIDQA, FEVEROUS, and TOTTO, as no public test set is offered, we evaluate the model on the original development sets following UnifiedSKG (Xie et al., 2022). For NQ-TABLES, we use a modified version of it in our experiments as described in Appendix A. ## B.1 Wikisql Encoder: [Task: QA] [Dataset: WikiSQL] [Input: query] [Input: table] [Output: cells] [ Output: short answer] [query] ... [/ query] [table] ... [/table] Decoder: [cell] ... [/cell] [answer] ... [/answer ] ## B.2 Wikitq Encoder: [Task: QA] [Dataset: WikiTQ] [Input: query] [Input: table] [Output: short answer] [query] ... [/query] [table] ... [/table] Decoder: [answer] ... [/answer] ## B.3 Squad Encoder: [Task: QA] [Dataset: SQuAD] [Input: query] [Input: passage] [Output: short answer] [query] ... [/query] [passage] ... [/passage] Decoder: [answer] ... [/answer] ## B.4 Totto Encoder: [Task: Summarization] [Dataset: ToTTo] [ Output: cells] [Output: long answer] Decoder: [cell] ... [/cell] [answer] ... [/answer ] ## B.5 Tabfact Encoder: [Task: Fact-checking] [Dataset: TabFact] [Input: query] [Input: table] [Output: binary answer] [query] ... [/query] [ table] ... [/table] Decoder: [answer] ... [/answer] ## B.6 Nq-Tables Encoder: [Task: QA] [Input: query] [Input: table] [Output: short answer] [query] ... [/ query] [table] ... [/table] Decoder: [answer] ... [/answer] ## B.7 Hybridqa Encoder: [Task: QA] [Input: query] [Input: table] [Input: passage] [Output: short answer] [query] ... [/query] [table] ... [/ table] [passage] ... [/passage] Decoder: [answer] ... [/answer] ## B.8 Tat-Qa Encoder: [Task: QA] [Input: query] [Input: table] [Input: passage] [Output: short answer] [query] ... [/query] [table] ... [/ table] [passage] ... [/passage] Decoder: [answer] ... [/answer] ## B.9 Fetaqa Encoder: [Task: Summarization] [Input: query] [ Input: table] [Output: cells] [Output: long answer] [query] ... [/query] [table ] ... [/table] Decoder: [cell] ... [/cell] [answer] ... [/answer ] ## B.10 Feverous Encoder: [Task: Fact-checking] [Input: query] [ Input: table] [Input: passage] [Output: binary answer] [query] ... [/query] [ table] ... [/table] [passage] ... [/ passage] Decoder: [answer] ... [/answer] ## B.11 Wikisql-Answer-Only Encoder: [Task: QA] [Dataset: WikiSQL] [Input: query] [Input: table] [Output: short answer] [query] ... [/query] [table] ... [/table] Decoder: [answer] ... 
[/answer] ## B.12 Totto-Reverse Encoder: [Task: Cell-generation] [Input: query] [ Input: table] [Output: cell] [query] ... [/query] [table] ... [/table] Decoder: [cell] ... [/cell] ## C Annotation Interface The annotation interface we used for our human study in this paper is shown in Figure 7 ## Zero-Shot Cell Generation (Tabfact) Annotation Guideline: For each claim, the corresponding table, and the generated cells, our main goal is to evaluate the generated cells under the following two aspects: 1. cell relevance: whether the predicted cells are relevant to check the claim (precision). We have the following three labels: (1) relevant: all cells are relevant (precision == 1) (2) partially relevant: some cells are relevant (0 < precision < 1) (3) irelevant: none of the cells are relevant (precision == 0) 2. cell completeness: whether the predicted cells contain all information needed. For example, claim mentions two entities but only one is predicted. We have the following three labels: (1) complete: all necessary information is contained by the cells. (2) partially complete: only part of the information is covered by the cells. (3) incomplete: none of the information is covered by the cells. Example ID: 17 Submit ## Input Claim: the top 2 ranked team be in north america ## Output: [cell] 1 | sherri baier / robin cowan | canada | 2 | lorene mitchell / donald mitchell | united states [/cell] [answer] entailed [/answer] | rank | name | nation | points | places | |--------|------------------------------------|----------------|----------|----------| | 1 | sherri baier / robin cowan | canada | 128.39 | 9 | | 2 | lorene mitchell / donald mitchell | united states | 124.94 | 16 | | 3 | elizabeth cain / peter cain | australia | 116.67 | 33 | | 4 | jana bláhová / ludek | czechoslovakia | 113.74 | 36 | | 5 | sabine fuchs / xavier vide | france | 114.12 | 39 | | 6 | karen wood / stephen baker | united kingdom | 100.33 | 55 | | 7 | catherine brunet / philippe brunet | france | 94.27 | 62 | ## Table Caption: 1976 world junior figure skating championships Figure 7: Instruction and the annotation interface we used for the human study in section 4.4. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7 ✓ A2. Did you discuss any potential risks of your work? Section 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Last paragraph of Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2.1 ✓ B1. Did you cite the creators of artifacts you used? Section 2.1 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix A ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix A ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We checked the data-collecting procedure of the datasets we used in this paper and found no specific offensive content is included. 
However, we did not take further action to conduct a more comprehensive review of each dataset, as they are widely used within the community and the large volume of datasets we employed made this impractical. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Appendix A ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix A The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ## C ✓ **Did You Run Computational Experiments?** Section 3 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 3 ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 3 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 3 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 3 ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 4.4 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Section 4.4 Appendix C D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
hassan-alikhani-2023-calm
D-CALM: A Dynamic Clustering-based Active Learning Approach for Mitigating Bias
https://aclanthology.org/2023.findings-acl.342
Despite recent advancements, NLP models continue to be vulnerable to bias. This bias often originates from the uneven distribution of real-world data and can propagate through the annotation process. Escalated integration of these models in our lives calls for methods to mitigate bias without overbearing annotation costs. While active learning (AL) has shown promise in training models with a small amount of annotated data, AL's reliance on the model's behavior for selective sampling can lead to an accumulation of unwanted bias rather than bias mitigation. However, infusing clustering with AL can overcome the bias issue of both AL and traditional annotation methods while exploiting AL's annotation efficiency. In this paper, we propose a novel adaptive clustering-based active learning algorithm, D-CALM, that dynamically adjusts clustering and annotation efforts in response to an estimated classifier error-rate. Experiments on eight datasets for a diverse set of text classification tasks, including emotion, hatespeech, dialog act, and book type detection, demonstrate that our proposed algorithm significantly outperforms baseline AL approaches with both pretrained transformers and traditional Support Vector Machines. D-CALM showcases robustness against different measures of information gain and, as evident from our analysis of label and error distribution, can significantly reduce unwanted model bias.
# D-Calm: A Dynamic Clustering-Based Active Learning Approach For Mitigating Bias Sabit Hassan and Malihe Alikhani ![0_image_0.png](0_image_0.png) School of Computing and Information University of Pittsburgh, Pittsburgh, PA {sah259,malihe}@pitt.edu ## Abstract Despite recent advancements, NLP models continue to be vulnerable to bias. This bias often originates from the uneven distribution of real-world data and can propagate through the annotation process. Escalated integration of these models in our lives calls for methods to mitigate bias without overbearing annotation costs. While **active learning (AL)** has shown promise in training models with a small amount of annotated data, AL's reliance on the model's behavior for selective sampling can lead to an accumulation of unwanted bias rather than bias mitigation. However, infusing clustering with AL can overcome the bias issue of both AL and traditional annotation methods while exploiting AL's annotation efficiency. In this paper, we propose a novel adaptive clustering-based active learning algorithm, **D-CALM**, that dynamically adjusts clustering and annotation efforts in response to an estimated classifier errorrate. Experiments on eight datasets for a diverse set of text classification tasks, including emotion, hatespeech, dialog act, and book type detection, demonstrate that our proposed algorithm significantly outperforms baseline AL approaches with both pretrained transformers and traditional Support Vector Machines. **DCALM** showcases robustness against different measures of information gain and, as evident from our analysis of label and error distribution, can significantly reduce unwanted model bias. ## 1 Introduction While NLP models have experienced groundbreaking advancements in performance and functionality in recent years, they have been under scrutiny for exhibiting bias (Lu et al., 2020; Ahn and Oh, 2021; Kiritchenko and Mohammad, 2018). As noted by Davidson et al. (2019), classifier bias can stem from distribution in training data rather than the classifier itself. This bias is complex and can manifest in various forms, including racial, gender-based, and other types of discrimination. For example, Figure 1: Example scenario: classifiers may not perform well for underrepresented groups in the data. Here, the classifier has a high error rate in detecting hatespeech (HS) against persons of color. Thus, annotation effort should be focused on regions (upper-right) more likely to contain hatespeech against persons of color. in a hatespeech dataset, hatespeech against Persons of Color might be underrepresented, leading to a model biased against Persons of Color (Figure 1). Since the true distribution of data is unknown prior to labeling, ridding these models of such unwanted bias would require annotating a large number of samples to ensure that minority groups are well-represented in the data, incurring much higher cost, time, and effort. As such, we are in need of methods that can mitigate unwanted bias without overwhelming annotation costs. We address the problem of bias with a novel *clustering-based* active learning approach. Although active learning (Settles, 2009) is regarded as an efficient method for training models, generic active learning methods can induce bias (Krishnan et al., 2021) rather than mitigate it. 
Al5540 though there have been numerous works aimed at mitigating bias of active learning methods by the machine learning community (Farquhar et al., 2021; Gudovskiy et al., 2020), these approaches often necessitate an in-depth comprehension of machine learning and active learning theories. We hypothesize that infusing clustering with active learning will allow us to overcome bias issues of both generic active learning and traditional annotation approaches while leveraging the annotation efficiency of active learning. To this end, we propose a novel dynamic clustering-based algorithm that can substantially improve performance and mitigate bias —**D-CALM** (Dynamic Clustering-based Active Learning for Mitigating Bias)1. **D-CALM** leverages the distance between a classifier's predictions and true labels in dynamically-adjusted subregions within the data. As opposed to existing active learning methods (Bodó et al., 2011; Berardo et al., 2015) that utilize static clustering of data, our proposed algorithm adapts the clustering in each iteration of active learning. As the classifier gets updated in each iteration, the classifier's error rate changes in different regions. By calibrating the boundaries of clusters iteratively, **D-CALM** focuses annotation effort in updated regions with the evolving classifier's error-rate. As **D-CALM** dynamically adapts its regions for obtaining samples, we hypothesize that our approach will result in reduced bias. Similar to Hassan et al. (2018), we expect bias reduction to be reflected in improved performance metrics and more balanced label and error distribution. We test our hypothesis across eight datasets, spanning a diverse range of text classification tasks (e.g., fine-grained hatespeech, dialog act, emotion detection) and a case study of fine-grained hatespeech detection. Our algorithm is model agnostic, showing substantial improvement for both pretrained models and lightweight Support Vector Machines. Our experiments also demonstrate robustness of **D-CALM** with respect to different measures of information gain. ## 2 Related Work Active learning is a well-studied problem in machine learning (Settles, 2009) with numerous scenarios and query strategies (Section 3). Although active learning has shown promise in many tasks, susceptibility to bias, particularly for neural networks, is a concern raised by several works (Yuan et al., 2020). There are existing works that aim to mitigate this bias. Farquhar et al. (2021) proposes using corrective weights to mitigate bias. Gudovskiy et al. (2020) propose selfsupervised Fischer-Kernel for active learning on biased datasets. These approaches, however, often require a deep understanding of active learning and neural networks. Our approach is tailored for the NLP community and can easily be deployed. In recent years, there has been a renewed interest in active learning within the NLP community (Zhang et al., 2022). Some recent works have applied active learning with BERT models for specific tasks such as intent classification (Zhang and Zhang, 2019), sentence matching (Bai et al., 2020), parts-of-speech tagging (Chaudhary et al., 2021) or named entity recognition (Liu et al., 2022). Margatina et al. (2022) propose continued pretraining on unlabeled data for active learning. Rotman and Reichart (2022) adapt active learning to multi-task scenarios for transformer models. Ein-Dor et al. (2020) perform a large-scale empirical study of existing active learning strategies on binary classification tasks. 
In comparison, we target a diverse range of binary and multi-class classification tasks. Some other works in the NLP domain have adapted advanced active learning approaches. Yuan et al. (2020) adapt the BADGE (Ash et al., 2020) framework for active learning with BERT. While BADGE computes gradient embeddings using the output layer of a neural network and then clusters the gradient space, Yuan et al. (2020) compute surprisal embeddings by using the Masked Language Model loss. Margatina et al. (2021) use acquisition functions to obtain contrastive samples for BERT. Our algorithm is comparatively straightforward, not requiring in-depth understanding of surprisal embeddings or acquisition functions. Our algorithm is also model-agnostic and can be applied to neural networks such as BERT, and traditional models such as SVMs. In addition, our clustering step relies on a feature representation independent from the learner's representation, which may induce bias during the learning process. While some of the aforementioned works (Ein-Dor et al., 2020; Yuan et al., 2020; Margatina et al., 2021) compute diversity in selected samples, our work is the first to analyze and address bias in active learning from a socio-cultural perspective. ## 3 Background This section presents the relevant background of generic active learning, followed by a discussion of adapting the clustering-based active learning framework for text classification. Within the scope of this paper, we focus on creating the train data. We assume that the dev and test data are already created. The literature on active *testing* (Kumar and Raj, 2018; Hassan et al., 2018) can be referred to for efficiently creating the dev and test sets. ## 3.1 Active Learning Framework Due to the expanse of active learning literature, it is important to define the generic active learning framework within the scope of this paper. To do so, we need to define the *labeling scenario* and query-strategy. ## 3.1.1 Labeling Scenario In our work, we assume there is a large pool of unlabeled data U but only a small set of labeled data L that can be obtained. L is iteratively constructed by querying the label for the *most-informative* instance. We focus on *pool-based* active learning because of its relevance to many recent NLP tasks (e.g., hatespeech detection), for which a large amount of unlabeled data is scraped from the web and then a subset of it is manually annotated. ## 3.1.2 Query-Strategy Many types of query-strategies have been proposed for active learning over the years, including, but not limited to: uncertainty sampling (Lewis and Gale, 1994), expected model change (Settles et al., 2007), expected error reduction (Roy and McCallum, 2001), and variance reduction (Hoi et al., 2006). In our work, we focus on uncertainty sampling because of its popularity and synergy with pool-based sampling (Settles, 2009). Settles (2009) lists three measures of uncertainty to identify the most informative sample: Least Confident: Query the instance whose prediction is the least confident. $$x_{LC}^{*}=\operatorname*{argmax}_{x}\;1-P_{\theta}(\hat{y}|x)\qquad(1)$$ In Eq. 1, $\hat{y}=\operatorname*{argmax}_{y}P_{\theta}(y|x)$, or the class label with the highest probability. Smallest Margin: Query the sample with the minimum difference between the two most likely classes: $$x_{MS}^{*}=\operatorname*{argmin}_{x}\;P_{\theta}(\hat{y}_{1}|x)-P_{\theta}(\hat{y}_{2}|x)\qquad(2)$$ Entropy: The most commonly used measure of uncertainty is entropy: $$x_{E}^{*}=\operatorname*{argmax}_{x}\;-\sum_{i}P_{\theta}(y_{i}|x)\log P_{\theta}(y_{i}|x)\qquad(3)$$ In Eq. 3, i ranges over all possible labels.
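To make these measures concrete, the snippet below scores a pool of samples from a matrix of predicted class probabilities. This is a minimal NumPy illustration of Eqs. 1-3, not the authors' implementation; the sign convention (higher score means query first) and the numerical example are our own.

```python
import numpy as np

def least_confident(probs):
    # Eq. 1: 1 - P(y_hat | x); higher means the model is less confident
    return 1.0 - probs.max(axis=1)

def smallest_margin(probs):
    # Eq. 2: margin between the two most likely classes, negated so that
    # a higher score always means "query this sample first"
    sorted_probs = np.sort(probs, axis=1)
    return -(sorted_probs[:, -1] - sorted_probs[:, -2])

def entropy(probs, eps=1e-12):
    # Eq. 3: Shannon entropy of the predicted class distribution
    return -(probs * np.log(probs + eps)).sum(axis=1)

# Rank a small pool of unlabeled samples by informativeness
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25],
                  [0.34, 0.33, 0.33]])
for score in (least_confident, smallest_margin, entropy):
    # print the probability row of the most informative sample per measure
    print(score.__name__, probs[score(probs).argsort()[::-1][0]])
```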
It should be noted that, in binary classification, all the above measures become equivalent. The active learning framework, within the scope of this paper, is summarized in Figure 2. ## 3.1.3 Challenges Bias Induction: Since the active learning framework relies on the model's uncertainty to choose samples, the framework may never query samples that the model is confident on. The active learning classifier can become *confidently wrong* on certain samples, leading to an accumulation of bias. Effective Batch Selection: In a real-world setting, it is not feasible to obtain annotations one by one, and queries need to be done in batches. The most straightforward approach would be to choose the N most informative samples (Citovsky et al., 2021). The limitations of this approach can be easily seen. Particularly when N is large, it can amplify the bias discussed earlier. ## 3.2 Clustering-Based Framework To address the challenges outlined earlier, we approach the problem with a clustering-based framework for active learning under pool-based uncertainty sampling settings. Within this framework, the first step is to obtain vector representations of the unlabeled data. This can be done using SentenceBERT (Reimers and Gurevych, 2019) or the more traditional Doc2Vec (Le and Mikolov, 2014). The next step is to cluster the data. This can be done using any clustering algorithm such as KMeans. Then, informative samples are chosen from each cluster and are added to the training data. The classifier is retrained after each round and the process is repeated until the annotation budget runs out. Figure 3 summarizes this framework. ## 3.3 D-CALM Within the clustering-based framework of active learning, we propose a novel algorithm, **D-CALM**, that dynamically adjusts clusters in the data based on the estimated classifier error rate.

Algorithm 1 D-CALM: Dynamic Clustering-based Active Learning for Mitigating Bias
D, T ← dev data, test data
U, L ← unlabeled data, labeled data
G ← bootstrapped classifier
B ← labeling budget
N ← annotation batch size
m ← initial number of clusters
Cluster U into {C1, C2, ..., Cm}
Partition D into {C′1, C′2, ..., C′m}
while B ≥ 0 do
  for i = 0, 1, ..., m do
    Estimate accuracy Ai in C′i
  end for
  for i = 0, 1, ..., m do
    Allocate li = N · (1 − Ai) / Σj (1 − Aj)
    Cluster Ci into {Ci1, Ci2, ..., Cili}
    for j = 0, 1, ..., li do
      x∗ij ← most informative sample in Cij
      y∗ij ← query true label for x∗ij
      Add (x∗ij, y∗ij) to L
    end for
  end for
  G ← retrain on L
  B = B − N
end while
Evaluate G on T

In our proposed algorithm, the cluster C′i is used to dynamically partition Ci. Our algorithm first observes how the classifier behaves in C′i. For cluster Ci, it allocates samples proportional to the error rate in C′i. Then the cluster Ci is split into subclusters according to the number of samples allocated to Ci. The most informative sample from each subcluster is then added to the training data. The subclusters are dynamically updated in each iteration to account for the classifier's new state (see the code sketch below). This prevents the classifier from repeatedly sampling from any particular region. It is worth noting that *error-rate* can be substituted with different metrics to account for specific needs. For example, in scenarios where it is more important to reduce the false negative rate than the false positive rate, the error rate can be substituted with the false negative rate. In this paper, we focus on the general case of error-rate. ## 4 Experiment Setup In this section, we outline our experimental setup.
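As a reference point for the setup that follows, one annotation round of the D-CALM loop in Algorithm 1 can be sketched as below. This is an illustrative rendering rather than the authors' released implementation: `most_informative` and `query_label` are hypothetical helpers standing in for an uncertainty scorer (Section 3.1.2) and a human annotator, and the fixed top-level cluster assignments are assumed to be computed once before the loop.

```python
import numpy as np
from sklearn.cluster import KMeans

def d_calm_round(clf, X_pool, pool_assign, X_dev, y_dev, dev_assign,
                 batch_size, most_informative, query_label):
    """One annotation round of D-CALM (sketch of Algorithm 1).

    X_pool, X_dev, y_dev are NumPy arrays; pool_assign / dev_assign hold the
    fixed top-level cluster id (0..m-1) of each pool / dev example, computed
    once before the active learning loop starts.
    """
    m = int(pool_assign.max()) + 1
    # 1) Estimate the classifier's error rate in each cluster's dev partition
    errors = []
    for i in range(m):
        idx = np.where(dev_assign == i)[0]
        acc = (clf.predict(X_dev[idx]) == y_dev[idx]).mean() if len(idx) else 1.0
        errors.append(1.0 - acc)
    total = sum(errors) or 1.0

    chosen_idx, chosen_labels = [], []
    for i in range(m):
        # 2) Allocate annotations to cluster i in proportion to its error rate
        l_i = max(1, round(batch_size * errors[i] / total))
        members = np.where(pool_assign == i)[0]
        if len(members) < l_i:
            continue
        # 3) Dynamically re-split cluster i into l_i subclusters for this round
        sub = KMeans(n_clusters=l_i, n_init=10).fit_predict(X_pool[members])
        for j in range(l_i):
            candidates = members[sub == j]
            best = most_informative(clf, X_pool[candidates])  # local index
            chosen_idx.append(candidates[best])
            chosen_labels.append(query_label(candidates[best]))
    # Pool indices and their new labels, to be added to L before retraining clf
    return chosen_idx, chosen_labels
```

Retraining the classifier on the enlarged labeled set and decrementing the budget, as in the while-loop of Algorithm 1, completes one round.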
## 4.1 Active Learning Approaches For all the following approaches, total number of samples range from 100-300, initial allocation for bootstrapping is set to 50, and annotation batch size is 50. Similar to (Ein-Dor et al., 2020), the classifiers are retrained in each round. Random: The allocated number of samples are picked randomly from the unlabeled pool. TopN: The classifier is bootstrapped with 50 samples. In each iteration N most informative samples are labeled and added to training data until labeling budget runs out. TopN is a widely used baseline (Yuan et al., 2020; Ash et al., 2020). Cluster-TopN: The classifier is bootstrapped in the same way. The unlabeled pool is first clustered into 10 clusters and in each iteration, N/10 most informative samples are chosen from each cluster. Cluster-TopN combines TopN and stratified sampling (Qian and Zhou, 2010). We choose Cluster-TopN as a baseline due to its similarity with multiple existing methods (Xu et al., 2003; Zhdanov, 2019). D-CALM: The classifier is bootstrapped in a similar fashion. While **D-CALM** is not sensitive to the initial number of clusters because of its dynamic splitting into subclusters, we set initial number of clusters to 10 to be consistent with Cluster-TopN. ## 4.2 Models Transformers We fine-tune the widely-used bertbased-cased (Devlin et al., 2019). We observed that the models stabilize on the dev data when finetuned for 5 epochs with learning rate of 8e-5 and batch size of 16. The same setting is used across all experiments. Support Vector Machine (SVM) We choose SVM as our alternate model as it is completely different from transformers and because SVMs are still in use for practical purposes due to speed and lightweight properties (Hassan et al., 2021, 2022). We use Tf-IDF weighted character [2-5] grams to train SVMs with default scikit-learn settings2. ## 4.3 Datasets We evaluate our proposed algorithm on eight diverse datasets, among which two are binary classification datasets and the rest are multiclass. BOOK32 (Iwana et al., 2016) contains 207K book titles categorized into 32 classes such as *Biographies & Memoirs*. We take a subset that contains 20K random samples from 10 most frequent classes for runtime efficiency. Random sampling ensures the subset respects original distribution. CONAN (Fanton et al., 2021) contains 5K instances annotated for hatespeech targets: *Disabled,* Jews, LGBT+, Migrants, Muslims, Person of Color (POC), Women, and Other. CARER (Saravia et al., 2018) is an emotion detection dataset that contains six basic emotions in the released version: Anger, Fear, Joy, Love, *Sadness*, and *Surprise*. 3. The released version consists of 16K training, 2K dev and 2K test instances. CoLA (Saravia et al., 2018) contains 9.5K sentences expertly annotated for acceptability (grammaticality) in the public version. We use the indomain set as dev and out-of-domain as test set. HATE (Davidson et al., 2017) contains a total of 24.7K tweets that are annotated as: *Offensive*, Hatespeech, and *Neither*. MRDA (Shriberg et al., 2004) contains 117K instances annotated for dialog acts. We consider the five basic labels: Statement, BackChannel, Disruption, *FloorGrabber*, and *Question*. We limit the data to 20K randomly chosen samples for runtime efficiency. Q-Type (Li and Roth, 2002) contains 5.5K train and 0.5K test instances annotated for question types. We take the first level of annotation, containing six classes: *Entity, Description, Abbrebivation,* Number, Human, and textitLocation. 
Subjectivity (Pang and Lee, 2004) contains 10K snippets from Rotten Tomatoes/IMDB reviews automatically tagged as Subjective or *Objective*. ## 4.4 Data Preparation Splits We use default train-dev-test splits if they are provided. If they are not provided, we split the data into 70-10-20 splits. The train data is treated as unlabeled pool of data, dev data is used for tuning purposes and test data is used to report results. Table 1 shows summary of data used. | Dataset | classes | Pool | Dev | Test | |--------------|-----------|--------|-------|--------| | BOOK32 | 32 | 14K | 2K | 4K | | CONAN | 8 | 3.5K | 0.5K | 1K | | CARER | 6 | 16K | 2K | 4K | | CoLA | 2 | 8.5K | 0.5K | 0.5K | | Hatespeech | 3 | 17.2K | 2.4K | 4.9K | | MRDA | 5 | 14K | 2K | 4K | | Q-Type | 6 | 4.9K | 0.5K | 0.5K | | Subjectivity | 2 | 7K | 1K | 2K | Table 1: Statistics of used datasets Vector Representation We use MiniLM (Wang et al., 2020) sentence-transformer to transform text instances into 384 dimensional vectors. These vectors are then used to cluster the unlabeled data. Clustering We use KMeans to cluster the unlabeled pool of data. We use scikit-learn4implementation of KMeans with default parameters. ## 5 Results And Case Study We first discuss findings of our experiments, followed by a case study of fine-grained hatespeech detection. Figures 4,5 and 6 summarize the results across the eight datasets, different measures of information gain, and different models. Table 2 summarizes relative performance across all experiments. For each experiment, we report Macro-F1 4https://scikit-learn.org/stable/modules/ generated/sklearn.cluster.KMeans.html ![5_image_0.png](5_image_0.png) ![5_image_1.png](5_image_1.png) ![6_image_0.png](6_image_0.png) score averaged across 3 runs. We choose MacroF1 as our metric since it provides a more holistic measure of a classifier's performance across classes. Thus, reduction of bias is more likely to be reflected in metrics such as F1 compared to other metrics such as accuracy. ## 5.1 Experiment Results D-Calm Consistently Outperforms Baselines: From Figures 4, 5, 6 we can observe that **DCALM** consistently outperforms TopN, random and cluster-TopN across all datasets. From Table 2, we observe that **D-CALM** beats TopN in 32 out of 40 data points for BERT, among which, the difference in F1 score is greater than 5 in 15 cases. DCALM beats the nearest algorithm, Cluster-TopN in 28/40 (p value 0.003) for BERT and Random Sampling in 26/40 cases (p value 0.0073) for SVMs (Table 2). Both of these are statistically significant according to 2 population proportion test at significance level of 0.01. | Diff. | Count for BERT (IG=Entropy) | | | |---------|-------------------------------|---------|----------| | (F1) | DL > RND | DL > TN | DL > CTN | | > 0 | 33/40 | 32/40 | 28/40 | | > 1 | 30/40 | 27/40 | 23/40 | | > 3 | 24/40 | 18/40 | 16/40 | | > 5 | 15/40 | 12/40 | 11/40 | | > 10 | 4/40 | 4/40 | 2/40 | | (F1) | DL > RND | DL > TN | DL > CTN | | > 0 | 26/40 | 30/40 | 34/40 | | > 1 | 21/40 | 23/40 | 30/40 | | > 3 | 16/40 | 18/40 | 23/40 | | > 5 | 10/40 | 11/40 | 16/40 | | > 10 | 0/40 | 2/40 | 6/40 | ![7_image_0.png](7_image_0.png) D-CALM is more robust against critical failures: We observe from Figure 4 that on occasions such as in the case of Subjectivity and MRDA, TopN can have critical failures where the model ends up with an extremely low F1 score. Although on a few occasions, we witness dips in the curves of D-CALM, in general, the curves are much more stable, indicating its robustness. 
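For concreteness, the snippet below sketches the two featurization choices described in Sections 4.2 and 4.4 above: the lightweight SVM learner over TF-IDF weighted character 2-5 grams, and the MiniLM sentence embeddings used to cluster the unlabeled pool with KMeans. The exact sentence-transformer checkpoint, the `char_wb` analyzer, and the use of `LinearSVC` are our assumptions; the paper only specifies a 384-dimensional MiniLM encoder and default scikit-learn settings.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sentence_transformers import SentenceTransformer

# Embed and cluster the unlabeled pool (Section 4.4); checkpoint name assumed
pool = [f"unlabeled example {i}" for i in range(200)]
encoder = SentenceTransformer("all-MiniLM-L6-v2")      # 384-dimensional vectors
vectors = encoder.encode(pool)
cluster_ids = KMeans(n_clusters=10, n_init=10).fit_predict(vectors)

# Lightweight SVM learner on TF-IDF weighted character 2-5 grams (Section 4.2)
svm = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LinearSVC(),
)
svm.fit(["an angry message", "a joyful message"], ["anger", "joy"])
print(svm.predict(["another joyful message"]))
```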
D-CALM is robust across different measures of information gain: From Figure 5, we see that **D-CALM** outperforms random, TopN, and cluster-TopN for different measure of information gain. Figure 5 does not contain the Subjectivity and CoLA because these are binary datasets and Entropy (reported in Figure 4), Least Confident and Smallest Margin become equivalent in the case of binary classification (Section 3). D-CALM is model-agnostic: From Figure 6, we observe similar patterns in improvement when the learner model is SVM instead of BERT. Although the degree of improvement is smaller for SVMs compared to BERT, it is a limitation of active learning rather than **D-CALM's**, as we can see others showing smaller improvement as well. Improvement over the baselines for SVMs in addition to BERT suggests **D-CALM** is model-independent. D-CALM is more robust against bias: Since D-CALM's focus on enforcing diversity in dynamically adjusting clusters separates it from the other methods, we can deduce that it is the bias reduction that is resulting in improved performance metrics. This is further supported by a study of label and error distribution in the following section. ## 5.2 Case Study We choose hatespeech detection as our case study because hatespeech and abusive content detection has been one of the most studied text classification tasks in recent years (Zampieri et al., 2019; Hassan et al., 2020). Due to its importance in many avenues, whether for AI moderation of online content (Ye et al., 2023) or filtering language model data (Abdelali et al., 2021), racial and gender bias in hatespeech and abusive content detection tasks have been a particular concern for the NLP community (Davidson et al., 2019; Ahn and Oh, 2021). ## 5.2.1 Label And Error Distribution To understand the effect of the underlying distribution in data, we observe the label distribution of samples chosen for annotation after one round of active learning. We also observe the distribution when the same number of samples are chosen randomly. From Figure 7 we can observe that random sampling may result in small number of samples chosen for minority classes in the data such as *Persons of Color* and *Disabled*. The distribution of labels obtained by TopN can become particularly skewed. We see that on average, TopN samples 25+ hatespeech targeting *Jews* while mostly ignoring groups such as *Person of Color (POC)* with <5 samples on average. While Cluster-TopN mitigates this problem to an extent, the best results are obtained by **D-CALM**, with samples for POC doubling compared to TopN and samples for *Disabled* doubling compared to random sampling after just one iteration. The error distribution in Figure 7, reflects the effects of this as we see errors for POC Text Annotated Predicted **Error Type** ![8_image_1.png](8_image_1.png) Jews are everywhere, but they are too fundamentalist. Jews *Muslims* Related Class Asians just want to talk with other asians: they are the racists... Other POC Dataset Limitation Migrants are just a bunch of animals, we can't accept them... Other *MIGRANT* Annotation Error Table 3: Examples of errors made on the CONAN dataset by our proposed approach. While our approach improves performance and mitigates bias as seen before, a better task design and annotation quality control are required to address these types of errors. *Disclaimer: these examples do not reflect authors' views in any way.* and *Disabled* are greatly reduced by **D-CALM**. 
It's important to note, if we had access to a large pool of labeled data, we could obtain a more balanced dataset for training. However, in a real-world scenario, before the annotation process, we only have access to unlabeled pool of data. As such, we cannot identify low-frequency classes and balance the training set. **D-CALM**, however, can obtain more samples from the underrepresented classes without knowing their true labels beforehand. ## 5.2.2 Error Analysis To understand the limitations of **D-CALM**, we manually annotated 100 errors made by the best run with BERT on the CONAN dataset after one iteration of active learning. Our key observations are listed below: - The model can be confused on closely related classes such as *Jews* and *Muslims* as the hatespeech in both cases target religions. - Some errors can be attributed to the limitation of annotation design. For example, CONAN contains the class Persons of Color (POC), but does not contain a separate class for racism against Asians. These instances are labeled as *Other* in the data but are predicted as POC by the model. - In some cases, the error is in the original annotation, rather than the model's prediction. Examples of these errors are listed in Table 3. While the first type of error can possibly be reduced with the addition of more data close to boundary regions between closely related classes, the last two types of errors need to be addressed during the design and annotation phase of the task. ## 6 Conclusion And Future Work In this paper, we presented a novel dynamic clustering-based active learning algorithm, **DCALM**, that can be easily adopted by the NLP community for training models with a small set of annotated data. We have shown that by focusing annotation efforts in adaptive clusters where the learner model has higher error rates, the performance can be improved substantially while reducing bias against underrepresented groups in unlabeled data. Our experiments also show that **DCALM** is robust across different datasets, different ![8_image_0.png](8_image_0.png) measures of information gain, and completely different model types. In the future, our approach can be adapted for creating less biased test sets for evaluating classifiers. An exciting future direction for our approach is to adapt it for natural language generation tasks such as style-transfer (Atwell et al., 2022) or counterspeech generation (Ashida and Komachi, 2022). ## Limitations It's important to note that, in this paper, we focus on bias resulting from underlying distribution of training data. Bias that may result from pretraining of transformer models (Li et al., 2021) is not within the scope of this paper. Although we conduct a case study of finegrained hatespeech detection task, a collective effort from the research community is required to better quantify bias mitigation of our approach across multiple tasks and different types of bias. Another limitation of our work is that our proposed algorithm requires dynamic adjustment of clusters. For very large datasets, this may be computationally expensive. ## Ethics Statement Although our proposed algorithm shows more stability and reduced bias compared to existing approaches and random sampling, it's important to observe the behavior of active learner as the algorithm may not completely eliminate bias, specifically when the annotation budget is small. This can be achieved by observing label and error variance on the evaluation data. 
It is also important to take into consideration the necessities of practical scenarios. In scenarios where certain type of bias is desired (e.g., higher precision), the algorithm needs to be adapted as outlined in Section 3.3 ## References Ahmed Abdelali, Sabit Hassan, Hamdy Mubarak, Kareem Darwish, and Younes Samih. 2021. Pre-training BERT on arabic tweets: Practical considerations. CoRR, abs/2102.10684. Jaimeen Ahn and Alice Oh. 2021. Mitigating languagedependent ethnic bias in BERT. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 533–549, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. 2020. Deep batch active learning by diverse, uncertain gradient lower bounds. *ArXiv*, abs/1906.03671. Mana Ashida and Mamoru Komachi. 2022. Towards automatic generation of messages countering online hate speech and microaggressions. In *Proceedings* of the Sixth Workshop on Online Abuse and Harms (WOAH), pages 11–23, Seattle, Washington (Hybrid). Association for Computational Linguistics. Katherine Atwell, Sabit Hassan, and Malihe Alikhani. 2022. APPDIA: A discourse-aware transformerbased style transfer model for offensive social media conversations. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 6063–6074, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Guirong Bai, Shizhu He, Kang Liu, Jun Zhao, and Zaiqing Nie. 2020. Pre-trained language model based active learning for sentence matching. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 1495–1504, Barcelona, Spain (Online). International Committee on Computational Linguistics. Saul Berardo, Eloi L. Favero, and Nelson Cruz Sampaio Neto. 2015. Active learning with clustering and unsupervised feature learning. In *Canadian Conference* on AI. Zalán Bodó, Zsolt Minier, and L. Csató. 2011. Active learning with clustering. In *Active Learning and* Experimental Design @ AISTATS. Aditi Chaudhary, Antonios Anastasopoulos, Zaid Sheikh, and Graham Neubig. 2021. Reducing confusion in active learning for part-of-speech tagging. Transactions of the Association for Computational Linguistics, 9:1–16. Gui Citovsky, Giulia DeSalvo, Claudio Gentile, Lazaros Karydas, Anand Rajagopalan, Afshin Rostamizadeh, and Sanjiv Kumar. 2021. Batch active learning at scale. In *NeurIPS*. Thomas Davidson, Debasmita Bhattacharya, and Ingmar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In Proceedings of the Third Workshop on Abusive Language Online, pages 25–35, Florence, Italy. Association for Computational Linguistics. Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the 11th International AAAI Conference on Web and Social Media, ICWSM '17, pages 512–515. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. *ArXiv*, abs/1810.04805. Liat Ein-Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Marina Danilevsky, Ranit Aharonov, Yoav Katz, and Noam Slonim. 2020. Active Learning for BERT: An Empirical Study. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7949–7962, Online. 
Association for Computational Linguistics. Margherita Fanton, Helena Bonaldi, Serra Sinem Tekiroglu, and Marco Guerini. 2021. ˘ Human-in-theloop for data collection: a multi-target counter narrative dataset to fight online hate speech. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3226–3240, Online. Association for Computational Linguistics. Sebastian Farquhar, Yarin Gal, and Tom Rainforth. 2021. On statistical bias in active learning: How and when to fix it. *ArXiv*, abs/2101.11665. Denis A. Gudovskiy, Alec Hodgkinson, Takuya Yamaguchi, and Sotaro Tsukizawa. 2020. Deep active learning for biased datasets via fisher kernel selfsupervision. *2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pages 9038–9046. Sabit Hassan, Hamdy Mubarak, Ahmed Abdelali, and Kareem Darwish. 2021. ASAD: Arabic social media analytics and unDerstanding. In *Proceedings of* the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 113–118, Online. Association for Computational Linguistics. Sabit Hassan, Younes Samih, Hamdy Mubarak, and Ahmed Abdelali. 2020. ALT at SemEval-2020 task 12: Arabic and English offensive language identification in social media. In *Proceedings of the* Fourteenth Workshop on Semantic Evaluation, pages 1891–1897, Barcelona (online). International Committee for Computational Linguistics. Sabit Hassan, Shaden Shaar, and Kareem Darwish. 2022. Cross-lingual emotion detection. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 6948–6958, Marseille, France. European Language Resources Association. Sabit Hassan, Shaden Shaar, Bhiksha Raj, and Saquib Razak. 2018. Interactive evaluation of classifiers under limited resources. In *2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA)*, pages 173–180. Steven C. H. Hoi, Rong Jin, and Michael R. Lyu. 2006. Large-scale text categorization by batch mode active learning. In *WWW '06*. Brian Kenji Iwana, Syed Tahseen Raza Rizvi, Sheraz Ahmed, Andreas Dengel, and Seiichi Uchida. 2016. Judging a book by its cover. arXiv preprint arXiv:1610.09204. Svetlana Kiritchenko and Saif Mohammad. 2018. Examining gender and race bias in two hundred sentiment analysis systems. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 43–53, New Orleans, Louisiana. Association for Computational Linguistics. Ranganath Krishnan, Alok Sinha, Nilesh A. Ahuja, Mahesh Subedar, Omesh Tickoo, and Ravi R. Iyer. 2021. Mitigating sampling bias and improving robustness in active learning. *ArXiv*, abs/2109.06321. Anurag Kumar and Bhiksha Raj. 2018. Classifier risk estimation under limited labeling resources. *ArXiv*, abs/1607.02665. Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In *ICML*. David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In *SIGIR* '94. Luoqiu Li, Xiang Chen, Hongbin Ye, Zhen Bi, Shumin Deng, Ningyu Zhang, and Huajun Chen. 2021. On robustness and bias analysis of bert-based relation extraction. In *CCKS*. Xin Li and Dan Roth. 2002. Learning question classifiers. In *COLING*. Mingyi Liu, Zhiying Tu, Tong Zhang, Tonghua Su, Xiaofei Xu, and Zhongjie Wang. 2022. Ltp: A new active learning strategy for crf-based named entity recognition. 
*Neural Processing Letters*, 54:2433– 2454. Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2020. Gender bias in neural natural language processing. In *Logic, Language, and Security*. Katerina Margatina, Loic Barrault, and Nikolaos Aletras. 2022. On the importance of effectively adapting pretrained language models for active learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 825–836, Dublin, Ireland. Association for Computational Linguistics. Katerina Margatina, Giorgos Vernikos, Loïc Barrault, and Nikolaos Aletras. 2021. Active learning by acquiring contrastive examples. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 650–663, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In *Proceedings of the ACL*. Longhua Qian and Guodong Zhou. 2010. Clusteringbased stratified seed sampling for semi-supervised relation classification. In *Proceedings of the 2010* Conference on Empirical Methods in Natural Language Processing, pages 346–355, Cambridge, MA. Association for Computational Linguistics. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Guy Rotman and Roi Reichart. 2022. Multi-task active learning for pre-trained transformer-based models. Transactions of the Association for Computational Linguistics, 10:1209–1228. Nicholas Roy and Andrew McCallum. 2001. Toward optimal active learning through sampling estimation of error reduction. In *ICML*. Elvis Saravia, Hsien-Chi Toby Liu, Yen-Hao Huang, Junlin Wu, and Yi-Shin Chen. 2018. CARER: Contextualized affect representations for emotion recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3687–3697, Brussels, Belgium. Association for Computational Linguistics. Burr Settles. 2009. Active learning literature survey. Burr Settles, Mark W. Craven, and Soumya Ray. 2007. Multiple-instance active learning. In *NIPS*. Elizabeth Shriberg, Raj Dhillon, Sonali Bhagat, Jeremy Ang, and Hannah Carvey. 2004. The ICSI meeting recorder dialog act (MRDA) corpus. In Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue at HLT-NAACL 2004, pages 97–100, Cambridge, Massachusetts, USA. Association for Computational Linguistics. Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. Minilm: Deep selfattention distillation for task-agnostic compression of pre-trained transformers. Zhao Xu, Kai Yu, Volker Tresp, Xiaowei Xu, and Jizhi Wang. 2003. Representative sampling for text classification using support vector machines. In European Conference on Information Retrieval. Meng Ye, Karan Sikka, Katherine Atwell, Sabit Hassan, Ajay Divakaran, and Malihe Alikhani. 2023. Multilingual content moderation: A case study on Reddit. In *Proceedings of the 17th Conference of the European Chapter of the Association for Computational* Linguistics, pages 3828–3844, Dubrovnik, Croatia. Association for Computational Linguistics. 
Michelle Yuan, Hsuan-Tien Lin, and Jordan BoydGraber. 2020. Cold-start active learning through selfsupervised language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7935–7948, Online. Association for Computational Linguistics. Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. Predicting the type and target of offensive posts in social media. In *NAACL*. Leihan Zhang and Le Zhang. 2019. An ensemble deep active learning method for intent classification. In Proceedings of the 2019 3rd International Conference on Computer Science and Artificial Intelligence, CSAI2019, page 107–111, New York, NY, USA. Association for Computing Machinery. Zhisong Zhang, Emma Strubell, and Eduard Hovy. 2022. A survey of active learning for natural language processing. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6166–6190, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Fedor Zhdanov. 2019. Diverse mini-batch active learning. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations sections at the end ✓ A2. Did you discuss any potential risks of your work? Ethical considerations sections at the end ✓ A3. Do the abstract and introduction summarize the paper's main claims? Sections 3-5 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4,5 ✓ B1. Did you cite the creators of artifacts you used? Section 4,5 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? The models (e.g. BERT) are free-to-use for researchers. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C ✓ **Did You Run Computational Experiments?** Section 4,5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4.2, 4.3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. 
Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5.1 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
xu-etal-2023-language-anisotropic
Language Anisotropic Cross-Lingual Model Editing
https://aclanthology.org/2023.findings-acl.343
Multilingual pre-trained language models can learn task-specific abilities or memorize facts across multiple languages but inevitably make undesired predictions with specific inputs. Under similar observation, model editing aims to post-hoc calibrate a model targeted to specific inputs with keeping the model's raw behavior. However, existing work only studies the monolingual scenario, which lacks the cross-lingual transferability to perform editing simultaneously across languages. In this work, we focus on cross-lingual model editing. Firstly, we define the cross-lingual model editing task and corresponding metrics, where an edit in one language propagates to the others. Next, we propose a framework to naturally adapt monolingual model editing approaches to the cross-lingual scenario using parallel corpus. Further, we propose language anisotropic editing to improve cross-lingual editing by amplifying different subsets of parameters for each language. On the newly defined cross-lingual model editing task, we empirically demonstrate the failure of monolingual baselines in propagating the edit to multiple languages and the effectiveness of the proposed language anisotropic model editing. Our code is publicly available at https://github.com/franklear/LiME.
# Language Anisotropic Cross-Lingual Model Editing Yang Xu Yutai Hou Wanxiang Che Min Zhang Harbin Institute of Technology {yxu, ythou, car}@ir.hit.edu.cn [email protected] ## Abstract Multilingual pre-trained language models can learn task-specific abilities or memorize facts across multiple languages but inevitably make undesired predictions with specific inputs. Under similar observation, model editing aims to post-hoc calibrate a model targeted to specific inputs with keeping the model's raw behavior. However, existing work only studies the monolingual scenario, which lacks the *cross-lingual* transferability to perform editing simultaneously across languages. In this work, we focus on cross-lingual model editing. Firstly, we define the cross-lingual model editing task and corresponding metrics, where an edit in one language propagates to the others. Next, we propose a framework to naturally adapt monolingual model editing approaches to the crosslingual scenario using parallel corpus. Further, we propose *language anisotropic* editing to improve cross-lingual editing by amplifying different subsets of parameters for each language. On the newly defined cross-lingual model editing task, we empirically demonstrate the failure of monolingual baselines in propagating the edit to multiple languages and the effectiveness of the proposed *language anisotropic* model editing. Our code is publicly available at https://github.com/franklear/LiME. ## 1 Introduction Pre-trained language model based approaches have become the best practice in many fields, including multilingual NLP (Che et al., 2021; Tunstall et al., 2022). During training, Transformerbased (Vaswani et al., 2017) models can embed language abilities (Geva et al., 2021) and memorize facts (Dai et al., 2022) in the parameters. Though, models inevitably make undesired predictions with specific inputs, such as mistake labels or outdated facts. Moreover, the performance of multilingual models is unbalanced across languages, leading to inconsistency predictions over the same input in different languages. However, the high cost of ![0_image_0.png](0_image_0.png) training and data collecting makes it unrealistic to re-train the models using calibrated data in all languages. Therefore, there is a pressing need for an approach to calibrate multilingual pre-trained models across all languages of interest simultaneously. As an emerging research area, model editing allows us to calibrate the behavior of pre-trained language models targeted to specific inputs (Sinitsin et al., 2020; Cao et al., 2021; Mitchell et al., 2022a,b; Meng et al., 2022a,b; Hase et al., 2021). However, challenges emerge when applying model editing to the cross-lingual scenario, due to the two features of multilingual pre-trained models: The first is *cross-lingual transferability*. Based on prior research conducted on pre-trained multilingual models like XLM (Conneau and Lample, 2019) and InfoXLM (Chi et al., 2021), it is wellestablished that incorporating diverse language data during training leads to advantageous crosslingual transfer effects. Thus, input with the same meaning can be expressed in multiple languages as completely different sentences. The editor has 5554 to be aware of this feature in case it suffers from editing failure in unseen languages. The second is *language anisotropy*. Recent work reveals that language-specific and languageuniversal parameters exist in the multilingual pretrained model (Wang et al., 2020). 
This finding means the model tends to mainly activate a subset of its parameters depending on the language to be processed, which we call *language anisotropy*. An editor which treats all parameters identically for all languages is not *language anisotropic*, potentially harming other languages when editing. In this work, we propose for the first time crosslingual model editing on multilingual pre-trained language models. Different from existing model editing, an edit in a single language propagates to the others in cross-lingual model editing. As is shown in Figure 1, with cross-lingual model editing, editing a fact in English also affects the Chinese version, while retaining unrelated facts. We propose a simple yet effective framework to adapt existing monolingual model editing approaches to the cross-lingual scenario using the parallel corpus. Specifically, we replace the inputs for editor training with their parallel expressions in random languages. For example, the editor can be asked to edit model predictions on English input. The edited model is then supervised to enforce that the predictions are updated on parallel Chinese input and retained on unrelated French inputs. The next time, the above languages randomly change. To this end, the cross-lingual training formula helps the editor gain *cross-lingual transferability*. Besides, we leverage the *language anisotropy* nature of the multilingual models to further improve cross-lingual model editing. Specifically, we propose to add a group of L0 constrained languagespecific masks as the editor's parameters. During editing, the masks are used to instruct the editor to focus on different parts of the raw model's parameters according to the inputs' language. Training along with the masks, the editor gains the skill of making *language anisotropic* edits. Our primary contributions are as follows: - We define the cross-lingual model editing task and corresponding evaluation metrics. - We propose a simple yet effective framework to adapt the monolingual editing approaches to the cross-lingual scenario. - We propose *language anisotropic* model editing to improve cross-lingual model editing ## 2 Background: Model Editing Sinitsin et al. (2020) propose Editable Training (model editing) as an efficient approach to modify the behavior of a trained model on specific inputs, where three core requirements are highlighted: - *Reliability*: the edited model acquires the desired behavior on specific inputs. - *Locality*: the edit influences the model on other inputs of interests as little as possible. - *Efficiency*: the editing approach should be computationally efficient. Reliability and *locality* are essential attributes of the model editing task, while *efficiency* is required to make the editor usable. Recent work explores several ways to solve the model editing problem (Sinitsin et al., 2020; Mitchell et al., 2022a; Meng et al., 2022a,b). Despite the variety of algorithms, their training formulas are similar, i.e., training the editor end-to-end on editing data under the condition of *reliability* and *locality*. Specifically, a training step of the editor contains two stages. 1) Editing stage: the editor is used to edit desired predictions into the raw model f(·; θ), producing the edited model f(·; θu). 2) Editor training stage: the edited model is then constrained under the requirements of *reliability* and *locality*, corresponding to two core objectives respectively. 
For *reliability*, the edited model need to make the desired prediction ye in response to the input xe. This requirement refers to the task loss Ltask, e.g., cross-entropy or L2. So we have $$L_{\mathrm{rel}}=\lambda_{\mathrm{rel}}L_{\mathrm{task}}\left(f(x_{e};\theta_{u}),y_{e}\right).\qquad(L_{\mathrm{rel}})$$ For *locality*, the edited model needs to retain predictions of unrelated inputs, which means that for an unrelated input xr, the output f(xr; θ) should be kept. Though a similar loss like Lrel can work in theory, the stronger KL divergence loss is used to minimize the side effect on unrelated labels $$L_{\mathrm{loc}}=\lambda_{\mathrm{loc}}\,\mathrm{KL}\left(f(x_{r};\theta_{u})\parallel f(x_{r};\theta)\right).\quad(L_{\mathrm{loc}})$$ In addition, other auxiliary objectives can be utilized which do not affect the training formula. Note that the goal is to train the editor instead of the raw model. During training, the gradients propagate through the edited model to the editor. At test time, only the editing stage is needed. Overall, the training of the editor is a meta version of the model training because the "data" that the editor processes is the model (plus the input-prediction pair to be edited). ## 3 Cross-Lingual Model Editing 3.1 Task Definition Following the work on monolingual model editing, we continue taking the idea of making an edit with reliability and *locality* (Sinitsin et al., 2020), while introducing *cross-lingual transferability*. Assuming we have a model f parameterized by θ that maps the input x to the prediction p = f(x; θ). An update is needed when we want the model change its prediction from p to y. Here the requirement of *cross-lingual transferability* brings the key difference. The same input can be represented in multiple languages, producing parallel sentences. Therefore, the edit with *reliability* for x should affect the parallel inputs, denoted as I(x). As the example in Figure 1, "Messi plays for Paris SG." in English is parallel to its Chinese translation. For *locality*, the side effect should be as low as possible, which means the prediction of input x′ ∈/ I(x) is retained. Note that under this setting, one edit is always independent of another. The editor revisits the θ for every edit, then produces the corresponding θu. Formally, the goal of the editor is to _find_$\theta_{u}$, $$\begin{array}{ll}s.t.&\left\{\begin{array}{ll}f(x_{p};\theta_{u})=y&\forall x_{p}\in I(x)\\ f(x_{n};\theta_{u})=f(x_{n};\theta)&\forall x_{n}\notin I(x)\end{array}\right.,\\ \mbox{given}&x,I(x)=\{x^{\prime}|x^{\prime}\mbox{is parallel to}x\},y,f,\theta.\end{array}$$ ## 3.2 Cross-Lingual Editing Based On Monolingual Approaches Dispite the *cross-lingual transferability*, the requirements of *reliability* and *locality* stay the same with monolingual model editing, which are defined by the training data. To fully leverage the monolingual editing approaches and build reasonable baselines, we propose a framework to adapt them to the cross-lingual scenario using the parallel corpus as illustrated in Figure 2. What we need is a slight change in the training formula of the monolingual editing approaches, namely aligning inputs in different languages. Given xe in the editing language le as the input to be edited and the corresponding desired prediction ye, the inputs used in the training objectives are sampled over the parallel inputs set I(xe). 
For *reliability*, the edited model is asked to update the prediction to ye on the sampled input xu ∈ I(xe) in the updating language le. Thus the reliability loss (Lrel) is modified by replacing xe with xu. For *locality*, the sampled input xr ∈/ I(xe) in the retaining language lr is used as input, and the *locality* loss (Lloc) remains the same. Monolingual editing is a degenerate case where only a single language is considered, i.e., le = lu = lr. When the languages differ, the editor trained under the above sampling strategy acquires *crosslingual transferability*. Intuitively, the editor functions as updating on identical inputs while not affecting unrelated inputs. In the above cross-lingual adaptation, *reliability* loss tells the editor what should be identical, and locality loss tells what should be unrelated. Thus, the two losses illustrate a semantically equivalent range for the editor across multiple languages, deriving the *cross-lingual transferability*. Therefore, the adaptation we make leverages the parallel corpus to inspire the potential of transferability that comes with the model editing task. ## 3.3 Language Anisotropic Editing A multilingual pre-trained model like mBERT (Devlin et al., 2019) can integrate over one hundred languages in a single model. However, a known phenomenon called the curse of multilinguality (Conneau et al., 2020) exposes the trade-off between the number of languages and the model capacity, implying the languages tend to compete for the shared parameters. Further, it is revealed that languagespecific and language-universal parameters exist in the multilingual model, which potentially harm its cross-lingual transferabillity (Wang et al., 2020). All this evidence indicates that the multilingual model is *language anisotropic* in the perspective of the parameters. Therefore, we introduce a priori, i.e., the update should focus more on a certain subset of parameters according to the language of the input to edit. Nevertheless, identifying which language prefers which parameters is not so direct. Our idea is to drive the editor to find the important parameters during training. As shown in the top-right part of Figure 2, we realize the idea with a group of learnable languagespecific masks. The model editor produces new ![3_image_0.png](3_image_0.png) parameters to update the raw model, so we mask the input/output of the editor to apply an adaptive weighting. For an update in language l, we mask each parameter (tensor) W to be updated with mlW ∈ [0, 1]dimW through $$\mathrm{mask}(\mathbf{W},\mathbf{m}_{\mathbf{W}}^{l})=\mathbf{W}+\mathbf{m}_{\mathbf{W}}^{l}\odot\mathbf{W},$$ where ⊙ computes the element-wise production. The mask operation bypasses the whole parameter firstly, then increases the weight of the selected part. We also add an auxiliary L0 loss $$L_{\mathrm{mask}}=\lambda_{\mathrm{mask}}\sum_{l,W}\|{\mathbf{m}}_{W}^{l}\|_{0},$$ which is a sparsity penalty to make the mask filter only the important components in a parameter. We follow Louizos et al. (2018) to optimize L0 with their re-parametrization approach. It should be noted that the mask is only aware of and applied to the editing language because we aim to update all the languages simultaneously, making any assumption on the updating or retaining languages meaningless. Unfortunately, the element-wise masks for each language may contain as many parameters as the raw model, causing over-parameterization and a waste of computation. Say h is the hidden size. 
If predicting the O(h 2) updated parameters (or their gradients), the editor's parameters will inflate to unacceptable O(h 4). Inspired by the capacity of the low-rank updating demonstrated in previous model editing work (Cao et al., 2021; Mitchell et al., 2022a), we factorize the full mask matrix into two low-rank matrics, then constructing the updated raw parameters with non-parameterized operations. The proposed *language anisotropic* model editing can work with various model editing approaches, while the implementation is specific to the algorithm details. Taking a parameter matrix W ∈ R n×m in an MLP layer for example. By the chain rule, its gradient on the loss L is $$\nabla_{W}L=x^{\top}\delta,$$ where x ∈ R nis the layer's input, and δ ∈ R m refers to the gradient of the layer's output (i.e., the "input" in the backward pass). For hyper-network based approaches (Cao et al., 2021; Mitchell et al., 2022a), a network g is built to conduct gradient transform. Hence, we insert the language masks ml· here as $$\hat{\mathbf{x}},\tilde{\mathbf{\delta}}=g\left(\mathrm{mask}(\mathbf{x},\mathbf{m}_{\mathbf{x}}^{l}),\mathrm{mask}(\mathbf{\delta},\mathbf{m}_{\mathbf{\delta}}^{l})\right).$$ For other approaches that do not manipulate gradients (Sinitsin et al., 2020), the g is an identical transformation, and the language masks do not affect the rest part of the editing algorithm. Finally we construct the full sized gradient using the rank-1 predictions ## ∇˜ W L = X˜⊤˜Δ. The extra parameters and computation is in the order of O(h|L|). Since the size of the language set L is likely to be tens while the hidden size h can easily reach the thousand level, the extra timespace cost is tiny compared to the original O(h 2) order. To this end, we obtain an approach to make language anisotropic model editing. ## 4 Experiments 4.1 Evaluation To evaluate cross-lingual model editing approaches, we focus on *cross-lingual transferability*, while continuing to keep our eyes on *reliability* and *locality*. Suppose that the languages we focus on make up L, and the corpus is DL. For l ∈ L, each monolingual subset Dl of the corpus contains a number of tuples (xk, yk), which means we desire the model to predict yk to the input xk. The yk does not need to be different from the raw prediction f(xk; θ). Taking the union of datasets in all the languages, we have the cross-lingual model editing dataset DL = ∪l∈LDl. Inspired by Cao et al. (2021), we propose three cross-lingual model editing metrics. Overall, we distinguish the languages where inputs are to be edited from where predictions are to be updated. Let Dedit be the set of (input, desired prediction) pairs fed to edit the model, which cause model predictions to inputs in Dupdate updated. In addition, I(x) = {x′|x′is parallel to x} refers to parallel inputs of a specific input x across languages of interest. To measure *reliability* under *cross-lingual transferability*, we use editing accuracy. 
We calculate the ratio of predictions that are successfully updated:

$$\mathrm{acc}=\mathbb{E}_{\substack{(x_{e},y_{e})\sim\mathcal{D}_{\mathrm{edit}}\\ x_{u}\sim\mathcal{D}_{\mathrm{update}}\cap I(x_{e})}}\left[\mathbf{1}\left[f(x_{u};\theta_{u}(x_{e},y_{e}))=y_{e}\right]\right].$$

To measure *locality* under *cross-lingual transferability*, we use editing consistency, which reflects the rate at which predictions for unrelated inputs are retained:

$$\mathrm{con}=\mathbb{E}_{\substack{(x_{e},y_{e})\sim\mathcal{D}_{\mathrm{edit}}\\ x_{r}\sim\mathcal{D}_{\mathrm{update}}\setminus I(x_{e})}}\left[\mathbf{1}\left[f(x_{r};\theta_{u}(x_{e},y_{e}))=f(x_{r};\theta)\right]\right].$$

The above two metrics are not necessarily consistent with each other and may even conflict, similar to precision and recall in classification. Thus, we define the editing success rate as their harmonic mean

$$\mathrm{succ}={\frac{2\times\mathrm{acc}\times\mathrm{con}}{\mathrm{acc}+\mathrm{con}}}.$$

Since evaluating over the full set for each edit incurs the huge overhead of enumerating every pair of inputs, we follow existing work on model editing (Cao et al., 2021; Mitchell et al., 2022a,b) and estimate it with a mini-batched expectation. Notably, in this work Dedit and Dupdate are finite datasets. Thus we enumerate each (xe, ye) ∈ Dedit, and uniformly sample a fixed-size subset of testing inputs xu from I(xe), or xr from the complement of I(xe) (for acc and con, respectively), to form the pairs used to calculate the metrics. To obtain an average metric over all the languages, we calculate the macro average over editing languages. Specifically, to avoid enumerating all language pairs, we mix all the languages into Dupdate = DL, use the single-language edit sets from {Dl}l∈L as Dedit one by one, and finally calculate the macro average. The success rate is then calculated from the averaged accuracy and consistency rates.

## 4.2 Baselines

Finetuning As the most common baseline for model editing, we use finetuning (a degenerate editor). Since there is no editor to train, finetuning has no cross-lingual variant and makes no use of the parallel corpus.

Learned Editors Since the proposed approaches are compatible with various learned editors, we use three monolingual editors as the basis: Editable Training (Sinitsin et al., 2020), KnowledgeEditor (Cao et al., 2021), and MEND (Mitchell et al., 2022a). We compare the editing performance of each editor with and without our approaches.

## 4.3 Datasets

Following the widely used setting, we construct synthetic editing datasets from existing data (Sinitsin et al., 2020; Cao et al., 2021; Mitchell et al., 2022a,b). We use the knowledge-intensive task mLAMA (Kassner et al., 2021) for fact editing, a natural choice because its predictions involve specific pieces of knowledge that are prone to change. Nevertheless, a usable dataset with a parallel corpus for another kind of task, such as classification, is lacking due to the difficulty of translating entities. Therefore, to demonstrate the generic, task-agnostic ability of cross-lingual model editing, we also use the semantics-focused dataset XNLI (Conneau et al., 2018) for error correction.

mLAMA is a multilingual knowledge probing dataset based on (masked) language modeling, providing facts expressed as masked sentences in 53 languages. Each fact is a triple ⟨[X], type, [Y]⟩ including two entities, e.g., ⟨Messi, play-for, Paris SG⟩.
To produce the textual 5558 mLAMA XNLI | Approach | Training Languages | acc% | con% | succ% | acc% | con% | succ% | |-------------------|----------------------|--------|--------|---------|--------|--------|---------| | Finetuning | n/a | 21.94 | 55.69 | 31.48 | 47.53 | 98.24 | 64.06 | | Editable Training | en only | 51.13 | 17.33 | 25.88 | 71.02 | 95.24 | 81.36 | | Editable Training | all | 99.78 | 24.45 | 39.27 | 89.45 | 93.04 | 91.21 | | KnowledgeEditor | en only | 37.18 | 50.19 | 42.72 | 69.96 | 96.79 | 81.22 | | KnowledgeEditor | all | 64.69 | 53.00 | 58.26 | 86.20 | 95.08 | 90.42 | | MEND | en only | 24.76 | 61.09 | 35.24 | 84.90 | 94.87 | 89.61 | | MEND | all | 99.58 | 75.76 | 86.05 | 98.16 | 97.75 | 97.95 | ![5_image_0.png](5_image_0.png) expression from triples, mLAMA provides one template for each type ("play-for") of fact like "[X] plays for [Y].". In the original setting of mLAMA, they fill the real [X] and replace [Y] with [MASK] tokens to probe the pre-trained language model. In our model editing setting, to construct the editing input, we also keep the [X] but uniformly sample an entity within the same type as [Y]. To measure the *locality*, we replace the [Y] as [MASK] tokens in a row, where the number of [MASK] tokens is sampled from the length distribution of entity name in the corresponding language. Note that translation of an entity may be invisible for the edited model or even nonexistent. Consequently, editing with entity names, which involves the entity linking problem, can be intractable in pure cross-lingual model editing. Therefore, we always treat the entity in the edit input as the desired prediction. XNLI is a parallel corpus of natural language inference in 15 languages, which can be modeled as a three-way sentence pair classification task, where we ask the model to predict the relation between a premise-hypothesis pair in {entailment, neutral, contradiction}. In the model editing scenario, we treat the premise-hypothesis pair as a whole input sentence to classify. Unfortunately, since the raw model has already been finetuned using the training and dev set, a dedicated training setting for error correction cannot be built. Thus, we train the editor to edit arbitrarily, which implies the error correction ability. During training, we sample edit input over the training set and give a uniformly random label as the desired prediction. To evaluate an editor on *reliability*, we use data in the test set that the raw model gives wrong predictions and use corresponding gold labels as the desired predictions. As for *locality*, we continue to sample inputs to be retained over the whole test set. ## 4.4 Cross-Lingual Model Editing In this part, we demonstrate that the cross-lingual scenario exceeds the capability of the monolingual model editing approach. Specifically, we compare the editing performance of the monolingual approaches and the proposed cross-lingual variants. Recall that we use L to represent the full language set, i.e., the 15 languages for XNLI and the 53 for mLAMA. 
| Approach | acc% (mLAMA) | con% (mLAMA) | succ% (mLAMA) | acc% (XNLI) | con% (XNLI) | succ% (XNLI) |
|----------|--------------|--------------|---------------|-------------|-------------|--------------|
| Finetuning | 10.14 | 48.68 | 16.79 | 56.48 | 98.54 | 71.81 |
| Editable Training | 97.39 | 21.90 | 35.75 | 90.02 | 93.58 | 91.76 |
| w/ *Language Anisotropic* Model Editing | **97.87** | **24.41** | **39.08** | **91.79** | **93.68** | **92.72** |
| KnowledgeEditor | 47.30 | 49.32 | 48.29 | 83.88 | **95.79** | 89.44 |
| w/ *Language Anisotropic* Model Editing | **55.91** | **51.00** | **53.34** | **86.88** | 95.45 | **90.96** |
| MEND | 94.83 | 67.59 | 78.92 | 98.16 | 97.44 | 97.80 |
| w/ *Language Anisotropic* Model Editing | **96.12** | **69.20** | **80.47** | **98.42** | **98.02** | **98.22** |

![6_image_0.png](6_image_0.png)

In the case of XNLI, the data is inherently parallel, while in mLAMA, each language, excluding English, relies on a translated subset of English. Given this scenario, we train the editors using the English subset to ensure uniform exposure to knowledge during training, thereby mitigating potential issues arising from training set disparities. More specifically, we select en as the editing language and expect the approaches to update predictions across all the languages. Hence, we have le = en and lu, lr ∈ L during evaluation. Table 1 shows the averaged results of en → all the languages, while Figure 3 illustrates the distribution of results across languages.

Finetuning suffers from severe cross-lingual underfitting, as shown by its low editing accuracy, causing a low overall success rate. The monolingual editors work much better than finetuning. Although they have never seen other languages, the editors demonstrate partial *cross-lingual transferability*. Moreover, the editors acquire the ability to perform updates with *locality*, coming close to the highest editing consistency on XNLI in almost all cases. However, only editors trained with the proposed cross-lingual editing framework truly generalize the desired prediction to inputs in other languages. On XNLI, editors trained cross-lingually on all languages improve the editing accuracy by a large margin, with much less loss of editing consistency, resulting in a large gain in the editing success rate. On mLAMA, where the model faces a much larger output space, editors trained on all languages maintain high consistency and improve all three metrics significantly. Moreover, Figure 3 shows that the performance gap across languages is smaller under the cross-lingual training framework.

## 4.5 Language Anisotropic Model Editing

After confirming the effectiveness of cross-lingual model editing, we conduct experiments to study how the proposed *language anisotropic* model editing improves performance. Here we always train and evaluate approaches in all languages (le, lu, lr ∈ L). Table 2 shows the averaged all → all results, with the per-language distribution plotted in Figure 4. The editors using parallel training data in Table 1 are the same as the editors without *language anisotropic* model editing in Table 2. The difference is that we no longer limit the editing language, so the editing task becomes harder, making the results in Table 2 lower.

![7_image_0.png](7_image_0.png)

Finetuning still falls into underfitting across languages, performing similarly to the single-editing-language setting. With *language anisotropic* model editing, the performance of the editors reaches a new high on both datasets. Note that on XNLI, the small growth (97.80% → 98.22%) corresponds to a large error reduction (2.20% → 1.78%, a 19% relative reduction). Though trained with parallel data, a performance gap still exists between some languages and the others.
*Language anisotropic* model editing helps the editors close the performance gap and increases the overall editing success rates. To illustrate the function of the language-specific masks, we conduct analyses using one of the final MEND based checkpoints on XNLI. We observe that the parameters of the masks are very close in most dimensions across all languages. However, masks for different languages show preferences in small but different dimension subsets. Therefore, we plot the cosine similarities of learned parameters in the masks as a heatmap in Figure 5, where we limit the size of the preferred subset to 1% of the full hidden size. The heatmap of cosine similarities demonstrates that *language anisotropic* model editing captures the *language anisotropy* feature of the multilingual pre-trained language model. Through adaptively re-weighting gradients of a small subset of parameters for each language, language anisotropic model editing improves the performance of cross-lingual model editing. ## 5 Related Work Model Editing Sinitsin et al. (2020) initially presents the model editing problem and proposes a MAML-like method, called Editable Training. Our cross-lingual model editing problem definition and metrics mostly extend their work. The proposed language anisotropic model editing approach can be applied to Editable Training by using the rank-1 masks to construct a full gradient/parameter mask. A series of work models editing as a learningto-update problem and develops the hyper-network based approaches, such as KnowledgeEditor (Cao et al., 2021), MEND (Mitchell et al., 2022a), and SLAG (Hase et al., 2021). They build the editor to constrain gradients during finetuning. We gain a lot of inspiration from their work when designing our methods. A category of approaches regard the language model as a knowledge base, and utilize a wider range of editing fomulars (Santurkar et al., 2021; Meng et al., 2022a,b; Geva et al., 2021; Dai et al., 2022; Mitchell et al., 2022b). We can obtain the cross-lingual variants using the parallel corpus, while whether the *language anisotropic* model editing works depends on the algorithm details. ## Cross-Lingual Transferability In Recent Work, multilingual pre-trained language models show their *cross-lingual transferability* (Devlin et al., 2019; Conneau et al., 2020; Xue et al., 2021; Chi et al., 2021), where multiple languages included in the training corpus benifit each other. Opposite to the positive cross-lingual transfer, Wang et al. (2020) study the negative interference phenomenon. They show the existence of language-specific parameters, which is also a theoretical basis of our work. Based on this priori, their work and our proposed *language anisotropic* model editing have similar underlying ideas: identifying the languagespecific parameters and using them to improve the cross-lingual transferability. Though, our work differs from theirs in method and task. They leverage language-specific pruning to identify the preferred parameter subset of different languages. Then they propose an iterative second-order meta-optimizing algorithm to improve pre-training. Our approach does not perform prune, where the masks play the role of reweighting coefficients. Our approach also makes no change in the training algorithm, maintaining maximum compatibility with various model editing approaches. ## 6 Conclusion In this work, we define the task and metrics of cross-lingual model editing. 
After summarizing the training formula of various monolingual model editing approaches, we naturally extend the formula to a cross-lingual variant using the parallel corpus. Further, we propose *language anisotropic* model editing to improve cross-lingual model editing. We conduct experiments to verify that the cross-lingual model editing problem is necessary and find that the proposed approaches are effective. ## Limitations Our work depends mainly on parallel data. Although tasks focusing on language abilities can leverage machine translation to obtain parallel data (Hu et al., 2020), it is much harder for tasks about knowledge and facts. Using parallel data to train cross-lingual model editors is like doing full supervision, while we need to leverage weakly labeled data to mitigate data scarcity. On the other hand, whether monolingual or crosslingual, model editing still struggles with the continual learning problem. In the real world, knowledge constantly emerges and fades, disabling the stop of learning. However, most studies, including our work, focus on a single or a batch of inputs. Thus, an effective solution of continuously updating a series of inputs is necessary before model editing becomes a practical technic. Note that our work focuses on the editor's generalized cross-lingual editing ability. We expect the editor to perform the editing honestly. This target potentially offers the possibility to modify model behavior maliciously. Though editing may not soon become a practical technic, the potential risk does exist. ## Acknowledgement This work was supported by the National Key R&D Program of China via grant 2020AAA0106501 and the National Natural Science Foundation of China (NSFC) via grant 62236004 and 61976072. ## References Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. *CoRR*, abs/1607.06450. Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Editing factual knowledge in language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 711 November, 2021, pages 6491–6506. Association for Computational Linguistics. Wanxiang Che, Jiang Guo, and Yiming Cui. 2021. *Natural Language Processing: A Pre-trained Model Approach*. Publishing House of Electronics Industry, Beijing, China. Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. 2021. Infoxlm: An information-theoretic framework for cross-lingual language model pre-training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 3576–3588. Association for Computational Linguistics. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020*, pages 8440–8451. Association for Computational Linguistics. Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In *Advances* in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 7057–7067. 
Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: evaluating crosslingual sentence representations. In *Proceedings of* the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2475–2485. Association for Computational Linguistics. Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. 2022. Knowledge neurons in pretrained transformers. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8493– 8502, Dublin, Ireland. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2021. Transformer feed-forward layers are keyvalue memories. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana,* Dominican Republic, 7-11 November, 2021, pages 5484–5495. Association for Computational Linguistics. Peter Hase, Mona T. Diab, Asli Celikyilmaz, Xian Li, Zornitsa Kozareva, Veselin Stoyanov, Mohit Bansal, and Srinivasan Iyer. 2021. Do language models have beliefs? methods for detecting, updating, and visualizing model beliefs. *CoRR*, abs/2111.13654. Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalization. *CoRR*, abs/2003.11080. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of *JMLR Workshop and Conference Proceedings*, pages 448–456. JMLR.org. Nora Kassner, Philipp Dufter, and Hinrich Schütze. 2021. Multilingual LAMA: investigating knowledge in multilingual pretrained language models. In *Proceedings of the 16th Conference of the European* Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 3250–3258. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Christos Louizos, Max Welling, and Diederik P. Kingma. 2018. Learning sparse neural networks through l_0 regularization. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022a. Locating and editing factual knowledge in GPT. *CoRR*, abs/2202.05262. Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau. 2022b. Mass-editing memory in a transformer. *CoRR*, abs/2210.07229. Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D. 
Manning. 2022a. Fast model editing at scale. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. Eric Mitchell, Charles Lin, Antoine Bosselut, Christopher D. Manning, and Chelsea Finn. 2022b. Memorybased model editing at scale. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 15817–15831. PMLR. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2463–2473. Association for Computational Linguistics. Shibani Santurkar, Dimitris Tsipras, Mahalaxmi Elango, David Bau, Antonio Torralba, and Aleksander Madry. 2021. Editing a classifier by rewriting its prediction rules. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 23359–23373. Anton Sinitsin, Vsevolod Plokhotnyuk, Dmitry V. Pyrkin, Sergei Popov, and Artem Babenko. 2020. Editable neural networks. In *8th International Conference on Learning Representations, ICLR 2020,* Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Lewis Tunstall, Leandro von Werra, and Thomas Wolf. 2022. *Natural Language Processing with Transformers: Building Language Applications with Hugging* Face. O'Reilly Media, Incorporated. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Zirui Wang, Zachary C. Lipton, and Yulia Tsvetkov. 2020. On negative interference in multilingual models: Findings and A meta-learning treatment. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 4438– 4450. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 - Demos, Online, November* 16-20, 2020, pages 38–45. Association for Computational Linguistics. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mt5: A massively multilingual pre-trained text-to-text transformer. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 483–498. Association for Computational Linguistics. 
## A Datasets Preprocessing A.1 Mlama Along with the raw English data from LAMA (Petroni et al., 2019), mLAMA provides translations of the facts in the other 52 languages if possible. mLAMA is organized into two-level. The first level is relations, and the second is facts. Facts in the same relation share the same template and can be identified by the ⟨[X], [Y]⟩. Thus, we split at the fact level for consistency across different data splits. Specifically, we split the whole dataset to train/dev/test with a ratio of 8:1:1, resulting in 628,612/78,555/78,600 facts in the train/dev/test set. In our setting, the template with filled [X] is used as input, and then we test if the edited model predicts [Y] (§4.3). Thus, to avoid leakage and keep large output space, we ensure that an [X] can only appear in one split while not limiting the label [Y]. Take an example of relation P19 ([X] was born in [Y] .) where [X] is a person's name and [Y] is a location. In the training set, if an input is "Allan Peiper was born in [MASK] .", there cannot exist an input in the dev/test set with [X] = Allan Peiper. On the contrary, a [Y] (like "Alexandra") can be used as the desired prediction during both training and testing, because we use it as a label. We first exclude samples in the test set having overlap [X] with the training set, then exclude samples in the dev set overlapping with the train/test. Finally, we obtain 628,612/23,718/53,993 facts in the train/dev/test set after preprocessing summing up all languages. ## A.2 Xnli XNLI dataset contains completely parallel data in fifteen languages. We use Huggingface Datasets to access to XNLI dataset, and follow the official split with 392,702/2,490/5,010 samples in train/dev/test set in each language. ## B Experiment Details We conduct all experiments three times and use the mean of editing success rates as the final performance metric, including main experiments and hyperparameter tuning. ## B.1 Model And Implementation We use bert-base-multilingual-cased from Hugging Face Transformers (Wolf et al., 2020) as the basic model. As the basic model editors, we use MAML-like Editable Training (Sinitsin et al., 2020), and hyper-network based KnowledgeEditor (Cao et al., 2021) and MEND (Mitchell et al., 2022a). For XNLI, the pre-trained model finetuned on the en training set is used as the raw model to edit in all the following experiments. For mLAMA, we use the pre-trained language model directly. We work on the official MEND codebase, together with HuggingFace Transformers and Datasets. During preliminary experiments, we find that MEND in their implementation suffers from low computation efficiency and sub-optimization due to the token-level BatchNorm (Ioffe and Szegedy, 2015) variant used by the editor. Thus, we replace the token-level BatchNorm in MEND editor with LayerNorm (Ba et al., 2016) in our implementation. ## B.2 Hyperparameters Finetuning We use Adam (Kingma and Ba, 2015) optimizer with learning rate of 5 × 10−6. For each input to be edited, the maximum step is set to 100. Learned Editors We follow Mitchell et al. (2022a) in most of the hyperparameter settings of the three monolingual editing approaches we use. For Editable Training and KnowledgeEditor, we set the learning rate to 5×10−5. For MEND, the editor is initialized to an identical mapping and trained by Adam optimizer with the learning rate of 1 × 10−6. For the inner gradient decent updating, the learning rate is set to 1×10−4. For the coefficients of losses, we set λrel = 0.1 and λloc = 1.0. 
Since bert-base-multilingual-cased is used as the raw model, we follow the setting of bert-base of the original MEND, i.e., editing MLPs in the last three layers of the encoder, leaving the other parameters frozen. Language Anisotropic Model Editing The *language anisotropic* varients inherit hyperparameters for architectures and training from their corresponding base editors. The newly introduced training hyperparameters include the learning rate of masks and λmask. We tune the learning rate of masks in {1 × 10−4, 1 × 10−3, 1 × 10−2}, and λmask in {0.01, 0.1, 1}. We pick the best values and apply them to all main experiments, i.e., learning rate of masks of 1 × 10−3, and λmask of 1.0. ## B.3 Training Details We utilize the early-stopping strategy along with up to 500, 000 training steps. When training on the full datasets, we evaluate the model every 100, 000 steps and finalize training when the editing success rate is not improved over 200, 000 steps. When training on the English only subset, the validation interval is set to 20, 000 and the early stop patience is 40, 000 steps. All experiments fit in one NVIDIA RTX 2080Ti GPU, where a single run takes one to three days. ## C Additional Results The large versions with raw data points of Figure 3 and Figure 4 are as follows. ![12_image_0.png](12_image_0.png) ![13_image_0.png](13_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations ✓ A2. Did you discuss any potential risks of your work? Limitations ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 And Appendix A ✓ B1. Did you cite the creators of artifacts you used? Section 4 and Appendix A ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? License and terms will be included in the code repository to be released. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 4 and Appendix A B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4 and Appendix A ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. 
Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix B The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix B ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 and Appendix B ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix B D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
king-flanigan-2023-diverse
Diverse Retrieval-Augmented In-Context Learning for Dialogue State Tracking
https://aclanthology.org/2023.findings-acl.344
There has been significant interest in zero and few-shot learning for dialogue state tracking (DST) due to the high cost of collecting and annotating task-oriented dialogues. Recent work has demonstrated that in-context learning requires very little data and zero parameter updates, and even outperforms trained methods in the few-shot setting. We propose RefPyDST, which advances the state of the art with three advancements to in-context learning for DST.First, we formulate DST as a Python programming task, explicitly modeling language coreference as variable reference in Python. Second, since in-context learning depends highly on the context examples, we propose a method to retrieve a diverse set of relevant examples to improve performance. Finally, we introduce a novel re-weighting method during decoding that takes into account probabilities of competing surface forms, and produces a more accurate dialogue state prediction. We evaluate our approach using MultiWOZ and achieve state-of-the-art multi-domain joint-goal accuracy in zero and few-shot settings.
# Diverse Retrieval-Augmented In-Context Learning For Dialogue State Tracking Brendan King and **Jeffrey Flanigan** University of California, Santa Cruz {bking2,jmflanig}@ucsc.edu ## Abstract There has been significant interest in zero and few-shot learning for dialogue state tracking (DST) due to the high cost of collecting and annotating task-oriented dialogues. Recent work has demonstrated that in-context learning requires very little data and zero parameter updates, and even outperforms trained methods in the few-shot setting (Hu et al., 2022). We propose RefPyDST, which advances the state of the art with three advancements to in-context learning for DST. First, we formulate DST as a Python programming task, explicitly modeling language coreference as variable reference in Python. Second, since in-context learning depends highly on the context examples, we propose a method to retrieve a diverse set of relevant examples to improve performance. Finally, we introduce a novel re-weighting method during decoding that takes into account probabilities of competing surface forms, and produces a more accurate dialogue state prediction. We evaluate our approach using MultiWOZ and achieve state-of-the-art multi-domain joint-goal accuracy in zero and few-shot settings.1 ## 1 Introduction Dialogue state tracking (DST) is an important language understanding task required for supporting task-oriented conversational agents. For each turn in a dialogue, the goal of DST is to extract the intentions and arguments a user communicates into a meaning representation aligned with the capabilities of the system. Often, this can be represented as a set of slot-value pairs, using slots defined in a system schema. For example, if a user asks a hotel booking agent for "a four-star hotel with somewhere to park", the agent could extract the state {(hotel-stars, 4),(hotel-parking, yes)}. Annotating these turn-level dialogue states is challenging and time-intensive (Budzianowski et al., 2018). Further, as system capabilities evolve 1Our code: https://github.com/jlab-nlp/RefPyDST ![0_image_0.png](0_image_0.png) Figure 1: Our retrieval-augmented in-context learning approach to DST. We construct a prompt which re-frames DST as a Python programming task conditioned on a system definition and set of retrieved examples Ek (green). For each dialogue turn t, the goal is to take the current state (state) and turn utterances (print(...)) as 'input' and produce a program which *updates* the state with missing values, i.e. (restaurant-area, west). We represent linguistic coreference explicitly as variable reference (pink) over time, the schema and DST requirements change. As such, flexible and data-efficient DST methods are highly valuable. For these reasons, recent work has explored zero and few-shot methods for DST. Few-shot methods often fine-tune a pre-trained language model (LM) on DST or a re-framing of the task (e.g. Su et al., 2021; Shin et al., 2022; Lin et al., 2021a). While these systems are often data efficient, they are inflexible to changing system definitions, requiring re-training as new services are added. To address this, zero-shot methods for domain transfer have been proposed (e.g. Wu et al., 2019; Hosseini-Asl et al., 2020; Gupta et al., 2022), but their performance in new domains can significantly depend on conceptual overlap with training domains (Wu et al., 2019). 
The in-context learning framework (ICL) (Brown et al., 2020) is particularly appealing in this setting given that it is highly data-efficient and flexible: instead of fine-tuning, ICL methods prompt a fixed LM with templated examples for a task. This approach requires no re-training when adapting to schema changes. In recent work, Hu et al. (2022) find that prompting a language model with examples for DST in a text-to-SQL format can outperform fine-tuned zero and few-shot methods. In this work, we propose **RefPyDST**, a retrievalaugmented in-context learning approach to DST for use with language models pre-trained on code, such as OpenAI Codex (Chen et al., 2021), by building on recent ICL methods for DST (Hu et al., 2022). Our approach advances the state of the art with three key contributions. First, we develop a novel in-context prompt that re-frames DST as text-to-python, explicitly modeling slot value coreferents using variables. We provide an overview of this prompt and example of such coreference in Figure 1. We demonstrate that this approach significantly improves system performance in the zero and few-shot settings, and particularly improves accuracy on predictions requiring coreference resolution. Second, we introduce a novel method for diverse supervised example retrieval, which yields a set of in-context examples Ek that are both individually relevant and collectively representative of the output space, inspired by maximum marginal relevance (MMR) (Goldstein and Carbonell, 1998). Our approach significantly improves performance in few-shot settings, overcoming a failure mode in supervised example retrieval in which examples are each similar to an input x but redundant in the outputs they demonstrate. Third, we propose a novel scoring method PMIβ which compensates for surface-form competition among sampled LM completions in constrained generation settings. Inspired by Holtzman et al. (2021), we re-weigh each completion y by an estimate of its a priori likelihood in the task context. We find this improves system performance in both the zero and few-shot settings. Together, our contributions address key challenges in DST and in retrieval-augmented ICL generally. Our method produces state-of-the-art results on MultiWOZ 2.1 and 2.4 DST benchmarks across a variety of few-shot settings. Similarly, we obtain a new zero-shot state-of-the-art in the multi-domain setting. ## 2 Task Definition A task-oriented dialogue consists of turns or paired utterances between a user and an agent which interfaces the user with a programmable system. At each turn t, the purpose of a DST module is to use the dialogue history up to that turn to predict a dialogue state yt, which represents the user's goal and progress in using the system. Let Ai be an agent utterance, Ui be a user utterance, and Ct = [(A1, U1),(A2, U2), ...(At, Ut)]2 be the dialogue history up to turn t. The task is to map the history Ctto a state representation yt. In this work, we predict dialogue states yt which can be represented as slot-value pairs: ## Yt = {(S1, V1),(S2, V2)...(Sn, Vn)} where each slot si and the types of values it permits are defined in a system schema. For example, an agent supporting hotel reservations might have a slot 'hotel-parking' taking boolean values for constraining search to hotels that include parking. We can equivalently define this task as predicting *state changes*, as proposed in Hu et al. (2022). 
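To ground the notation, the following is a small illustrative sketch, not taken from the paper's codebase, of a dialogue state and a state change as Python slot-value maps; the specific slot names and values are assumptions in MultiWOZ style.

```python
# Illustrative only: a dialogue state y_{t-1}, a state change Δy_t, and the
# resulting state y_t, all represented as slot-value dictionaries.
prev_state = {"hotel-stars": "4", "hotel-parking": "yes"}           # y_{t-1}

# Hypothetical user turn: "Book it for 2 nights, and I also need a
# restaurant in the west."
state_change = {"hotel-book stay": "2", "restaurant-area": "west"}  # Δy_t

# Applying the change to the previous state yields the new dialogue state.
new_state = {**prev_state, **state_change}                          # y_t
```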
Let xt = [yt−1,(At, Ut)] be a dialogue *context* consisting of the previous dialogue state prediction and utterances for the current turn. Using this turn context xt, we predict a state change: ## ∆Yt = {+(Si, Vi)... − (Sj , Vj )...} where ytis computed by applying the difference ∆ytto yt−1. This approach has two advantages for few-shot in-context learning. First, the turn context xt requires fewer tokens to represent than the complete history Ct, permitting more in-context examples. Second, the number of distinct state changes ∆yt observed in practice is much smaller than the number of distinct states yt, simplifying the search for relevant examples and the generation problem. For these reasons, we formulate our DST problem as mapping from the turn context xtto a state change ∆yt. For readability, we often use 'turn' to refer to this turn context xt, distinguishing it from the history Ct or turn number t using notation. 2For user-initiated dialogues, A1 may be omitted ## 3 Methods Given a dialogue turn t, our method produces a state change ∆yt by (1) retrieving a set of incontext examples Ek, (2) formatting these into a prompt f*prompt*(xt, Ek), (3) generating and scoring possible program solutions (LM completions) with OpenAI Codex (Chen et al., 2021), (4) executing the program to compute a state change ∆yt. Given the state change, we compute the complete dialogue state yt by applying the difference to yt−1. We describe our prompting function f*prompt*(xt, Ek), in § 3.1. In § 3.2, we describe our method for retrieving a diverse and representative set of examples Ek. Finally, we describe our method for scoring LM completions with a pointwise mutual information estimate in § 3.3. ## 3.1 Prompting With Text-To-Python We design a novel prompt that re-frames DST as a text-to-Python task, allowing us to explicitly represent coreference phenomena and leverage the unique capabilities of language models pre-trained with code. Figure 1 provides an overview. Formally, we define a prompting function f*prompt*(xt, Ek), which takes a test dialogue turn xt and a set of k in-context examples Ek = {(x1, ∆y1)*, ...*(xk, ∆yk)} and produces a string representing the program synthesis task. Our prompt (Figure 1) starts with a task definition represented as a set of Python classes corresponding to each DST domain. Each informable slot is an attribute in the appropriate class. Type hints are used to label categorical slots with their values and non-categorical slots with the most appropriate type. The dialogue state is also represented as an object which can be manipulated, having an attribute per-domain. We represent instances of our programming synthesis task with in-context python examples. Each in-context example ([yt−1, At, Ut], ∆yt) is represented as follows: the previous dialogue state yt−1 is represented as a dictionary, mapping slot names to values. Non-categorical values such as names are de-lexicalized by replacing their string value with a variable referencing their existing value in the state. Solutions to the programming task are represented as function calls that manipulate the dialogue state. One of the key benefits of our formulation of the DST task as python is explicit representation of coreference phenomena. For example, the solution corresponding to a user input "find me a restaurant in the same area as my hotel" would be state.restaurant = find_restaurant(area = state.hotel.area), explicitly modeling the resolution of the linguistic coreference. 
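To illustrate the prompt format just described, here is a minimal sketch of how one in-context example might be rendered as Python; the class names, slot attributes, and `find_*` function names are assumptions for illustration and are not necessarily the identifiers used in the authors' released prompt (the coreference line mirrors the example given above).

```python
# A hypothetical rendering of one in-context example in the text-to-Python
# prompt style: schema classes, previous state, turn utterances, and the
# state-change "solution" with coreference expressed as variable reference.
EXAMPLE_PROMPT = '''
class Hotel:
    name: str
    area: str      # one of: north, south, east, west, centre
    parking: str   # one of: yes, no

class Restaurant:
    name: str
    area: str
    food: str

# previous dialogue state y_{t-1} (non-categorical values de-lexicalized)
state.hotel = find_hotel(area="east", parking="yes")

print("[system] I have booked that hotel for you. Anything else?")
print("[user] Yes, I want a restaurant in the same area as my hotel.")

# solution: the state change, with coreference as variable reference
state.restaurant = find_restaurant(area=state.hotel.area)
'''

print(EXAMPLE_PROMPT)
```

At inference time, several such examples would be concatenated with the test turn, and the model's generated function call is parsed and applied to update the dialogue state.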
## 3.2 Retrieving Diverse Relevant Examples

We propose a method for in-context example selection that produces an example set Ek that is both relevant to a test turn xt and diverse, representing the relevant portions of the output space. We first learn an embedding space in which similar state changes have high cosine similarity with one another (§3.2.1), following (Hu et al., 2022). Using this, we propose a novel method for decoding Ek such that examples are similar to xt but dissimilar to each other (§3.2.2).

## 3.2.1 Retriever Training

We fine-tune an embedding model to approximate the true similarity between two turn contexts xi, xj with the *cosine similarity* between their encoded representations, following prior work (Hu et al., 2022; Rubin et al., 2021). Let D*train* be a set of dialogue turns serving as training data for an example retriever and as the selection pool at inference time. As described in §2, each example ei ∈ D*train* is a context state-change pair ei = (xi, ∆yi). A single example ei is shown in the green box in Figure 1. We encode an example or query turn context x = [yt−1,(At, Ut)] by concatenating each element of the turn context and passing the result through an embedding model emb.3 For two example turn contexts xi, xj, the cosine similarity between their embeddings cos(emb(xi), emb(xj)) approximates their relevance to each other. At inference time, we can embed a test turn xt and retrieve highly similar examples with nearest neighbors search. We fine-tune our embedding model with a supervised contrastive loss, such that high cosine similarity of representations correlates with high similarity between dialogue state changes, following the procedure in Hu et al. (2022). For our learning objective, we assume a metric simF1 that gives the *true* similarity between the dialogue state changes of a pair of turns, which we define below. For each dialogue turn in the training set, we use simF1 to define positive and (hard) negative examples as the top and bottom 5% of the current nearest 200 examples, respectively. We train each retriever for 15 epochs using the hyperparameters detailed in Appendix C.

3We use all-mpnet-base-v2 (Song et al., 2020), available in sentence-transformers (Reimers and Gurevych, 2019).

We define the ground-truth similarity simF1 between two dialogue state changes as follows. Let ∆y^a = {(s^a_1, v^a_1), ..., (s^a_m, v^a_m)} and ∆y^b = {(s^b_1, v^b_1), ..., (s^b_n, v^b_n)} be two dialogue state changes. For any slot value vi exhibiting coreference to another slot sj, we replace vi with sj. For example, the state change corresponding to a turn "I need a taxi to my hotel" would become {(taxi-destination, hotel-name)}, regardless of the particular hotel name value. We then compute true state similarity as the average of the F1 score comparing updated slots and the F1 score comparing updated slot-value pairs, as proposed in Hu et al. (2022):

$$\begin{split}\operatorname{sim}_{F_{1}}(\Delta y^{a},\Delta y^{b})&=\frac{1}{2}F_{1}(\{s_{1}^{a},...\},\{s_{1}^{b},...\})+\\ &\frac{1}{2}F_{1}(\{(s_{1}^{a},v_{1}^{a}),...\},\{(s_{1}^{b},v_{1}^{b}),...\})\end{split}$$
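To make this similarity concrete, here is a minimal sketch, not the authors' implementation, of computing simF1 over two state changes; it assumes coreferent values have already been replaced by the slot names they refer to, and the edge-case handling of empty state changes is an assumption.

```python
# Sketch of sim_F1: the average of slot-name F1 and slot-value-pair F1
# between two dialogue state changes, each given as a dict of slot -> value.
def f1(a: set, b: set) -> float:
    if not a and not b:
        return 1.0          # assumed convention for two empty state changes
    overlap = len(a & b)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(b), overlap / len(a)
    return 2 * precision * recall / (precision + recall)


def sim_f1(delta_a: dict, delta_b: dict) -> float:
    slot_f1 = f1(set(delta_a), set(delta_b))                  # slot names only
    pair_f1 = f1(set(delta_a.items()), set(delta_b.items()))  # (slot, value) pairs
    return 0.5 * slot_f1 + 0.5 * pair_f1


# Example: same slot updated with different values gives similarity 0.5.
assert sim_f1({"restaurant-area": "west"}, {"restaurant-area": "centre"}) == 0.5
```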
Particularly for encoders that are fine-tuned to approximate output similarity, this yields a set of examples that is more representative of the output space than simply selecting the nearest k, which may all have the same label. Formally, we define the ideal set of in-context examples E ∗ k for an input xtto be the k examples satisfying: $$\begin{array}{c}{{{\mathcal{E}}_{k}^{*}=a r g m a x\sum_{x_{i}\in{\mathcal{E}}_{k}}c o s(e m b(x_{t}),e m b(x_{i}))}}\\ {{-\alpha\sum_{x_{i},x_{j}\in{\mathcal{E}}_{k}}c o s(e m b(x_{i}),e m b(x_{j}))}}\end{array}$$ where the hyperparameter α is a dissimilarity factor and α = 0 corresponds to typical nearest-k example selection. We greedily approximate E ∗ k by iteratively selecting the example which maximizes the equation at each step. For more efficient decoding of Ek with large selection pools, we limit the considered examples to the nearest N such that |Dtrain| *>> N >> k*. For example in one run in the 5% MultiWOZ few-shot setting, |D*train*| = 2754, N = 100, and k = 10. ## 3.3 Decoding With Point-Wise Mutual Information We introduce a new rescoring function, PMIβ, to mitigate surface form competition when generating from language models, that we use for making predictions in our setting. PMIβis an extension of PMIDC, which was proposed in Holtzman et al. (2021) for mitigating surface form competition in the classification setting. We first describe surface form competition and PMIDC (§3.3.1), and then describe PMIβ, an adaptation of this method to the constrained generative setting with in-context examples (§3.3.2). ## 3.3.1 Surface-Form Competition Conditioned on a prompt, a language model assigns a likelihood to all completing strings, from which we can sample. While string likelihoods can be used as a proxy for output class or structure likelihoods, these are not the same. For example, in our DST formulation, many strings can correspond to the same state change ∆yt, or may not correspond to a valid state change at all. As such, Holtzman et al. (2021) argue string likelihoods can be unreliable for scoring the best among a fixed set of choices which may each contain numerous surface forms in V ∗. To compensate for this, they propose scoring with Domain Conditional Point-wise Mutual Information (PMIDC = P(y|*x,domain*) P(y|*domain*) ). This re-weighs choices by a priori likelihood of their string form in the task context P(y|*domain*). ## 3.3.2 Scoring With Pmiβ To mitigate surface-form competition, we propose PMIβ: a prompt conditional pointwise mutual information scoring method that adapts PMIDC to our constrained generative setting with in-context examples. Doing so requires overcoming two key challenges. First, our choices to score amongst are not practically enumerable. Second, the task context we condition on is partly defined by our choice of in-context examples Ek. We overcome these by first generating a small set of plausible completions C and their likelihoods according to a language model. 
Then, we re-weigh these likelihoods according to an estimate of their a priori likelihood conditioned on only the task context and selected examples Ek: $$P M I^{\beta}(x;y|{\mathcal{E}}_{k})={\frac{P(y|f_{p r o m p t}(x_{t},{\mathcal{E}}_{k})))}{P(y|f_{p r o m p t}^{\prime}({\mathcal{E}}_{k}))^{\beta}}}\quad(1)$$ ![4_image_0.png](4_image_0.png) where f ′*prompt* is a prompt designed for estimating P(y|Ek) without conditioning on xt, described below, and β is a hyperparameter for adjusting the impact of re-weighing by a priori likelihood.4 To generate the candidate completions C, we sample a set of plausible candidates using nucleus sampling (Holtzman et al., 2020). While one could simply use the language model to compute P(y) directly, such unconditional estimates tend to vary wildly. Following Holtzman et al. (2021), we instead estimate the probability of the completion in context, but further account for the use of in-context examples. To do this, we construct an additional prompt which contains the same problem definition, but reverses the order outputs and inputs. Using this, we can estimate the probability of a completion y in the context of our task and examples without xt, illustrated in Figure 2. Finally, we select the completion yˆ which maximizes Eq. 1, and parse it to a dialogue state change ∆yt: $$\hat{y}=\underset{y\in\mathcal{C}}{\operatorname{argmax}}\,P M I^{\beta}(x;y|\mathcal{E}_{k})$$ We choose a minimum a priori likelihood of between 10−7and 10−5, as estimates for P(y|f ′*prompt*(Ek)) can be very low, particularly when rare slot values implied by xt are not present in any example. When constructing our candidate set C, we choose the five most likely sampled com-4While only β = 1 corresponds neatly to a point-wise mutual information estimate *pmi(x*t; y), we find 0 *< β <* 1 to be more effective in practice. Prior work in terminology extraction has also proposed scaling PMI estimates, though in a different context (Daille, 1994) pletions under the original prompt. Finally, we canonicalize each completion y when computing P(y|f ′*prompt*(Ek)) by first parsing it to a dialogue state change, and then re-writing it as a string in the form as if it were an example in Ek. In effect, this normalizes mis-spellings and enforces the expected order of keyword arguments in the update string, further controlling for high variance in our estimates. ## 4 Experiments We describe our zero and few-shot experimental setups, evaluation, and baselines. Hyperparameter and implementation details can be found in Appendix C. ## 4.1 Experimental Settings We conduct zero and few-shot DST experiments on the MultiWOZ dataset (Budzianowski et al., 2018), containing over ten thousand multi-domain taskoriented dialogues crowd-sourced in a wizard-of-oz setup. There are five domains in the validation/test sets and a total of thirty informable slots. We evaluate on the newest MultiWOZ 2.4 (Ye et al., 2022a). For comparison with prior work, we also report on MultiWOZ 2.1 (Eric et al., 2020). We evaluate performance with standard jointgoal accuracy (JGA) for all of our experiments. For a turn xt, a dialogue state prediction yˆtis considered correct only if all slot names and values exactly match the ground-truth state yt. 
For the few-shot setting, following (Wu et al., 2020), we sample 1%, 5%, or 10% of the dialogues from the training set to serve as a training | MultiWOZ 2.1 | MultiWOZ 2.4 | | | | | | | | |---------------------------------------|----------------|------|------|------|------|------|------|------| | Model | 1% | 5% | 10% | 100% | 1% | 5% | 10% | 100% | | TRADE (Wu et al., 2019) | 12.6 | 31.2 | 36.2 | 46.0 | - | - | - | 55.1 | | DiSTRICT (Venkateswaran et al., 2022) | 13.4 | 41.3 | 49.7 | 56.1 | - | - | - | - | | DS2 (Shin et al., 2022) | 33.8 | 44.2 | 45.4 | 52.3 | 36.8 | 49.9 | 51.1 | 57.9 | | IC-DST Codex (Hu et al., 2022) | 43.1 | 47.1 | 48.7 | 50.7 | 48.4 | 55.4 | 56.9 | 62.4 | | RefPyDST (ours) | 47.3 | 49.6 | 50.8 | 52.0 | 55.2 | 62.3 | 62.5 | 65.2 | set D*train* for each experiment. We fine-tune our retriever using D*train* and select in-context examples from it. We conduct three independent runs for each sample size and report the average JGA across runs. We also perform a single run in the full setting, using 100% of the training data. For the zero-shot setting, there are no labeled examples to select from, but a single formatting example is used for all inference turns, as in (Wang et al., 2022; Hu et al., 2022). We consider two evaluation settings. The first is the typical assessment on all test set dialogues, as in few-shot and complete training regimes, which we will refer to as the standard MultiWOZ benchmark. These results allow comparison to few-shot and full-data results, as well as other methods which use zero supervised dialogues in training. We also report results on the MultiWOZ 'leave-one-out' benchmark for zero-shot transfer methods (Wu et al., 2019), reporting JGA considering only slots in each individual domain, as well as the average of these five single-domain results. We compare to a number of prior state-of-the-art zero-shot and few-shot DST methods as baselines. These include DST specific architectures (Wu et al., 2019), various fine-tuning methods (Gupta et al., 2022; Shin and Van Durme, 2022; Venkateswaran et al., 2022), and a strong ICL baseline (Hu et al., 2022). ## 5 Results Few-shot DST on MultiWOZ We present fewshot and full-shot dialogue state tracking results on MultiWOZ 2.1 & 2.4 in Table 1. We find that our method achieves state-of-the-art in the 1%, 5%, and 10% few-shot settings for both MultiWOZ 2.1 & 2.4, outperforming all fine-tuned methods as well as other in-context learning methods. While all methods considered improve with additional data, our method is remarkably data efficient: RefPyDST achieves 95% of its full-shot performance using only 5% of the training data, on average. In comparison, using 5% of the training data with IC-DST Codex only achieves 89% of its full-shot performance. Zero-shot DST on MultiWOZ We present zeroshot multi-domain results on MultiWOZ 2.4 in Table 3. We find our method outperforms all zeroshot methods, achieving a 12.4% increase in multidomain JGA over IC-DST Codex, our strongest performing baseline. Comparisons are limited to methods that use zero training data, as opposed to transfer methods that train on some MultiWOZ domains and evaluate on others. For comparison with domain transfer methods, we present zero-shot results on the leave-one-out benchmark for MultiWOZ 2.1 & 2.4 in Table 2. 
Following prior work, we evaluate only dialogues and slots in the held-out domain.5 Evaluating average performance in this setting, we find our method outperforms all methods except for the current state-of-the-art transfer method, SDT-seq. Their method outperforms ours by 1.5% on each held-out domain on average. However, transfer methods such as SDT-seq require significant out-of-domain DST training data, while ours requires none. Despite this training data disadvantage, our approach outperforms all other zero-shot transfer methods.

5Prior work on the leave-one-out setting evaluates using the following method: (1) filter to dialogues which *contain* the held-out domain (this can include dialogues in multiple domains) and (2) only check slots in that domain when computing JGA (Wu et al., 2019).

| Model | attraction | hotel | restaurant | taxi | train | Avg. |
|-----------------------------------------|---------|--------------|--------|---------|--------|------|
| MultiWOZ 2.1 | | | | | | |
| TRADE (Wu et al., 2019) † | 20.1 | 14.2 | 12.6 | 59.2 | 22.4 | 25.7 |
| TransferQA (Lin et al., 2021a) † | 31.3 | 22.7 | 26.3 | 61.9 | 36.7 | 35.8 |
| DiSTRICT (Venkateswaran et al., 2022) † | 33.4 | 22.4 | 24.0 | 66.6 | 47.7 | 38.8 |
| D3ST (Zhao et al., 2022) † | 56.4 | 21.8 | 38.2 | 78.4 | 38.7 | 46.7 |
| SDT-seq (Gupta et al., 2022) † | 74.4 | 33.9 | 72.0 | 86.4 | 62.9 | 65.9 |
| IC-DST (Hu et al., 2022) | 60.0 | 46.7 | 57.3 | 71.4 | 49.4 | 57.0 |
| RefPyDST (ours) | 70.9 | 51.2 | 65.6 | 67.1 | 69.2 | 64.7 |
| MultiWOZ 2.4 | | | | | | |
| IC-DST Codex (Hu et al., 2022) | 62.1 | 53.2 | 54.9 | 71.9 | 51.4 | 58.7 |
| RefPyDST (ours) | 74.5 | 56.6 | 68.2 | 68.5 | 76.1 | 68.8 |

Table 2: Zero-shot joint-goal accuracy (JGA) for each domain in MultiWOZ 2.1 & 2.4 in the leave-one-out setup. We report results on each held-out domain and the average held-out domain performance (Avg.). Domain transfer methods (marked with †) learn from dialogues in the other four domains and are tested on the held-out domain. Unlike domain transfer methods, IC-DST and our method do not use any DST data. Following prior work, we evaluate only dialogues and slots in the held-out domain. For full evaluation of all dialogues in the zero-shot setup, see Table 3.

| MultiWOZ 2.4 | |
|---|---|
| IC-DST Codex (Hu et al., 2022) | 35.3 |
| RefPyDST (ours) | **47.9** |

Table 3: Zero-shot (zero DST training data) multi-domain JGA evaluated on MultiWOZ 2.4. Our method achieves state-of-the-art for this setting. Comparisons with zero-shot transfer methods, which train on subsets of the MultiWOZ dataset, can be found in Table 2.

## 6 Analysis & Ablations

In this section, we further analyze the performance characteristics of our method.

**Ablations** In order to assess how each part of our method contributes to performance, we conduct a leave-one-out ablation, as well as reporting the performance of using only our prompting method. Each ablation is conducted using a 20% sample of the development data in the MultiWOZ 2.4 dataset (200 dialogues), sampled independently of the set used to tune hyperparameters. We present results in Table 4 for the zero-shot and 5% few-shot settings. In the few-shot setting, we find leaving out our diverse retrieval to be most impactful.

| Few-Shot (5%) | |
|--------------------|------|
| IC-DST (baseline) | 52.4 |
| RefPyDST - Python | 54.8 |
| RefPyDST - diverse | 54.6 |
| RefPyDST - PMIβ | 56.1 |
| RefPyDST (full) | 57.9 |
| Zero-Shot | |
| IC-DST (baseline) | 43.0 |
| RefPyDST - Python | 40.7 |
| RefPyDST - PMIβ | 46.0 |
| RefPyDST (full) | 46.7 |

Table 4: Leave-one-out ablation of our method, reporting JGA on a 20% sample of the MultiWOZ 2.4 development set in the 5% few-shot and zero-shot settings.

**Does using Python improve coreference resolution?** Since our Python prompting method explicitly models coreference through variable reference, we analyzed how our system performed on state predictions requiring coreference resolution.
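As a toy illustration of what "coreference as variable reference" means in plain Python: a later update can read an earlier slot's value through an attribute reference instead of repeating the surface string. The classes and state layout below are invented for exposition and do not reproduce our actual prompt format.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class BeliefState:
    # One slot -> value dict per domain; a simplified, illustrative layout.
    hotel: Dict[str, str] = field(default_factory=dict)
    restaurant: Dict[str, str] = field(default_factory=dict)

state = BeliefState()

# Turn 1: "I need a hotel in the centre."
state.hotel["area"] = "centre"

# Turn 2: "Also find me a restaurant in the same area."
# The update refers back to the earlier slot by reading a variable,
# rather than re-stating the value "centre".
state.restaurant["area"] = state.hotel["area"]

print(state.restaurant["area"])  # centre
```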
Using coreference annotations released with the 2.3 version of the MultiWOZ dataset (Han et al., 2021), we evaluate accuracy on slot values which require coreference to resolve. Our results are presented in Table 5. Overall, our full model improves upon the baseline for coreference. Removing Python greatly reduces our model's performance, demonstrating the benefit of modeling coreference as Python variable reference.

| Model | 0% | 5% |
|------------------------|--------|--------|
| IC-DST (baseline) | 67.7 | 78.9 ∗ |
| RefPyDST (prompt only) | 77.1 ∗ | 77.9 ∗ |
| RefPyDST - Python | 62.9 | 73.0 |
| RefPyDST (full) | 76.8 ∗ | 81.8 |

Table 5: Accuracy (%) on slot values requiring coreference resolution, in the zero-shot (0%) and 5% few-shot settings.

**Does our retrieval method improve demonstrated label diversity?** We investigate to what degree our diverse decoding procedure increases diversity in the distribution of demonstrated labels for a given input. To approximate a label, we define S(ei) as the distinct combination of slot names in the output for an in-context example ei = (xi, ∆yi), ignoring assigned values.

First, we simply count the average number of distinct combinations of slot names in Ek, shown in the upper half of Table 6. For each xt, we retrieve a set of in-context examples Ek. We count the number of distinct slot combinations across each ei ∈ Ek, and report the development set average. A value of 1 indicates the retriever is fully redundant: all k examples demonstrate the same combination of slots, while a value of k indicates every example in Ek is unique. Second, we consider the entropy of slot combinations present in Ek, shown in the lower half of Table 6. For each xt, we again compute S(ei) for each retrieved example in Ek. We then compute the specific conditional entropy H(S|X = xt), estimating the probability of each slot combination p(S|xt) using its frequency in Ek. We report the development set average of the conditional entropy, H(S|X). H(S|X = xt) = 0 indicates a fully redundant retriever that retrieves the same set of slots for all examples, and a uniform distribution of slot combinations yields H(S|X = xt) = log2(k).6

6While this is true of a uniform distribution over demonstrated slot combinations, we find uniformly sampling from D*train* yields an entropy of ∼ 2.6, as the distribution of labels in the training data is not uniform.

We find our retrieval methods increase the diversity of in-context examples across all settings. For a given training set size, we see that diverse decoding increases the number of distinct 'labels', measured by S(ei), as well as the entropy H(S|X). Still, selected examples are not random, as we can see when comparing H(S|X) to a random retriever which uniformly samples from D*train*.7 Finally, we see that as the size of the training set increases, the diversity in exemplified labels for a
given choice of α *decreases*. Increasing training data leads to a higher density of each slot combination, requiring more aggressive discounting to achieve the same diversity in Ek. As such, we increase α with training set size, using α = 0.2 for the 1% and 5% settings and α = 0.3 & α = 0.5 for the 10% and 100% settings, respectively.

7In Appendix D, we also compare few-shot task performance for our retrieval method against random retrieval.

| | 1% | 5% | 10% | 100% |
|---|---|---|---|---|
| Number of Distinct S in Ek | | | | |
| random | 7.1 | 7.2 | 7.2 | 7.3 |
| top-k | 3.4 | 2.2 | 1.8 | 1.5 |
| diverse (α = .2) | 5.3 | 4.1 | 3.3 | 2.2 |
| diverse (α = .3) | 5.7 | 4.5 | 3.5 | 2.3 |
| diverse (α = .5) | 7.5 | 5.7 | 4.8 | 2.8 |
| Entropy H(S\|X) | | | | |
| random | 2.6 | 2.6 | 2.6 | 2.6 |
| top-k | 1.2 | 0.63 | 0.47 | 0.30 |
| diverse (α = .2) | 1.8 | 1.5 | 1.1 | 0.64 |
| diverse (α = .3) | 1.9 | 1.6 | 1.2 | 0.68 |
| diverse (α = .5) | 2.7 | 2.0 | 1.7 | 0.93 |

Table 6: Diversity of retrieved example sets Ek on the development set, measured as the number of distinct slot combinations S (upper half) and the conditional entropy H(S|X) (lower half), for each training set size.

## 7 Related Work

**Dialogue State Tracking** There has been a recent increase in work on zero- and few-shot DST systems. Many approaches fine-tune a pre-trained language model by re-framing DST as some form of text-to-text or auto-regressive language modeling task (Wu et al., 2020; Peng et al., 2021; Hosseini-Asl et al., 2020; Su et al., 2021; Shin et al., 2022; Lin et al., 2021b; Gupta et al., 2022; Li et al., 2021; Xie et al., 2022). Many of these methods also exhibit zero-shot transfer capabilities (Wu et al., 2019; Gupta et al., 2022; Li et al., 2021; Hosseini-Asl et al., 2020). However, these approaches still require re-training when a domain is added or changed, and zero-shot transfer performance is dependent on the relatedness of the new domain to existing ones. Some recent works instead model DST as an in-context learning problem (Hu et al., 2022; Xie et al., 2022; Madotto et al., 2021), bypassing the need for re-training when system definitions change. In particular, we build on the work of Hu et al. (2022), which models DST by predicting dialogue state changes at each turn, relying on only a state summary and agent/user turn utterances for inference. Their work models DST as a text-to-SQL problem, whereas we model it as a Python programming problem with novel methods for selecting in-context examples and scoring language model completions.

**In-Context Learning** Some recent works explore the properties of effective in-context examples. In classification settings, Gao et al. (2021) find random examples can significantly limit performance, and propose using a pre-trained embedding model to find examples semantically close to x, retrieving one per class. Other works investigate the role of examples in ICL performance in detail, finding that ICL methods perform best when example inputs and test inputs are as close in distribution as possible, and when the distribution of exemplified labels closely matches the target distribution (Min et al., 2022; Liu et al., 2022). Paralleling this, a number of works across NLP tasks propose methods for retrieving relevant in-context examples. Pasupat et al. (2021) use an unsupervised embedding model to embed a test input x and all available examples, retrieving the k with highest embedding cosine similarity. Other works use a similar dense retriever but in an embedding space learned with supervision. Rubin et al. (2021) fine-tune an example retriever with contrastive learning in which positive examples maximize pLM (y|*x, e*i). Hu et al. (2022) propose a contrastive learning objective specific to DST, fine-tuning an embedding model to embed turns with similar state changes in proximity to each other.
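The contrast between top-k and diverse retrieval analysed above can be made concrete with a small maximum-marginal-relevance-style selection loop. This is only a sketch: it assumes example relevance is cosine similarity between turn embeddings and that redundancy is discounted by a factor α; the exact scoring used in our diverse decoding may differ in detail.

```python
import numpy as np

def select_diverse_examples(query_vec, example_vecs, k=10, alpha=0.3):
    """Greedily pick k examples similar to the query but dissimilar to
    examples already selected (MMR-style). Returns indices into example_vecs."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    relevance = [cos(query_vec, e) for e in example_vecs]
    selected = []
    while len(selected) < min(k, len(example_vecs)):
        best_i, best_score = None, -np.inf
        for i, e in enumerate(example_vecs):
            if i in selected:
                continue
            # Penalize candidates that resemble something already picked.
            redundancy = max((cos(e, example_vecs[j]) for j in selected), default=0.0)
            score = relevance[i] - alpha * redundancy
            if score > best_score:
                best_i, best_score = i, score
        selected.append(best_i)
    return selected

# Toy usage with 2-d "embeddings"; alpha=0 would reduce to plain top-k.
query = np.array([1.0, 0.0])
pool = [np.array([0.9, 0.1]), np.array([0.95, 0.05]), np.array([0.2, 0.8])]
print(select_diverse_examples(query, pool, k=2, alpha=0.5))  # indices of selected examples
```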
Rather than use a separate retrieval module, Shin and Van Durme (2022) use the LM itself to select examples which are most likely when conditioned on x. Given a test input x, each of these works scores the relevance of an individual example eito a test input x and then selects the k most relevant ones to include in a prompt. In most cases, this yields a set of examples Ek which are meaningfully similar to x. However, considering examples individually does not necessarily lead to adequate exemplification of the output space. In supervised settings that learn a relevance metric which approximates output similarity, this can lead to degenerate examples sets Ek which all exemplify the same output. In contrast to this, we propose a novel method for using this score to construct Ek with examples that are relevant to x while being distinct from each other. In concurrent work to our own, Ye et al. (2022b) propose a method for decoding diverse examples of explanations from a retriever for use in reasoning problems, also based on maximum-marginalrelevance (MMR) (Goldstein and Carbonell, 1998). Their work uses unsupervised measures of similarity between explanations, where ours uses a supervised retriever which approximates similarity of outputs. Thus, diversity in our example sets correlates to diversity in exemplified outputs. In another concurrent work to our own (Levy et al., 2022) propose a method for diverse example selection in a semantic parsing task, using the outputs of selected examples to incrementally cover more structures in Ek. For tasks which can be re-framed as program synthesis, a number of works have also developed ICL methods for use with LMs pre-trained on code such as Codex and Codegen (Chen et al., 2021; Nijkamp et al., 2022). Shin and Van Durme (2022) use ICL with Codex to generate Lisp-like programs in a dialogue semantic parsing task. Rajkumar et al. (2022) evaluate such models capabilities in Text-toSQL problems, and Hu et al. (2022) use a Text-toSQL framing to use Codex for DST. Instead of SQL queries, we generate Python programs, allowing for intuitive modeling of phenomena like coreference. Finally, recent works have considered adjusting how completion strings are scored with an LM. Brown et al. (2020) normalize log-likelihoods by length before scoring completions. Zhao et al. (2021) re-weigh LM probabilities by learning an affine transformation that yields uniform scores given 'content-free inputs'. Holtzman et al. (2021) propose PMIDC, a method for re-scoring completions using pointwise mutual information (pmi), which we adapt to our constrained generative setting. ## 8 Conclusion We propose RefPyDST, an in-context learning method for DST. Our contributions address key challenges in DST and in retrieval-augmented ICL, producing state-of-the-art results on MultiWOZ DST benchmarks for few-shot and zero-shot setups. Future work could apply methods developed here to other in-context learning problems. ## 9 Limitations While in-context learning methods for DST are promising in their data efficiency and flexibility to new domains, they typically require very large models to perform effectively. At 175 billion parameters, OpenAI Codex (Chen et al., 2021) is much larger than some of the fine-tuned approaches to DST, though with better performance and ability to adapt to new domains without re-training. Despite our advances, there are still significant errors when applying ICL for DST. As such, ICL may not necessarily be relied on in safety-critical settings. 
## Acknowledgements We thank Geetanjali Rakshit, Nilay Patel, Changmao Li, Chris Toukmaji, Rongwen Zhao, and other JLab members for insightful feedback on preliminary drafts of this work, and thank the anonymous reviewers and area chairs for their detailed and helpful feedback. The authors were supported in part by the NSF National AI Institute for Student-AI Teaming (iSAT) under grant DRL 2019805. The opinions expressed are those of the authors and do not represent views of the NSF. We are thankful for the computing resources provided by the Pacific Research Platform's Nautilus cluster, supported by the National Science Foundation under Award Numbers CNS-1730158, ACI-1540112, ACI1541349, OAC-1826967, the University of California Office of the President, and the University of California San Diego's California Institute for Telecommunications and Information Technology/Qualcomm Institute. ## References Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. arXiv:2005.14165 [cs]. ArXiv: 2005.14165. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Ultes Stefan, Ramadan Osman, and Milica Gašic. 2018. Multiwoz - a large- ´ scale multi-domain wizard-of-oz dataset for taskoriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP). Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating Large Language Models Trained on Code. ArXiv:2107.03374 [cs]. Béatrice Daille. 1994. Approche mixte pour l'extraction de terminologie : statistique lexicale et filtres linguistiques. Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyang Gao, Adarsh Kumar, Anuj Goyal, Peter Ku, and Dilek Hakkani-Tur. 2020. MultiWOZ 2.1: A Consolidated Multi-Domain Dialogue Dataset with State Corrections and State Tracking Baselines. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 422–428, Marseille, France. European Language Resources Association. Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making Pre-trained Language Models Better Fewshot Learners. *arXiv:2012.15723 [cs]*. ArXiv: 2012.15723. Jade Goldstein and Jaime Carbonell. 1998. 
Summarization: (1) using MMR for diversity- based reranking and (2) evaluating summaries. In *TIPSTER TEXT* PROGRAM PHASE III: Proceedings of a Workshop held at Baltimore, Maryland, October 13-15, 1998, pages 181–195, Baltimore, Maryland, USA. Association for Computational Linguistics. Raghav Gupta, Harrison Lee, Jeffrey Zhao, Abhinav Rastogi, Yuan Cao, and Yonghui Wu. 2022. Show, Don't Tell: Demonstrations Outperform Descriptions for Schema-Guided Task-Oriented Dialogue. arXiv:2204.04327 [cs]. ArXiv: 2204.04327. R. Hadsell, S. Chopra, and Y. LeCun. 2006. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pages 1735–1742. Ting Han, Ximing Liu, Ryuichi Takanobu, Yixin Lian, Chongxuan Huang, Dazhen Wan, Wei Peng, and Minlie Huang. 2021. MultiWOZ 2.3: A multidomain task-oriented dialogue dataset enhanced with annotation corrections and co-reference annotation. arXiv:2010.05594 [cs]. ArXiv: 2010.05594 version: 3. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The Curious Case of Neural Text Degeneration. ArXiv:1904.09751 [cs]. Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, and Luke Zettlemoyer. 2021. Surface form competition: Why the highest probability answer isn't always right. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7038–7051, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A Simple Language Model for Task-Oriented Dialogue. In Advances in Neural Information Processing Systems, volume 33, pages 20179–20191. Curran Associates, Inc. Yushi Hu, Chia-Hsuan Lee, Tianbao Xie, Tao Yu, Noah A. Smith, and Mari Ostendorf. 2022. InContext Learning for Few-Shot Dialogue State Tracking. Number: arXiv:2203.08568 arXiv:2203.08568 [cs]. Itay Levy, Ben Bogin, and Jonathan Berant. 2022. Diverse Demonstrations Improve In-context Compositional Generalization. ArXiv:2212.06800 [cs]. Shuyang Li, Jin Cao, Mukund Sridhar, Henghui Zhu, Shang-Wen Li, Wael Hamza, and Julian McAuley. 2021. Zero-shot generalization in dialog state tracking through generative question answering. In *Proceedings of the 16th Conference of the European* Chapter of the Association for Computational Linguistics: Main Volume, pages 1063–1074, Online. Association for Computational Linguistics. Zhaojiang Lin, Bing Liu, Andrea Madotto, Seungwhan Moon, Zhenpeng Zhou, Paul Crook, Zhiguang Wang, Zhou Yu, Eunjoon Cho, Rajen Subba, and Pascale Fung. 2021a. Zero-shot dialogue state tracking via cross-task transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7890–7900, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Zhaojiang Lin, Bing Liu, Seungwhan Moon, Paul Crook, Zhenpeng Zhou, Zhiguang Wang, Zhou Yu, Andrea Madotto, Eunjoon Cho, and Rajen Subba. 2021b. Leveraging Slot Descriptions for Zero-Shot Cross-Domain Dialogue StateTracking. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 5640–5648, Online. Association for Computational Linguistics. Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022. What makes good in-context examples for GPT-3? 
In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 100–114, Dublin, Ireland and Online. Association for Computational Linguistics. Andrea Madotto, Zhaojiang Lin, Genta Indra Winata, and Pascale Fung. 2021. Few-Shot Bot: Prompt-Based Learning for Dialogue Systems. arXiv:2110.08118 [cs]. ArXiv: 2110.08118. Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? arXiv:2202.12837 [cs]. ArXiv: 2202.12837 version: 1. Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2022. CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis. ArXiv:2203.13474 [cs]. Panupong Pasupat, Yuan Zhang, and Kelvin Guu. 2021. Controllable semantic parsing via retrieval augmentation. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 7683–7698, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Baolin Peng, Chunyuan Li, Jinchao Li, Shahin Shayandeh, Lars Liden, and Jianfeng Gao. 2021. SOLOIST: Building Task Bots at Scale with Transfer Learning and Machine Teaching. *arXiv:2005.05298 [cs]*. ArXiv: 2005.05298. Nitarshan Rajkumar, Raymond Li, and Dzmitry Bahdanau. 2022. Evaluating the text-to-sql capabilities of large language models. *ArXiv*, abs/2204.00498. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2021. Learning To Retrieve Prompts for InContext Learning. *arXiv:2112.08633 [cs]*. ArXiv: 2112.08633. Jamin Shin, Hangyeol Yu, Hyeongdon Moon, Andrea Madotto, and Juneyoung Park. 2022. Dialogue summaries as dialogue states (DS2), template-guided summarization for few-shot dialogue state tracking. In *Findings of the Association for Computational* Linguistics: ACL 2022, pages 3824–3846, Dublin, Ireland. Association for Computational Linguistics. Richard Shin and Benjamin Van Durme. 2022. FewShot Semantic Parsing with Language Models Trained on Code. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5417–5425, Seattle, United States. Association for Computational Linguistics. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2020. MPNet: Masked and Permuted Pre-training for Language Understanding. ArXiv:2004.09297 [cs]. Yixuan Su, Lei Shu, Elman Mansimov, Arshit Gupta, Deng Cai, Yi-An Lai, and Yi Zhang. 2021. MultiTask Pre-Training for Plug-and-Play Task-Oriented Dialogue System. *arXiv:2109.14739 [cs]*. ArXiv: 2109.14739. Praveen Venkateswaran, Evelyn Duesterwald, and Vatche Isahagian. 2022. DiSTRICT: Dialogue State Tracking with Retriever Driven In-Context Tuning. ArXiv:2212.02851 [cs]. Gengyu Wang, Cheng Qian, Lin Pan, Haode Qi, Ladislav Kunc, and Saloni Potdar. 2022. Benchmarking language-agnostic intent classification for virtual assistant platforms. In *Proceedings of the Workshop* on Multilingual Information Access (MIA), pages 69–76, Seattle, USA. Association for Computational Linguistics. Chien-Sheng Wu, Steven C.H. Hoi, and Caiming Xiong. 2020. 
Improving Limited Labeled Dialogue State Tracking with Self-Supervision. In *Findings of the* Association for Computational Linguistics: EMNLP 2020, pages 4462–4472, Online. Association for Computational Linguistics. Chien-Sheng Wu, Andrea Madotto, Ehsan HosseiniAsl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems. ArXiv:1905.08743 [cs]. Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, and Tao Yu. 2022. UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models. Number: arXiv:2201.05966 arXiv:2201.05966 [cs]. Fanghua Ye, Jarana Manotumruksa, and Emine Yilmaz. 2022a. MultiWOZ 2.4: A Multi-Domain TaskOriented Dialogue Dataset with Essential Annotation Corrections to Improve State Tracking Evaluation. In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 351–360, Edinburgh, UK. Association for Computational Linguistics. Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, Ves Stoyanov, Greg Durrett, and Ramakanth Pasunuru. 2022b. Complementary Explanations for Effective In-Context Learning. ArXiv:2211.13892 [cs]. Jeffrey Zhao, Raghav Gupta, Yuan Cao, Dian Yu, Mingqiu Wang, Harrison Lee, Abhinav Rastogi, Izhak Shafran, and Yonghui Wu. 2022. DescriptionDriven Task-Oriented Dialog Modeling. Number: arXiv:2201.08904 arXiv:2201.08904 [cs]. Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate Before Use: Improving Few-Shot Performance of Language Models. arXiv:2102.09690 [cs]. ArXiv: 2102.09690. ## A Dialogue State Normalization Real world task oriented dialogue systems can interface users with thousands or more entities, such as restaurants or hotels in MultiWOZ. Since reasoning directly over all such entities is intractable, dialogue understanding modules often first predict a surface form (e.g. a restaurant name mentioned by a user) which another module links to a canonical form (e.g. that restaurants name in a database). While dialogue state trackers evaluated on MultiWOZ do not need to interact with a database, handling of typos and unexpected surface forms is important for a realistic assessment of system performance, since predictions for a slot are evaluated on exact string match. As such, most research systems including the baselines in this paper use rule-based functions to fix typos and unexpected surface forms. We propose a robust rule-based method for effective linking of surface forms to canonical forms described below. Mapping to canonical forms We begin by first reading in canonical forms for every informable slot in the MultiWOZ system. For categorical slots, these are defined in a schema file, as released with MultiWOZ 2.1 (Eric et al., 2020). For non-categorical slots, we read in values from the database defined with the original MultiWOZ data collection (Budzianowski et al., 2018). Neither source of information contains dialogue data, only information defining the task. The taxi and train service have informable slots for departure and destination locations. In addition to the locations listed for these slots in a database (i.e. 
scheduled train journeys), we accept the name of any entity which has an address as a canonical form for these slots. For time slots we consider any time represented in "hh:mm" form as canonical. Overall, this gives us a mapping from a slot name sito a set of canonical forms C⟩for all slot names. Given a slot name si and a slot value surface form vj , we select the correct canonical form cj as follows: (1) we first generate a set of aliases for vj . These are acceptable re-phrasings of vj , such as adding the leading article "the", a domain specifying suffix such as "hotel" or "museum", or switching numbers to/from digit form (e.g. "one" ↔ "1"). We then consider a surface form vj as mapped to a canonical form cj if any of the aliases aj ∈ Aj is a fuzzy match for the canonical form cj , using the fuzz.ratio scorer in the fuzzywuzzy 8 package. We require a score of 90 or higher, and verify in the development data that no surface form maps to more than one canonical form. Choosing the most likely surface form While in a real world dialogue system we would only need to link to canonical forms, **gold dialogue state** states in MultiWOZ are themselves annotated with surface forms, not always matching the name of the entity in the database and occasionally disagreeing on an entity name. So as to not alter the evaluation process and make sure we can fairly compare to prior work, we use the training data available in each experimental setting to choose the most likely surface form for a given canonical form cj . To do this, we simply count the occurrences of each surface form in the gold labels of the training set for that experiment, and select the most frequently occurring one for cj . However for low data regimes, we often do not observe all canonical forms. Following numerous prior works, we make use of the ontology file released with the dataset (Eric et al., 2020; Ye et al., 2022a), which lists all observed surface forms for a slot name, and treat each of these as if we had seen them 10 times. This serves as a smoothing factor for selecting the most likely surface form. For the zero-shot experiments, we use only the counts derived from the ontology file, as we have no training data to observe. Overall, we find this approach to normalization to be robust when compared to other works, which rely on hard-coded fixes for commonly observed typos. Further, our normalization can be initialized with any similarly formatted system definition and data set, allowing for use in other domains. To verify that our approach to normalization is not the key factor distinguishing our performance from previous methods, we apply it to a faithful 8https://pypi.org/project/fuzzywuzzy/ re-implementation of our IC-DST Codex baseline (Hu et al., 2022) in our ablation in Table 4. ## B Prompt Examples Please see our GitHub repository for prompt examples: https://github.com/jlab-nlp/RefPyDST. ## C Implementation Details C.1 Hyperparameters All hyperparameter tuning is performed using a 10% split of the development set (100 dialogues) and manual tuning. We find that a smaller choice for p (0.7) in nucleus sampling helps performance in the zero-shot setting. Similarly, we find that in order to select a diverse set of examples, we need to scale α. We use α = 0.2 for the 1% & 5% settings, α = 0.3 for 10%, and α = 0.5 for the full setting. For the full setting, we also increase the the number of considered examples from the nearest 100 to nearest 200. Across all settings, we compute PMIβ with β = 0.4. 
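As a rough illustration of the alias-plus-fuzzy-matching step described in Appendix A, the sketch below links a predicted surface form to a canonical form. The alias rules shown are a simplified subset (digit/word switching is omitted), and the mini-database and names are invented for the example; only the 90-point threshold follows the text.

```python
from fuzzywuzzy import fuzz  # pip install fuzzywuzzy

def aliases(surface, domain_suffixes=("hotel", "restaurant", "museum")):
    """Generate acceptable re-phrasings of a predicted slot value."""
    base = {surface}
    if surface.startswith("the "):
        base.add(surface[len("the "):])
    forms = set(base)
    for b in base:
        forms.add(f"the {b}")
        for suffix in domain_suffixes:
            forms.add(f"{b} {suffix}")
    return forms

def to_canonical(surface, canonical_forms, threshold=90):
    """Map a surface form to a canonical form if any alias fuzzy-matches it."""
    for alias in aliases(surface.lower()):
        for canon in canonical_forms:
            if fuzz.ratio(alias, canon.lower()) >= threshold:
                return canon
    return surface  # leave unmatched predictions unchanged

# Toy usage with an invented mini-database of canonical hotel names.
db = ["Avalon Hotel", "Gonville Hotel"]
print(to_canonical("the avalon", db))     # Avalon Hotel
print(to_canonical("gonvile hotel", db))  # Gonville Hotel (typo tolerated)
```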
We use a robust approach to normalizing predicted values (i.e. to resolve mis-spellings, etc.) described in Appendix A. We apply this normalization to our strongest baseline (IC-DST Codex) in our ablations (§6). When computing P(y|f ′*prompt*(Ek)), we clip low token log probabilities at 5e-7 in the few-shot setting and 5e4 in the zero-shot setting, as the lack of examples leads to poorer calibration in the zero-shot setting. We also clip full-sequence log probabilities at 1e-7 in the few-shot setting and 1e-5 in the zero-shot setting. ## C.2 Retriever Fine-Tuning Details For both our methods and the re-implementation of IC-DST Codex (Hu et al., 2022) used in our ablations (§ 6), we fine-tune the retriever using the sentence-transformers package (Reimers and Gurevych, 2019), following the procedure of (Hu et al., 2022). We begin with pre-trained all-mpnet-base-v2 embedding model, which we use as a retriever with nearest neighbors search9. Each of our retrievers is trained for 15 epochs using the OnlineContrastiveLoss, which computes the contrastive loss proposed by Hadsell et al. (2006) using only hard positives and hard negatives. For each dialogue turn in the training set, we ![13_image_0.png](13_image_0.png) Table 7: MultiWOZ joint-goal accuracy in the 5% fewshot setting, ablating different retrieval methods. The full model includes both our trained retriever and diverse example decoding methods (§3.2). Top-k uses the trained retriever but decodes the top-k nearest examples instead of using our diverse decoding procedure. Random retrieval samples k examples from D*train* uniformly at random use simF1 to define positive and (hard) negative examples as the top and bottom 5% of the nearest 200 examples, respectively. ## C.3 Arguments To Codex For all methods, we make requests to OpenAI Codex with arguments engine = 'code-davinci-002', max_tokens = 120, and stop sequences of either ['-', '\n', ';', '\#'] (IC-DST Codex baseline replication) or ["\n\n", "\#", "print("] (ours). For methods which utilize nucleus sampling (Holtzman et al., 2020) with the top_p parameter. In the few-shot setting, we sample with best_of=10, keeping only n=5 most likely results. In the zero-shot setting, we increase best_of to 32. ## D Random Retrieval Ablation In Table 7, we compare our retrieval methods to random retrieval, on the 20% split of the development set used in our previous ablations. For random retrieval, we sample k examples from D*train* uniformly at random to construct Ek. We find this significantly under-performs our learned retrieval methods, whether selecting the top-k examples or using our diverse decoding approach.
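The positive/negative selection used when fine-tuning the retriever (Appendix C.2: top and bottom 5% of the 200 nearest examples under a state-change similarity) can be sketched as follows. The similarity functions are assumed to be supplied by the caller and the data structures are illustrative; this is not the training code behind the reported results.

```python
def build_contrastive_pairs(turns, state_change_sim, embed_sim,
                            n_neighbors=200, frac=0.05):
    """Pick hard positives/negatives for contrastive retriever training.

    turns:            list of (turn_text, state_change) training examples
    state_change_sim: similarity between two state changes (e.g. slot-value F1)
    embed_sim:        similarity under the current embedding model
    """
    pairs = []  # (anchor_text, other_text, label) with label 1.0 = positive
    cutoff = max(1, int(n_neighbors * frac))
    for i, (text_i, change_i) in enumerate(turns):
        # Nearest neighbours under the current embedding space (excluding self).
        neighbors = sorted(
            (j for j in range(len(turns)) if j != i),
            key=lambda j: embed_sim(text_i, turns[j][0]),
            reverse=True,
        )[:n_neighbors]
        # Rank those neighbours by how similar their *state changes* are.
        ranked = sorted(neighbors,
                        key=lambda j: state_change_sim(change_i, turns[j][1]),
                        reverse=True)
        for j in ranked[:cutoff]:   # top 5%: hard positives
            pairs.append((text_i, turns[j][0], 1.0))
        for j in ranked[-cutoff:]:  # bottom 5%: hard negatives
            pairs.append((text_i, turns[j][0], 0.0))
    return pairs
```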
wang-etal-2023-pre
Pre-Trained Language-Meaning Models for Multilingual Parsing and Generation
https://aclanthology.org/2023.findings-acl.345
Pre-trained language models (PLMs) have achieved great success in NLP and have recently been used for tasks in computational semantics. However, these tasks do not fully benefit from PLMs since meaning representations are not explicitly included. We introduce multilingual pre-trained language-meaning models based on Discourse Representation Structures (DRSs), including meaning representations besides natural language texts in the same model, and design a new strategy to reduce the gap between the pre-training and fine-tuning objectives. Since DRSs are language neutral, cross-lingual transfer learning is adopted to further improve the performance of non-English tasks. Automatic evaluation results show that our approach achieves the best performance on both the multilingual DRS parsing and DRS-to-text generation tasks. Correlation analysis between automatic metrics and human judgements on the generation task further validates the effectiveness of our model. Human inspection reveals that out-of-vocabulary tokens are the main cause of erroneous results.
## Pre-Trained Language-Meaning Models For Multilingual Parsing And Generation Chunliu Wang∗, Huiyuan Lai∗**, Malvina Nissim, Johan Bos** CLCG, University of Groningen / The Netherlands {chunliu.wang, h.lai, m.nissim, johan.bos}@rug.nl ## Abstract Pre-trained language models (PLMs) have achieved great success in NLP and have recently been used for tasks in computational semantics. However, these tasks do not fully benefit from PLMs since meaning representations are not explicitly included in the pretraining stage. We introduce *multilingual pretrained language-meaning models* based on Discourse Representation Structures (DRSs), including meaning representations besides natural language texts in the same model, and design a new strategy to reduce the gap between the pre-training and fine-tuning objectives. Since DRSs are language neutral, crosslingual transfer learning is adopted to further improve the performance of non-English tasks. Automatic evaluation results show that our approach achieves the best performance on both the multilingual DRS parsing and DRS-to-text generation tasks. Correlation analysis between automatic metrics and human judgements on the generation task further validates the effectiveness of our model. Human inspection reveals that out-of-vocabulary tokens are the main cause of erroneous results. ## 1 Introduction There are two common tasks in computational semantics: mapping a text to a meaning representation (semantic parsing), and its reverse, producing a text from a meaning representation (semantic generation). These tasks generally rely on corpora that contain texts aligned with meaning representations. While in recent years large pre-trained language models (PLMs), both monolingual as well as multilingual, have brought NLP tasks to a new level, semantic parsing and generation cannot fully benefit from them since the meaning representations are not included in PLMs explicitly. Our goal in this work is to leverage the principle of pre-trained models and explore the benefit of ∗ Equal contribution. ![0_image_0.png](0_image_0.png) multilingual semantic parsing and generation of including *in the same model* meaning representations aside from natural language. This would make it possible not only to operate multilingually, thanks to representation neutrality, but also to leverage the bidirectionality of language-meaning alignment. Figure 1 illustrates our idea. Semantic parsing and generation (in different languages) are clearly related, but traditionally they are studied and developed independently of one another, usually focusing on a single language (often English). This results in having to train separate models from scratch for each task and language, and progress has been hampered by data scarcity. This is especially true for languages other than English, where data scarcity is even more severe. Our proposal to incorporate meaning representations in PLMs and to concurrently embrace a multilingual approach breaks with this tradition yielding a twofold advantage. First, multilingual PLMs enable different languages to be represented in one universal space making it possible to benefit from cross-lingual knowledge transfer in semantic parsing and generation. Second, joining the formal and natural language representations in training makes it possible to leverage one and the same model for parsing and generation. 
For this approach to work, we need a meaning representation framework where (i) the formalism is language-neutral, (ii) there is aligned data both in terms of meaninglanguage(s), but also multilingually across different languages, and (iii) there is enough expressivity to cover for a wide range of language phenomena. Discourse Representation Structure (DRS), which satisfies our requirements well, is the formal meaning representation proposed in Discourse Representation Theory (DRT, Kamp 1981; Asher 1993; Kamp and Reyle 1993; Kadmon 2001; Kamp et al. 2011; Geurts et al. 2020). It covers a large variety of linguistic phenomena, including anaphors, presuppositions, temporal expressions and multisentence discourses and captures the semantics of negation, modals and quantification. Furthermore, DRS provides a language-neutral meaning representation: the same meaning representation associated with text that can be expressed in various languages. While Abstract Meaning Representations (AMR, Banarescu et al. 2013) have been proposed for this task, we believe DRS is more suitable because of its multi-lingual representation capability (all predicates are interpreted), its expressive power (proper treatment of negation and universal quantification), and the comparable annotated data available for multiple languages. As a first step, we consider DRS as an additional abstract language that will complement the natural languages in our pre-trained model. We take the multilingual PLM mBART (Liu et al., 2020) and further pre-train it with all of our language data, thus both the four natural languages we use as well as the language neutral meaning representations, so that the DRSs and texts are learnt in the same semantic space. As a second step, we introduce a supervised denoising training that exploits more explicitly the relationship between DRS and each corresponding text as well as between the parallel texts in the different languages; we do this combined with denoising training to reduce the gap between the pre-training and fine-tuning objectives. At this point, we have at our disposal a single multilingual language-meaning model which can then be fine-tuned for either parsing (text-to-DRS) or generation (DRS-to-text), in a monolingual or multilingual fashion. Overall, our main contributions include: (i) A novel task of multilingual DRS-to-text generation, and a framework for a mixed language-meaning modelling in a multilingual setting, serving both parsing and generation. (ii) A pre-training strategy, with self-supervised training followed by supervised training, to reduce the gap between pretraining and fine-tuning; we also employ multilingual transfer techniques to boost performance in languages other than English exploiting language neutrality in DRSs. (iii) Extensive experiments for both parsing and generation across different languages, including both automatic and human evaluation to understand how multilingual models perform.1 ## 2 Background And Related Work This work employs intensive multilingual pretraining techniques for language-meaning modelling for both parsing and generation. In this section, we briefly introduce the concept of DRS, which serves as our meaning representation tool, and relevant background and related work. Discourse Representation Structures The Parallel Meaning Bank (PMB, Abzianidze et al. 
2020) provides a large corpus of sentences annotated with DRSs in different formats for three different degrees of annotation quality: gold (completely checked manually), silver (partially checked manually) and bronze (uncorrected).2 The box-format of DRS extensively used in Discourse Representation Theory may be convenient for human readability, but it is not suitable for modelling. We thus use the Discourse Representation Graph (DRG) format provided by the PMB and its equivalent variablefree sequential notation (Figure 2). There are three types of nodes in a DRG, a directed acyclic graph: conceptual entities (represented by WordNet (Fellbaum, 1998) synsets), constants (names, quantities, and the discourse deictics speaker, hearer, and now), contexts (defining scope as a box in DRT, represented graphically as a box). Edges between entity nodes denote thematic roles (Agent, Theme, Patient, Experiencer, Stimulus, Time, etc.) and comparison operators (=, ̸=, ≺, ≤, ∼, and so on); edges between context nodes are discourse relations including negation (Figure 2). Even though the PMB resorts to the English version of Wordnet (Fellbaum, 1998), we consider a synset as an interlingual way of representing a concept, being a compound of a lemma, part of speech (noun, verb, adjective, adverb) and sense number. This means that DRSs for languages other 1Code and models are available at https://github.com/ wangchunliu/DRS-pretrained-LMM. 2See https://pmb.let.rug.nl/data.php. ![2_image_0.png](2_image_0.png) than English also employ the synsets of the English WordNet as a sort of interlingua. Only names in a DRS are language-specific - for instance, the city of London would be represented in an Italian DRS as city.n.01 Name "Londra". The sequence notation for DRGs is based on a variable-free representation of DRS (Bos, 2021). In this notation a DRS is just a sequence of conceptual entities, roles with hooks (indices) or anchors, and discourse relations. Each entity is followed by the roles it introduces. Each thematic role or comparison operator either hooks to another entity via a negative or positive index (−1 relates to the previous entity in the sequence, −2 to the one before that, +1 to the next one, and so on). Discourse relations (e.g., NEGATION, NARRATION, ELABORATION) in the sequence notation introduce new contexts (see Figure 2). We make heavily use of this sequential notation because of the many advantages it offers. For example, compared with the box-format DRS, it can be easily converted into a graph structure without the complicated conversion process introduced in previous work (Fancellu et al., 2019; Fu et al., 2020). Compared with the clause-format DRS (van Noord et al., 2018), it omits the use of variables and is therefore simpler. It can also be used directly to train a sequence-tosequence (seq2seq) neural model. Text-to-DRS Parsing In the traditional efforts for DRS parsing, it can be roughly divided into two categories, namely rule-based and neural networkbased methods. Regarding rule-based methods, Boxer (Bos, 2008) is a classic system based on rules and statistical methods. Recently, Poelman et al. (2022) propose a multilingual DRS parser leveraging existing off-the-shelf Universal Dependency parsers, it can achieve similar or even better performances than BERT-based models. 
Indeed, neural models have become the most popular methods in this field and usually achieve the best performance (van Noord et al., 2018; Liu et al., 2019b; Evang, 2019; van Noord et al., 2019, 2020a; Wang et al., 2021b). In addition to the seq2seq models above, there are two lines focusing on tree-based approaches (Liu et al., 2018, 2019a) and graphbased approaches (Fancellu et al., 2019; Fu et al., 2020), where Fancellu et al. (2019) is the first attempt at multilingual DRS parsing. Most of the above works train neural models from scratch, and some make use of PLMs, but the models do not contain meaning representations explicitly during pre-training. Therefore, we aim to leverage the principle of pre-trained models and incorporate both meaning representations and natural language into one model. This, hopefully, can enable different languages to be represented explicitly in one universal space through pre-training, and result in one model for parsing and generation. DRS-to-Text Generation Compared to DRS parsing, DRS-to-text generation has only recently drawn interest from NLP practitioners (Basile and Bos, 2011; Narayan and Gardent, 2014; Basile, 2015). Similar to DRS parsing, prior work on the generation task can be classified into rulebased methods (Basile and Bos, 2011) and neural network-based methods (Liu et al., 2021; Wang et al., 2021a). All these works focus on English only. Here, we take the first step towards a multilingual generation task and provide a corresponding benchmark, leveraging the representation neutrality in DRS and the bidirectionality of languagemeaning alignment in different languages. Multilingual Pre-Training In recent years, multilingual PLMs have brought NLP to a new era (Liu et al., 2020; Qiu et al., 2020; Xue et al., 2021). ![3_image_0.png](3_image_0.png) They are pre-trained on large-scale unlabeled data in a self-supervised way, which enable different languages to be represented in one semantic space. Therefore, models fine-tuned on high-resource languages can thus transfer knowledge to other lowerresource languages for various tasks, such as Natural Language Inference (Conneau et al., 2018), Question Answering (Clark et al., 2020), Machine Translation (Liu et al., 2020), and formality transfer (Lai et al., 2022b). Generally, PLMs are pre-trained in a selfsupervised manner, which enforces models to reconstruct corrupted text based on denoising objectives (Liu et al., 2020). However, recent work shows that self-supervised pre-training may introduce noisy information that affects the performance of downstream tasks (Feng et al., 2022; Tang et al., 2022). Moreover, it has been shown that supervised pre-training can achieve superior performance compared to the self-supervised approaches (Conneau and Lample, 2019; Tang et al., 2022). In terms of computational semantics, Bai et al. (2022) propose a monolingual framework based on AMR, where the pre-training and fine-tuning share the same data format to facilitate knowledge transfer between them. Inspired by these works, we model meaning representations and natural language jointly leveraging the principle of PLMs in a multilingual fashion, and propose a pre-training strategy to make the pre-training objectives close to target downstream tasks by exploiting the relationship between DRS and its corresponding texts in different languages. 
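To make the index-resolution mechanism of the sequential DRS notation described above concrete, the toy sketch below turns relative indices into explicit (head, role, dependent) triples. The token layout is a deliberately simplified stand-in for the actual PMB serialization: constants, anchors, and discourse relations such as NEGATION are omitted.

```python
def resolve_sequence(entries):
    """Resolve relative indices in a variable-free DRS sequence into triples.

    entries: list of (concept, [(role, offset), ...]) where offset -1 points
    to the previous conceptual entity in the sequence, +1 to the next, etc.
    """
    triples = []
    for i, (concept, roles) in enumerate(entries):
        for role, offset in roles:
            target_concept = entries[i + offset][0]
            triples.append((concept, role, target_concept))
    return triples

# Toy sequence for "Tom ordered coffee" (names and negation left out):
sequence = [
    ("male.n.02", []),                               # the person
    ("order.v.01", [("Agent", -1), ("Theme", +1)]),  # the ordering event
    ("coffee.n.01", []),
]
for triple in resolve_sequence(sequence):
    print(triple)
# ('order.v.01', 'Agent', 'male.n.02')
# ('order.v.01', 'Theme', 'coffee.n.01')
```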
## 3 Method

We use mBART as our backbone to jointly model natural language and meaning representation in a multilingual manner, thereby enabling the DRS representations and the texts to be learnt in the same semantic space. This one model is then fine-tuned for parsing and generation.

## 3.1 mBART

mBART is a pre-trained denoising seq2seq model based on the Transformer architecture (Vaswani et al., 2017), derived from the monolingual model BART (Lewis et al., 2020). It is pre-trained to reconstruct the original text from a corrupted version (e.g. token masking). The model then takes the original sequence as input and maps it into the target sequence during fine-tuning and inference on downstream tasks. The novelty of our approach relies on the fact that the sequential DRS format allows for both text-to-DRS parsing and DRS-to-text generation to be performed in a seq2seq way (see Figure 4).

For more efficient training, we filter out the unused tokens from mBART's vocabulary after tokenizing the training corpora (including texts and DRSs), which results in a shared vocabulary of 39,981 tokens. Besides, we add a special token <drs> as a prefix for DRSs, which is used to distinguish DRSs from natural languages and guide models to produce DRSs as outputs of parsing.

## 3.2 Multilingual Language-Meaning Models

We introduce a pre-training strategy to model natural language and meaning representation on top of mBART, including (i) basic denoising training and (ii) supervised denoising training.

**Basic Denoising Training** Since the meaning representations are not included in vanilla mBART, we perform a further pre-training to incorporate DRSs into the model and learn the universal representation. Specifically, we combine all the training data of multiple languages: D = {D1, ..., Dn} where each Di is a collection of data in a language. Language code <lang> and DRS code <drs> are used as prefixes for text and DRS sequences, respectively, to differentiate them from each other. As shown in Figure 3 (I: B-PT block), we follow Liu et al. (2020) to conduct a denoising training, which aims to reconstruct the original sequence from a version corrupted with a noise function. Formally, this denoising training can be formulated as:

$$L_{\theta}=-\sum\log P(T\mid g(T);\theta)\qquad(1)$$

where θ are the parameters of mBART and g is the noise function that masks 35% of tokens in each sequence at random.
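A schematic, token-level version of the noise function g is shown below. The actual corruption operates on subword units inside the mBART training pipeline; the <mask> symbol and whitespace tokenization here are simplifications, while the 35% masking rate follows the text.

```python
import random

def g(sequence, mask_rate=0.35, mask_token="<mask>", seed=0):
    """Corrupt a token sequence by masking roughly 35% of its tokens at random."""
    rng = random.Random(seed)
    tokens = sequence.split()
    n_mask = max(1, round(mask_rate * len(tokens)))
    for i in rng.sample(range(len(tokens)), n_mask):
        tokens[i] = mask_token
    return " ".join(tokens)

source = "<en> Tom did not order any coffee today"
corrupted = g(source)
# The denoising objective trains the model to map `corrupted` back to `source`;
# the same recipe is applied to <drs>-prefixed DRS sequences.
print(corrupted)
```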
**Supervised Denoising Training** Although the basic denoising training makes the model learn the representations for text and DRS in a universal space, during this process the specific relationship between a given DRS and its corresponding texts is not learnt. There is thus a gap between the denoising pre-training and the fine-tuning for the text-to-DRS and DRS-to-text downstream tasks. To bridge this gap, we perform a supervised denoising training using all parallel language-meaning pairs. This enables our model to learn the transformation connection between text and DRS after the first step of basic denoising training.

As shown in Figure 3 (II: S-PT block), we concatenate the text sequences with the corresponding corrupted DRS sequences and conduct denoising training to reconstruct the original DRS in the text-to-DRS direction, and vice versa. Inspired by Wang et al. (2022), who show that retrieving and concatenating training instances relevant to the input can lead to significant gains on language generation tasks, we also perform an English-centric cross-lingual denoising training: English text (or DRS) sequences are concatenated with their corresponding corrupted non-English text (or DRS) sequences and then used for supervised denoising training (and vice versa).

## 3.3 Parsing and Generation

After denoising pre-training, the single model we have obtained can be fine-tuned with DRS-text pairs for the downstream DRS parsing and DRS-to-text generation tasks. As shown in Figure 3 (III: FT block), given a sequence d = {d1, · · · , dn} of DRS and its corresponding text sequence t = {t1, · · · , tm}, taking DRS-to-text generation as an example, its seq2seq training can be formulated as follows:

$$p_{\theta}(t|d)=\prod_{i=1}^{m}p_{\theta}(t_{i}|t_{1,\ldots,i-1};d)\qquad(2)$$

In the first step (F-FT), we use the multilingual DRS-text pairs from dataset D since the same meaning representation can be expressed in various languages. We expect that this process can allow the model to further benefit from knowledge transfer across different languages. After that, the model can be finally fine-tuned on silver and gold data in either a multilingual or monolingual manner (S-FT). Similar to previous work (van Noord et al., 2020b; Wang et al., 2021b), we first train the model on gold + non-gold data, and then on gold + silver data.

## 4 Experiments

## 4.1 Training Details

For all experiments we use PMB release 4.0.0, which contains texts in English, German, Dutch and Italian for three levels of annotation (gold, silver and bronze). Table 1 shows the statistics for the various languages, where each counted instance is a sentence and its corresponding DRS. (A small portion of DRSs that cannot be converted to DRGs were removed from the data set.)

| Data type | Gold | | | Silver | Bronze |
|-------------|--------|-------|-------|---------|---------|
| Lang | Train | Dev | Test | Train | Train |
| English | 8,407 | 1,147 | 1,042 | 119,002 | 148,164 |
| German | 1,730 | 552 | 545 | 5,986 | 140,654 |
| Italian | 682 | 540 | 459 | 3,995 | 98,382 |
| Dutch | 535 | 435 | 490 | 1,363 | 26,433 |

Table 1: Document statistics for PMB release 4.0.0.

Table 2 reports the detailed hyper-parameters in our experiments. All experiments are implemented atop the Transformers library (Wolf et al., 2020). We use mBART-50 (Tang et al., 2020) as our base model, and train our models with batch size 32, accumulating gradients over 8 update steps in all training except for monolingual fine-tuning, where it is 1. We use the Adam optimiser (Kingma and Ba, 2015) with a polynomial learning rate decay. Additionally, we apply early stopping (patience 5) if validation performance does not improve. Due to the small size of the Dutch dataset, we upsample them by replication, obtaining training sets of 100,000 DRS-text pairs in both pre-training and multilingual fine-tuning.

| Hyper-Parameter | B-PT | S-PT | F-FT | S-FT |
|-------------------|--------|--------|--------|--------|
| Batch size | 32 | 32 | 32 | 32 |
| Update steps | 8 | 8 | 8 | 1 |
| Max learning rate | 1e-4 | 1e-5 | 5e-5 | 1e-5 |
| Min learning rate | 1e-5 | 1e-5 | 1e-5 | 1e-5 |
| Warmup updates | 3,000 | 0 | 3,000 | 0 |
| Max decay steps | 30,000 | 0 | 30,000 | 0 |

Table 2: Detailed hyper-parameters in our experiments.
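As a small illustration of how one DRS-text pair feeds both directions of fine-tuning with the <drs> and <lang> prefixes described in Section 3: the sketch below builds the two (source, target) pairs for parsing and generation. The exact serialization, language codes, and the (shortened) toy DRS sequence are assumptions for readability.

```python
def make_finetuning_pairs(drs_sequence, text, lang_code):
    """Build seq2seq training pairs for parsing and generation from one example."""
    drs_side = f"<drs> {drs_sequence}"
    text_side = f"<{lang_code}> {text}"
    return [
        (text_side, drs_side),  # text-to-DRS parsing
        (drs_side, text_side),  # DRS-to-text generation
    ]

# Toy example with a shortened sequential DRS.
pairs = make_finetuning_pairs(
    'person.n.01 Name "Anna" sleep.v.01 Agent -1',
    "Anna is sleeping.",
    "en",
)
for source, target in pairs:
    print(source, "=>", target)
```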
## 4.2 Model Settings To show the effects of each training stage in our framework, we conduct extensive experiments with different settings, yielding five different models. M1 (FT mBART): fine-tuning vanilla mBART with monolingual data for each task; M2 (M1 + B-PT): including basic denoising pre-training before monolingual fine-tuning; M3 (M2 + S-PT): including supervised pre-training before monolingual finetuning, after basic pre-training; M4 (M3 + F-FT): based on M3, and includes first multilingual finetuning (F-FT) before monolingual fine-tuning (SFT); M5 (monolithic model): based on M4, but using multilingual fine-tuning for S-FT and combining parsing and generation. For comparison with our models, we also include two parsing systems from Poelman et al. (2022) which use the same DRS data format as we do: (i) UD-Boxer is a rule-based DRS parser based on Universal Dependencies; (ii) Neural Boxer is a seq2seq semantic parser based on Bi-LSTM with mBERT embeddings. ## 4.3 Automatic Evaluation For **text-to-DRS parsing**, we follow recent work by Poelman et al. (2022) to convert the linearized DRS into Penman format (Kasper, 1989), as shown in Figure 4. We then adopt Smatch, a standard evaluation tool used in AMR parsing, to compute overlap between system output and gold standard by calculating the F-score of matching triples (Cai and Knight, 2013). To assess **DRS-to-text generation**, we use three automatic metrics commonly used in text generation: n-gram-based BLEU (Papineni et al., 2002) and METEOR (Lavie and Agarwal, 2007), as well as a neural-based COMET3(Rei et al., 2020). ## 4.4 Automatic Evaluation Results Table 3 reports the results of DRS parsing in different languages. For English, the performances of the different models are pretty close to each other, with M2 outperforming the others with basic pre-training and monolingual fine-tuning. The models show higher scores for English compared to the other three languages, most likely because the 3We use model wmt-large-da-estimator-1719. ![5_image_0.png](5_image_0.png) dataset contains a large amount of gold and silver DRS-text pairs in English, sufficient to fine-tune mBART for parsing without further pre-training. When looking at the other three languages, we observe performance improvements with the use of different training strategies. Models pre-trained with the basic denoising task produce better results in German, the same F1-score in Italian, and lower results in Dutch, indicating a gap between pre-training and fine-tuning. This gap is bridged by our supervised pre-training strategy, models with the supervised pre-training (M3) yield steady improvements compared to M1 and M2. For M4 fine-tuned with multilingual data, they can further benefit from cross-lingual knowledge transfer and achieve higher scores. It is interesting to see that our monolithic model M5 performs best, thanks to the language-neutral meaning representation. Compared to existing models UD-Boxer and Neural Boxer, all of our models, especially our main model (M5), achieve higher F1-scores across the board, showing significant improvements in four languages. 
## 4.4 Automatic Evaluation Results

Table 3 reports the results of DRS parsing in the different languages. For English, the performances of the different models are quite close to each other, with M2, which adds basic pre-training before monolingual fine-tuning, outperforming the others. The models show higher scores for English compared to the other three languages, most likely because the dataset contains a large amount of gold and silver DRS-text pairs in English, sufficient to fine-tune mBART for parsing without further pre-training. When looking at the other three languages, we observe performance improvements with the use of the different training strategies. Models pre-trained with the basic denoising task produce better results in German, the same F1-score in Italian, and lower results in Dutch, indicating a gap between pre-training and fine-tuning. This gap is bridged by our supervised pre-training strategy: models with supervised pre-training (M3) yield steady improvements compared to M1 and M2. M4, fine-tuned with multilingual data, can further benefit from cross-lingual knowledge transfer and achieves higher scores. It is interesting to see that our monolithic model M5 performs best, thanks to the language-neutral meaning representation. Compared to the existing models UD-Boxer and Neural Boxer, all of our models, especially our main model (M5), achieve higher F1-scores across the board, showing significant improvements in all four languages.

Our models perform worse than UD-Boxer in terms of ill-formedness rate, i.e., the proportion of generated DRSs which cannot be converted into a graph structure (and receive an F-score of 0). It is perhaps not surprising that rule-based parsers outperform neural-based parsers in generating well-formed DRSs: the UD-Boxer parser is based on Universal Dependencies and adds manual transformation rules to finally obtain the linearized data from the graph structure, and the evaluation process is equivalent to a reverse transformation process. It is worth noting that most of these errors can be corrected by post-processing (see §5.3). We also observe that our models have lower ERR rates than Neural Boxer, except for Italian. A possible reason for this is that the multilingual training may introduce some noise.

For the generation task, we observe similar trends to parsing, as shown in Table 4. Concretely, our proposed supervised denoising pre-training and multilingual fine-tuning strategies substantially boost the performance, especially for the non-English languages. Model M4 has the highest scores on all evaluation metrics across the three non-English languages, an observation that differs slightly from that for the parsing task. We believe the reason is that the output tokens of the generation task are language-particular rather than language-neutral, compared to the parsing task. Therefore, for the generation task, the results of fine-tuning with monolingual data are better than those with multilingual data.

| Model | EN F1 | EN ERR | DE F1 | DE ERR | IT F1 | IT ERR | NL F1 | NL ERR |
|---|---|---|---|---|---|---|---|---|
| M1: FT mBART | 94.6 | 0.3 | 90.3 | 0.4 | 90.7 | 0.9 | 86.9 | 1.2 |
| M2: M1 + B-PT | 94.7 | 0.3 | 90.6 | 0.8 | 90.7 | 1.0 | 85.9 | 2.4 |
| M3: M2 + S-PT | 94.6 | 0.3 | 91.3 | 0.9 | 90.9 | 0.7 | 88.2 | 1.6 |
| M4: M3 + F-FT | 94.5 | 0.4 | 92.0 | 0.8 | 92.8 | 0.2 | 92.1 | 0.2 |
| M5: monolithic model | 94.0 | 0.2 | 92.0 | 0.4 | 93.1 | 0.2 | 92.6 | 0.6 |
| UD-Boxer (Poelman et al., 2022) | 81.8 | 0.0 | 77.5 | 0.0 | 79.1 | 0.0 | 75.8 | 0.0 |
| Neural Boxer (Poelman et al., 2022) | 92.5 | 2.3 | 74.7 | 0.5 | 75.4 | 0.0 | 71.6 | 1.0 |

Table 3: Evaluation results for text-to-DRS parsing on the test set of the four languages in the PMB 4.0.0. Notes: (i) ERR is the ill-formed rate (%) of generated DRSs that cannot be transformed into a graph structure; (ii) bold numbers indicate the best systems for each language.

| Model | EN B | EN M | EN C | DE B | DE M | DE C | IT B | IT M | IT C | NL B | NL M | NL C |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| M1: FT mBART | **74.5** | 54.7 | 102.8 | 45.1 | 35.1 | 54.3 | 44.3 | 34.4 | 58.2 | 34.9 | 29.5 | 31.3 |
| M2: M1 + B-PT | 73.2 | 54.0 | 101.5 | 45.0 | 34.8 | 56.8 | 44.2 | 34.2 | 59.7 | 38.6 | 31.8 | 44.4 |
| M3: M2 + S-PT | 74.2 | 54.6 | 102.4 | 52.1 | 38.4 | 65.3 | 49.3 | 36.6 | 72.6 | 47.8 | 38.6 | 59.9 |
| M4: M3 + F-FT | **74.5** | 54.8 | 102.4 | 56.3 | 40.8 | 76.7 | 58.0 | 41.1 | 85.5 | **60.8** | **43.4** | **79.8** |
| M5: monolithic model | 74.5 | 55.0 | 102.9 | **56.3** | **40.8** | 75.9 | 56.3 | 40.1 | 85.0 | 59.0 | 42.6 | 76.7 |

Table 4: Automatic evaluation results for DRS-to-text generation on the test sets of the four languages in the PMB 4.0.0 (B = BLEU; M = METEOR; C = COMET).
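The ERR column of Table 3 counts outputs that cannot be converted into a graph. Assuming the outputs have already been converted to Penman notation as in §4.3 and that the third-party penman Python package is used (an assumption on our part; the paper does not name its implementation), such a check could look like this:

```python
import penman  # third-party package for reading Penman graph notation

def ill_formed_rate(penman_strings):
    """Fraction of outputs that cannot be decoded into a graph structure."""
    failures = 0
    for s in penman_strings:
        try:
            penman.decode(s)  # raises penman.DecodeError on malformed input
        except Exception:
            failures += 1
    return failures / len(penman_strings) if penman_strings else 0.0

outputs = ["(b1 / person)", "(b1 / person"]  # the second string is malformed
print(ill_formed_rate(outputs))  # 0.5
```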
## 5 Analysis

## 5.1 Correlation Analysis

While human evaluation is seen as the most reliable assessment in language generation tasks, due to its costs and availability it cannot be easily used during iterative development. We included human evaluation early on in our experiments to check the correlation of human judgement with automatic metrics, so that the latter could be more safely used in the following stages of our experiments (see Appendix A.1 for the details on human evaluation).

| Lang | BLEU | METEOR | COMET |
|--------|--------|----------|---------|
| EN | -0.098 | -0.016 | 0.775 |
| DE | 0.275 | 0.471 | 0.687 |
| IT | 0.122 | 0.241 | 0.768 |
| NL | 0.195 | 0.386 | 0.686 |

Table 5 shows the sentence-level biserial correlations between automatic metrics and expert judgments on meaning preservation. Since the biserial correlation coefficient is a statistic used to assess the degree of relationship between an artificially created dichotomous nominal scale and an interval scale, it is naturally applicable to our experiments, as the generated text is rated by annotators with 0 or 1. BLEU correlates particularly poorly with human judgments, even showing a negative correlation in English. METEOR also shows a negative correlation with human ratings in English, while it has higher scores than BLEU in the non-English languages. Unsurprisingly, we see that COMET has high correlations with human judgements, which is consistent with previous work on other tasks (Rei et al., 2020; Lai et al., 2022a). This observation, therefore, confirms that COMET can be a more reliable metric for DRS-to-text generation and for comparisons between different models.
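Since the expert ratings are binary (0/1) and the metric scores are continuous, the reported biserial correlation can be approximated with a point-biserial coefficient. A minimal sketch with SciPy, using made-up toy values rather than the study's data:

```python
from scipy.stats import pointbiserialr

# Toy data: binary human judgements of meaning preservation (0/1)
# and continuous sentence-level metric scores for the same outputs.
human = [1, 0, 1, 1, 0, 1, 0, 1]
comet = [0.81, 0.22, 0.74, 0.69, 0.31, 0.88, 0.40, 0.77]

r, p_value = pointbiserialr(human, comet)
print(f"point-biserial r = {r:.3f} (p = {p_value:.3f})")
```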
## 5.2 Development Loss

To better understand the training strategies and components in our proposed framework, we examine the loss curves of the different monolingual fine-tuned models on the dev sets of the different languages (Figure 5).

For DRS parsing, the convergence process of the original mBART (M1) is slow. After adding the different training strategies, the models have a significantly faster and better convergence process. Specifically, we observe that basic denoising pre-training makes the model learn the representations for DRSs and texts in the same semantic space, but there is still a gap between the basic denoising task and the downstream task. This gap is then eliminated by supervised pre-training, as the loss of model M3 is quite flat from the start and is lower than that of M2. Lastly, we see that multilingual fine-tuning consistently helps the model, and it eventually converges fast and well. This suggests that this strategy helps models benefit from cross-lingual knowledge transfer. We observe similar trends for the DRS-to-text task, with a large fluctuation in the convergence process without pre-training. Overall, the loss curves of M4 are lower than those of the other models.

## 5.3 Manual Inspection

In Table 6 we report example DRS outputs from our main model (M5) which differ from the gold standard. We summarize two types of ill-formed DRSs in linear format that cannot be converted to graph structures (and hence are not interpretable). When more tokens are produced than expected, the result is often a sequence of tokens that does not correspond to the graph. For instance, spaces are included where they shouldn't be, or missing spaces cause subsequent tokens to be erroneously connected to each other. These syntactic error types occur in a very limited number of cases and can be well resolved by post-processing.

We focus on the types of errors that affect meaning. We show five typical semantic error types at the bottom of Table 6 that affect the number of matching triples and may lead to a meaning different from the gold data. For example, out-of-vocabulary (OOV) words may cause the parser to generate concepts different from the gold ones, yielding incorrect meanings. Also, incorrect roles lead to changes in meaning, and wrong indices produce different predicate-argument structures. Another problem is when the parser fails to generate a crucial token. In contrast, the parser may hallucinate tokens, which may be added in unexpected places.

| Type | Subtype | Output Meaning | Gold Meaning |
|---|---|---|---|
| Ill-formed | Extra Space | geological_formation.n.01 Name " Himalayas" | geological_formation.n.01 Name "Himalayas" |
| Ill-formed | Extra Space | driving_ licence.n.01 Owner speaker | driving_licence.n.01 Owner speaker |
| Ill-formed | Missing Space | person.n.01 Role +1technician.n.01 | person.n.01 Role +1 engineer.n.01 |
| Meaning | Wrong Concept | overtreibe.v.01 Patient -1 Time +1 | exaggerate.v.01 Agent -1 Time +1 |
| Meaning | Wrong Role | blind.a.01 Experiencer -3 Time -2 | blind.a.01 Theme -3 Time -2 |
| Meaning | Wrong Index | female.n.02 Name "Maria" EQU +1 | female.n.02 Name "Maria" EQU now |
| Meaning | Missing Token | young.a.01 AttributeOf +1 person.n.01 | young.a.01 Value + person.n.01 Attribute -1 |
| Meaning | Extra Token | more_and_more.a.01 Degree +1 more.r.01 | more_and_more.r.01 |

Table 6: Example outputs produced by our best model (M5) for the parsing task.

In Table 7, we show some examples of DRS-to-text generation which differ from the gold output for various reasons. The model might produce a word which does not convey the intended meaning. For example, in the IT example, the word "male" (EN: "bad") is generated in place of "maschio" (EN: "male"), probably due to the homography of the words across the two languages, without any semantic correspondence. Another example of non-matching is grammatical agreement, which can be due to some underspecified phenomena in DRSs. We also identify three more types: (1) the generated text has redundant information; (2) the generated text lacks some information; (3) the generated words are synonymous with those in the gold references. These types generally degrade automatic evaluation results but may not affect human evaluation. The generation of these cases is usually random and occurs in all models. Part of it is probably due to the OOV problem, and the rest is mainly related to the training data itself, because the same meaning representation can be paired with multiple expressions.

| Reason | Lang | Generated Text | Gold text |
|---|---|---|---|
| Semantic | IT | Peter sta comprando un gatto male. | Peter sta comprando un gatto maschio. |
| Grammaticality | NL | Tom foldt zijn kleren. | Tom vouwt zijn kleren op. |
| Extra Material | EN | My flight arrived exactly at 2:30 p.m. | My flight arrived at 2:30 p.m. |
| Missing Material | EN | The express arrives at 6:30. | The express arrives at 6:30 p.m. |
| Word Choice | NL | Charles de Gaulle stierf in 1970. | Charles de Gaulle overleed in 1970. |

Table 7: Examples of DRS-to-text generation outputs that differ from the gold references.
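As a concrete illustration of the post-processing mentioned above, the sketch below repairs the two space-related ill-formed subtypes from Table 6 with simple regular expressions. The heuristics, regular expressions, and function name are our own illustration and are not the paper's actual post-processing code.

```python
import re

def repair_spacing(linearized_drs: str) -> str:
    """Heuristically fix the two ill-formed subtypes from Table 6.

    1) Extra space: remove a space right after an opening quote or inside
       a concept name split around an underscore.
    2) Missing space: insert a space between an index like +1/-2 and a
       directly attached concept token.
    """
    s = re.sub(r'(?<=\s)"\s+', '"', linearized_drs)   # Name " Himalayas" -> Name "Himalayas"
    s = re.sub(r'_\s+(?=\w)', '_', s)                 # driving_ licence -> driving_licence
    s = re.sub(r'([+-]\d)(?=[A-Za-z])', r'\1 ', s)    # +1technician -> +1 technician
    return s

print(repair_spacing('driving_ licence.n.01 Owner speaker'))
print(repair_spacing('person.n.01 Role +1technician.n.01'))
```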
## 6 Conclusion And Future Work

Using DRS-based meaning representations as an additional language alongside four different natural languages yields a novel multilingual pre-trained language-meaning model that can be fine-tuned for both semantic parsing and generation from formal meaning representations. By doing so, we achieve state-of-the-art performance on both tasks. Exploiting parallel data and the language neutrality of DRSs is key to boosting performance in lesser-resourced languages. We believe our approach can still benefit from improvements in its current form, but it also opens up further research on language-meaning models. Regarding future modelling directions, the contribution of graph structures should be further explored. Specifically, it could be possible to leverage the graph structure to mask tokens in a more meaningful and principled way, designing a denoising training that uses the rich linguistic phenomena expressed by DRSs.

## Limitations

A large part of the dataset that we used in our experiments consists of semantic annotations for relatively short sentences (as the examples show), so we do not really know how our multilingual pre-trained language-meaning modelling for DRS parsing and DRS-to-text generation will work on longer sentences. In our experiments, we converted the meaning representations into the sequence notation, modelled them together with natural language texts in a seq2seq manner, and masked tokens in the DRS sequence randomly. Perhaps a more natural way is to model DRSs as graph structures and let training objectives directly utilize the structural information of the DRS. A graph structure would also eliminate the explicit order of concepts that is present in the sequence notation. Although we say that the DRSs are language-neutral, the concepts in the vocabulary are based on the English WordNet. As a result, it might be the case that non-English words do not have a direct correspondence to an appropriate synset, but the number of such cases is likely very small. The only (trivial) language dependence in DRSs is the literal occurrence of proper names in cases where they differ across languages (e.g., "London", "Londen", or "Londra"). One way to remedy this is to add alternative spellings to the meaning representation to make it completely interlingual.

## Acknowledgments

This work was funded by the NWO-VICI grant "Lost in Translation—Found in Meaning" (288-89003) and the China Scholarship Council (CSC). We thank the anonymous reviewers of ACL 2023 for their insightful comments. We would also like to thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster.

## References

Lasha Abzianidze, Rik Van Noord, Chunliu Wang, and Johan Bos. 2020. The parallel meaning bank: A framework for semantically annotating multiple languages. *AMIM*, 25(2):45–60.

Nicholas Asher. 1993. *Reference to Abstract Objects in Discourse*. Kluwer Academic Publishers.

Xuefeng Bai, Yulong Chen, and Yue Zhang. 2022. Graph pre-training for AMR parsing and generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), page todo, Online. Association for Computational Linguistics.
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In *Proceedings of the 7th Linguistic* Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics. Valerio Basile. 2015. From logic to language: Natural language generation from logical forms. Ph.D. thesis, University of Groningen. Valerio Basile and Johan Bos. 2011. Towards generating text from discourse representation structures. In Proceedings of the 13th European Workshop on Natural Language Generation, pages 145–150, Nancy, France. Association for Computational Linguistics. Johan Bos. 2008. Wide-coverage semantic analysis with Boxer. In Semantics in Text Processing. STEP 2008 Conference Proceedings, pages 277–286. College Publications. Johan Bos. 2021. Variable-free discourse representation structures. *Semantics Archive*. Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In *Proceedings of the 51st Annual Meeting of the Association* for Computational Linguistics (Volume 2: Short Papers), pages 748–752, Sofia, Bulgaria. Association for Computational Linguistics. Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. *Transactions of the Association for Computational Linguistics*, 8:454–470. Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc. Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating crosslingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475–2485, Brussels, Belgium. Association for Computational Linguistics. Kilian Evang. 2019. Transition-based DRS parsing using stack-LSTMs. In *Proceedings of the IWCS* Shared Task on Semantic Parsing, Gothenburg, Sweden. Association for Computational Linguistics. Federico Fancellu, Sorcha Gilroy, Adam Lopez, and Mirella Lapata. 2019. Semantic graph parsing with recurrent neural network DAG grammars. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language* Processing (EMNLP-IJCNLP), pages 2769–2778, Hong Kong, China. Association for Computational Linguistics. Christiane Fellbaum. 1998. Wordnet: An electronic lexical database. The MIT Press, Cambridge, Ma., USA. Yutong Feng, Jianwen Jiang, Mingqian Tang, Rong Jin, and Yue Gao. 2022. Rethinking supervised pretraining for better downstream transferring. In *International Conference on Learning Representations*. Qiankun Fu, Yue Zhang, Jiangming Liu, and Meishan Zhang. 2020. DRTS parsing with structure-aware encoding and decoding. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 6818–6828, Online. Association for Computational Linguistics. Bart Geurts, David I. Beaver, and Emar Maier. 2020. Discourse Representation Theory. In Edward N. Zalta, editor, *The Stanford Encyclopedia of Philosophy*, spring 2020 edition. Metaphysics Research Lab, Stanford University. Nirit Kadmon. 
2001. *Formal Pragmatics*. Blackwell. H. Kamp. 1981. A theory of truth and semantic representation, 277-322, jag groenendijk, tmv janssen and mbj stokhof, eds. In Jeroen Groenendijk, editor, *Formal Methods in the Study of Language*. U of Amsterdam. Hans Kamp and U. Reyle. 1993. From discourse to logic: Introduction to model theoretic semantics of natural language, formal logic and discourse representation theory. *Language*, 71(4). Hans Kamp, Josef van Genabith, and Uwe Reyle. 2011. Discourse Representation Theory. In Dov M. Gabbay and Franz Guenthner, editors, *Handbook of Philosophical Logic*, volume 15, pages 125–394. Elsevier, MIT. Robert T. Kasper. 1989. A flexible interface for linking applications to Penman's sentence generator. In Speech and Natural Language: Proceedings of a Workshop Held at Philadelphia, Pennsylvania, February 21-23, 1989. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations. Huiyuan Lai, Jiali Mao, Antonio Toral, and Malvina Nissim. 2022a. Human judgement as a compass to navigate automatic metrics for formality transfer. In Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval), pages 102–115, Dublin, Ireland. Association for Computational Linguistics. Huiyuan Lai, Antonio Toral, and Malvina Nissim. 2022b. Multilingual pre-training with language and task adaptation for multilingual text style transfer. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 262–271, Dublin, Ireland. Association for Computational Linguistics. Alon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, StatMT '07, page 228–231, USA. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Jiangming Liu, Shay B. Cohen, and Mirella Lapata. 2018. Discourse representation structure parsing. In *Proceedings of the 56th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 429–439, Melbourne, Australia. Association for Computational Linguistics. Jiangming Liu, Shay B. Cohen, and Mirella Lapata. 2019a. Discourse representation parsing for sentences and documents. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6248–6262, Florence, Italy. Association for Computational Linguistics. Jiangming Liu, Shay B. Cohen, and Mirella Lapata. 2019b. Discourse representation structure parsing with recurrent neural networks and the transformer model. In *Proceedings of the IWCS Shared Task on* Semantic Parsing, Gothenburg, Sweden. Association for Computational Linguistics. Jiangming Liu, Shay B. Cohen, and Mirella Lapata. 2021. Text generation from discourse representation structures. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 397–415, Online. Association for Computational Linguistics. 
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742. Shashi Narayan and Claire Gardent. 2014. Hybrid simplification using deep semantics and machine translation. In *Proceedings of the 52nd Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 435–445, Baltimore, Maryland. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meetings of the ACL, pages 311–318. Wessel Poelman, Rik van Noord, and Johan Bos. 2022. Transparent semantic parsing with Universal Dependencies using graph transformations. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4186–4192, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. *Science China Technological Sciences*, page 1872–1897. Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics. Tianyi Tang, Junyi Li, Wayne Xin Zhao, and Ji-Rong Wen. 2022. MVP: Multi-task supervised pre-training for natural language generation. *arXiv preprint,* arXiv: 2206.12131v1. Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning. *arXiv* preprint, arXiv: 2008.00401v1. Rik van Noord, Lasha Abzianidze, Antonio Toral, and Johan Bos. 2018. Exploring neural methods for parsing discourse representation structures. Transactions of the Association for Computational Linguistics, 6:619–633. Rik van Noord, Antonio Toral, and Johan Bos. 2019. Linguistic information in neural semantic parsing with multiple encoders. In Proceedings of the 13th International Conference on Computational Semantics - Short Papers, pages 24–31, Gothenburg, Sweden. Association for Computational Linguistics. Rik van Noord, Antonio Toral, and Johan Bos. 2020a. Character-level representations improve DRS-based semantic parsing even in the age of BERT. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4587–4603, Online. Association for Computational Linguistics. Rik van Noord, Antonio Toral, and Johan Bos. 2020b. Character-level representations improve DRS-based semantic parsing even in the age of BERT. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4587–4603, Online. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. Chunliu Wang, Rik van Noord, Arianna Bisazza, and Johan Bos. 2021a. Evaluating text generation from discourse representation structures. 
In *Proceedings* of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pages 73–83, Online. Association for Computational Linguistics.

Chunliu Wang, Rik van Noord, Arianna Bisazza, and Johan Bos. 2021b. Input representations for parsing discourse representation structures: Comparing English with Chinese. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 767–775, Online. Association for Computational Linguistics.

Shuohang Wang, Yichong Xu, Yuwei Fang, Yang Liu, Siqi Sun, Ruochen Xu, Chenguang Zhu, and Michael Zeng. 2022. Training data is more valuable than you think: A simple and effective method by retrieving from training data. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Online. Association for Computational Linguistics.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.

Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics.

## A Appendix

## A.1 Human Evaluation

Human evaluation was performed on a preliminary version of the models to assess correlation with the automatic metrics we planned to use on all larger-scale experiments. We adopt ROSE (Wang et al., 2021a), a human evaluation method that covers three dimensions, *semantics*, *grammaticality* and *phenomenon*, to assess the performance of the models' outputs in the generation task. Since we are not investigating a particular linguistic phenomenon, we focus on the first two dimensions only: meaning preservation (whether the generated text has the same meaning as the gold text) and grammaticality (whether the generated text has no grammatical errors). We ask two experts with a doctorate degree in linguistics to rate the generated texts with {0: No, 1: Yes} on these two dimensions. To reduce the annotation load, we exclude all outputs that are identical to the corresponding references, and then randomly select 100 samples for each language.

Evaluation Results Table 8 shows that about 26% of the generated sentences in languages other than English correspond to the references, while for English this rate reaches around 50%, due to the larger dataset. While the training data for German and Italian also far exceeds that of Dutch, the evaluation results are very close (including automatic evaluation) for these three languages, suggesting that the models do benefit from cross-lingual knowledge transfer.
| Lang | Perfect | Semantics | Grammaticality | Overall | |--------|-----------|-------------|------------------|-----------| | EN | 49.3 | 87.0 | 90.0 | 83.0 | | DE | 26.4 | 54.0 | 85.0 | 45.0 | | IT | 27.2 | 51.0 | 70.0 | 38.0 | | NL | 26.3 | 51.0 | 74.0 | 45.0 | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? The section before References. ✗ A2. Did you discuss any potential risks of your work? There is no potential risk in data, methods and analyses. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 and section 6. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2 And Section 4. ✓ B1. Did you cite the creators of artifacts you used? Section 2 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 4 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 2 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 2 and section 4. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4 ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix A.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix A.1 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4.1: just a single run. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4.1 and 4.2. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 5.1 and section 5.3. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. 
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
zhao-etal-2023-multi
Multi-modal Sarcasm Generation: Dataset and Solution
https://aclanthology.org/2023.findings-acl.346
As an interesting and challenging task, sarcasm generation has attracted widespread attention. Although very recent studies have made promising progress, none of them considers generating a sarcastic description for a given image - as what people are doing on Twitter. In this paper, we present a Multi-modal Sarcasm Generation (MSG) task: Given an image with hashtags that provide the sarcastic target, MSG aims to generate sarcastic descriptions like humans. Different from textual sarcasm generation, MSG is more challenging as it is difficult to accurately capture the key information from images, hashtags, and OCR tokens and exploit multi-modal incongruity to generate sarcastic descriptions. To support the research on MSG, we develop MuSG, a new dataset with 5000 images and related Twitter text. We also propose a multi-modal Transformer-based method as a solution to this MSG task. The input features are embedded in the common space and passed through the multi-modal Transformer layers to generate the sarcastic descriptions by the auto-regressive paradigm. Both automatic and manual evaluations demonstrate the superiority of our method. The dataset and code will be available soon.
# Multi-Modal Sarcasm Generation: Dataset And Solution

Wenye Zhao1, Qingbao Huang1,2,3*, Dongsheng Xu1, Peizhi Zhao1 1School of Electrical Engineering, Guangxi University, Nanning, Guangxi, China 2Guangxi Key Laboratory of Multimedia Communications and Network Technology 3Key Laboratory of Big Data and Intelligent Robot (SCUT), Ministry of Education {2112391074, 2112391059, 2112391073}@st.gxu.edu.cn, [email protected] (*: Corresponding Author)

## Abstract

As an interesting and challenging task, sarcasm generation has attracted widespread attention. Although very recent studies have made promising progress, none of them considers generating a sarcastic description for a given image - as what people usually do on Twitter. In this paper, we present a Multi-modal Sarcasm Generation (MSG) task: Given an image with hashtags that provide the sarcastic target, MSG aims to generate sarcastic descriptions like humans. Compared with textual sarcasm generation, MSG is more challenging as it is difficult to accurately capture the key information from images, hashtags, and OCR tokens and exploit multi-modal incongruity to generate sarcastic descriptions. To support the research on MSG, we develop MuSG, a new dataset with 5000 images and related Twitter text. We also propose a multi-modal Transformer-based method as a solution to this MSG task. The input features are embedded in the common space and passed through the multi-modal Transformer layers to generate the sarcastic descriptions by the auto-regressive paradigm. Both automatic and manual evaluations demonstrate the superiority of our method. The dataset and code will be available at github.com/lukakupolida/MSG.

## 1 Introduction

Sarcasm is a type of emotional expression that indirectly expresses contempt, shows irritation, or demonstrates humor. As a typical task on sarcasm, Sarcasm Generation (SG) is proposed to generate a sarcastic message for a given literal input (Joshi et al., 2015), which can express a variety of communicative intent such as evoking humor and diminishing or enhancing critique (Burgers et al., 2012). It can impact many downstream applications such as personalized dialog systems (Cho et al., 2022) and news comment generation (Yang et al., 2019).

Figure 1: An example of Multi-modal Sarcasm Generation and the illustration of our proposed multi-modal Transformer-based architecture (MTMSG). We feed the features from the hashtag, visual object, and OCR token modalities into the multi-modal Transformer. Further, the sarcastic description is generated through iterative decoding with a pointer network and linear layers. (In the example, the visual objects include a smiling man, green trees, and a black car; the OCR tokens read "Did somebody say Sunday FUNDAY"; the hashtag is #bankholidayweekend; the ground truth is "when it 's # bankholidayweekend & you have to work an extra shift"; and the generated sarcastic description is "there is a Sunday FUNDAY when you are still working.")

Since SG was proposed, a surge of follow-up studies has been conducted (Peled and Reichart, 2017; Mishra et al., 2019; Chakrabarty et al., 2020; Oprea et al., 2021, 2022). Notably, the aforementioned SG studies have only been investigated in the textual field so far. However, social platforms nowadays usually carry multi-modal data in which visual information is integrated with the text, which makes the analysis of uni-modal data in isolation limited. Therefore, research on multi-modal sarcasm is crucial and imperative.
Studies on multi-modal sarcasm can be categorized into three types: Multi-modal Sarcasm Detection (Cai et al., 2019; Pan et al., 2020; Xu et al., 2020; Liang et al., 2021, 2022), Multi-modal Sarcasm Target Identification (Wang et al., 2022), and Multi-modal Sarcasm Explanation (Kumar et al., 2022; Desai et al., 2022). Unfortunately, until now there has been no research on Sarcasm Generation with multi-modal information. We hope that general artificial intelligence will learn creativity and associative skills, so learning to generate sarcastic descriptions for multi-modal inputs like humans deserves in-depth study. Therefore, we propose a Multi-modal Sarcasm Generation task (MSG), which aims to generate sarcastic descriptions on social platforms like humans for a given image with the help of hashtags (cf. Figure 1). Compared with textual sarcasm generation, MSG is more challenging. Firstly, human expressions on social platforms are usually stylized, with many colloquial expressions like abbreviations and interjections. Secondly, accurately capturing information from the key visual regions which may contribute to the sarcasm remains an open question. Finally, the incongruity between images and generated sarcastic descriptions reflects human creativity, imagination, and associative skills, which are hard for machines to learn and construct.

To support the studies on MSG, we develop MuSG, a new dataset consisting of 5000 images and related sarcastic descriptions. We manually collect samples with clear sarcastic targets from the Twitter API and two existing multi-modal sarcasm detection datasets (Schifanella et al., 2016; Cai et al., 2019). The descriptions come with hashtags (the tokens prefixed with '\#' that indicate the topic of the tweet) that point the way to sarcasm generation (cf. Figure 1 *\#bankholidayweekend*). The images contain OCR token information that can provide an associative context for sarcasm generation (cf. Figure 1 *Did somebody say Sunday FUNDAY*). With the well-formed dataset MuSG, researchers can easily conduct studies on MSG.

Consequently, as shown in Figure 1, we design a Multi-modal Transformer-based model (MTMSG) as a strong baseline for the proposed MSG task. Concretely, we first model spatial, semantic, and visual reasoning relations between multiple OCR tokens, hashtags, and visual features. Further, we map all the modality-specific features to the same reference and utilize the self-attention mechanism (Parikh et al., 2016) to capture the relationships between them. Finally, we combine the vocabulary with OCR tokens, which usually contain sarcastic intent, to capture the incongruity for sarcasm generation. With the ability to capture the intra- and inter-modality incongruity, our model is thus capable of effectively generating sarcastic descriptions.

Our contributions can be summarized as follows:

- To the best of our knowledge, we are the first to investigate the Multi-modal Sarcasm Generation task, which aims to generate sarcastic descriptions like humans for a given image with the help of hashtags.
- We develop MuSG, a new dataset consisting of 5000 image-text pairs for Multi-modal Sarcasm Generation. To our knowledge, it is the only dataset that can be applied to this task and evaluated automatically.
- We benchmark MuSG with a multi-modal Transformer-based model which can serve as a strong baseline.
- Empirical results show that our MTMSG outperforms all comparison models on all automatic evaluation metrics.
We also perform extensive human evaluations to measure the Creativity, Sarcasticness, Coherence, and Image-Text Relation of the generated descriptions.

## 2 Related Work

## 2.1 Textual Sarcasm Generation

Recently, sarcasm generation has attracted tremendous attention in the field of natural language processing. The studies can be roughly divided into two groups: Joshi et al. (2015) and Oprea et al. (2021, 2022) generate sarcasm with a response generator, while Peled and Reichart (2017), Mishra et al. (2019), and Chakrabarty et al. (2020) generate sarcasm with a paraphrase generator. However, these studies concentrate only on the generation of sarcasm in the textual domain; till now there has been no relevant effort on Multi-modal Sarcasm Generation. Enabling machines to think and further imagine like humans is a creative and challenging task, fulfilling our imagination of the future of general artificial intelligence. Accordingly, we strongly believe that our proposed MSG will lead to a deeper understanding and expression of individuals' intent on social media.

## 2.2 Research On Multi-Modal Sarcasm

With the rapid development of the mobile Internet, research on multi-modal sarcasm has come into focus. Schifanella et al. (2016) pioneer multi-modal sarcasm research and build a dataset by collecting 10000 sarcastic posts from Twitter, Instagram, and Tumblr. Cai et al. (2019) and Castro et al. (2019) extend the research and develop richer and better-formed datasets based on Twitter and conversational audiovisual utterances, respectively. Since then, a surge of studies has been conducted on multi-modal sarcasm, which can be roughly divided into three categories: Multi-modal Sarcasm Detection (Pan et al., 2020; Xu et al., 2020; Liang et al., 2021, 2022), which aims to detect whether the input sample is sarcastic; Multi-modal Sarcasm Target Identification (Wang et al., 2022), which aims to extract sarcasm targets from both texts and images; and Multi-modal Sarcasm Explanation (Kumar et al., 2022; Desai et al., 2022), which aims to generate a natural language sentence to explain the intended irony in sarcastic posts. However, there is no research considering Multi-modal Sarcasm Generation. Leveraging the multi-modal information to create sarcastic descriptions will increase the variety of responses for intelligent conversational agents and further serve downstream applications such as personalized dialog systems (Cho et al., 2022) and news comment generation (Yang et al., 2019). Therefore, it is crucial to study the Multi-modal Sarcasm Generation task.

[Figure 2: MTMSG architecture: hashtag embedding, visual object embedding (object 1 ... object M), and OCR token embedding (OCR token 1 ... OCR token N) streams feeding the multi-modal Transformer.]

## 3 Dataset And Metrics

In this section, we describe how the new dataset is constructed and how performance is evaluated.

## 3.1 Dataset

Since Twitter text contains a variety of sarcastic descriptions and intent detection on Twitter is a problem worth investigating, we focus on a Twitter-based dataset. We retrieve posts by querying hashtags to collect potential sarcastic samples. To create a well-formed and high-quality dataset **MuSG** for the MSG task, we collect publicly available Twitter posts using the Twitter API and two existing multi-modal sarcasm detection datasets (Schifanella et al., 2016; Cai et al., 2019) to obtain 5000 samples that have clear sarcasm targets.
For text data, we remove external links and mentions (@email); we remove strange and meaningless symbols such as the token *emoji-x* and other special tokens which are hard to understand (♠); we also remove text with more than 40 words, because overly long text is much more difficult for a machine to generate from the same inputs. For image data, we remove text-based images (images that consist of text only), images with number-based OCR tokens (OCR tokens in the images consist of numbers only), images with too many visual objects, and images with low resolution. The MuSG dataset is randomly split into 3536/723/741 (using a 5:1:1 split) as Train/Valid/Test in the experiments.

Further, we conduct a comprehensive statistical analysis of our collected MuSG dataset as follows: First, the content of the tweets covers six major categories: politics, sports, games, dining, life, and others, with politics and life accounting for the largest share, reaching half of the total (cf. Table 1). Second, we count the sources of sarcasm, of which 27.3% originate from both images and hashtags, 32.4% from hashtags only, and 40.3% from images only, so we can conclude that most of the sarcasm can be generated with the help of images and hashtags (cf. Table 2). Third, we count the size of the sarcasm targets, and we find that 25.8% of the sarcasm targets are small targets, making the MSG task more challenging (cf. Table 3). Finally, we also count the sentence style of the ground truth, in which 56.3% are declarative sentences, 40.9% are imperative sentences, and only 2.8% are interrogative sentences or other sentences, which means that sarcastic descriptions express a definite emotion in most cases (cf. Table 4).

| Politics | Sports | Games | Dining | Life | Others |
|---|---|---|---|---|---|
| 26.4% | 8.7% | 5.4% | 13.4% | 28.4% | 17.7% |

Table 1: The content categories of the MuSG dataset.

| Both in images and htags | Only in images | Only in htags |
|---|---|---|
| 27.3% | 40.3% | 32.4% |

Table 2: Statistics of the subject of the MuSG dataset.

| Small | Medium | Large |
|---|---|---|
| 1290 (25.8%) | 1865 (37.3%) | 1845 (36.9%) |

Table 3: The size of the subject of the MuSG dataset.

| Declarative | Imperative | Interrogative (Others) |
|---|---|---|
| 56.3% | 40.9% | 2.8% |

Table 4: The sentence types of the MuSG dataset.

## 3.2 Evaluation Metrics

We evaluate the generated descriptions both quantitatively (with standard automatic evaluation metrics) and qualitatively (with human evaluation metrics). For automatic evaluation metrics, we apply the Microsoft COCO caption evaluation, which includes **BLEU** (B1, B2, B3, and B4) (Papineni et al., 2002), **METEOR** (Denkowski and Lavie, 2014), **ROUGE** (R1, R2, and R_L) (Lin, 2004), **CIDEr-D** (Vedantam et al., 2015), and **SPICE** (Anderson et al., 2016).
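A minimal way to score generated descriptions with the COCO caption metrics listed above is sketched below. It assumes the third-party pycocoevalcap package (the Microsoft COCO caption evaluation code) is installed and uses a toy reference/hypothesis pair; the paper does not specify its exact evaluation script, and METEOR/SPICE (which require Java) are omitted here.

```python
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.rouge.rouge import Rouge
from pycocoevalcap.cider.cider import Cider

# Toy example: ids map to lists of reference / generated strings.
gts = {0: ["when it 's # bankholidayweekend & you have to work an extra shift"]}
res = {0: ["there is a sunday funday when you are still working"]}

bleu_scores, _ = Bleu(4).compute_score(gts, res)   # [B1, B2, B3, B4]
rouge_l, _ = Rouge().compute_score(gts, res)       # ROUGE-L only
cider_d, _ = Cider().compute_score(gts, res)       # CIDEr-D

print("BLEU-1..4:", [round(s, 4) for s in bleu_scores])
print("ROUGE-L:", round(rouge_l, 4), "CIDEr-D:", round(cider_d, 4))
```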
For human evaluation metrics, we propose a set of 4 criteria to evaluate the generated descriptions: **1) Creativity** ("How creative are the generated descriptions?"), to judge if the generated descriptions are novel and attractive; **2) Sarcasticness** ("How sarcastic are the generated descriptions?"), to judge the degree of sarcasm (including irony and humor); **3) Coherence** ("How coherent are the generated descriptions?"), to judge if the generated descriptions are fluent and easy to understand; **4) Image-Text Relation** ("How relevant are the images and the generated descriptions?"), to judge if the generated descriptions are highly correlated with the given images.

## 4 Methodology

In this section, we describe our proposed **MTMSG**, a Multi-modal Transformer-based model for MSG. The input to this task is an image and the hashtags of the corresponding Twitter text, while the output is the generated description, which is compared with the original Twitter text (ground truth). Moreover, as demonstrated by Pan et al. (2020), OCR text usually provides the context of sarcasm, which may contribute to sarcasm generation. Therefore, we leverage the information from visual objects, hashtags, and OCR tokens for MSG. The architecture of the proposed MTMSG is illustrated in Figure 2. Specifically, we first embed the three modalities in the same reference, and then feed them into a multi-modal Transformer to achieve intra- and inter-modality interactions. Finally, our model learns to generate sarcastic descriptions through iterative decoding with the help of a dynamic pointer network. In the decoding process, we leverage the previous output to predict the next generated word in an auto-regressive manner.

## 4.1 Uni-Modal Feature Embedding

## 4.1.1 Hashtag Embedding

Given a hashtag as a sequence of $K$ words, we utilize FastText (Bojanowski et al., 2017) as the feature extractor to get the 300-dimensional vector $x_k^{ft}$ $(k = 1, \cdots, K)$, which is a word embedding with sub-word information. Finally, we project the vector to a 768-dimensional semantic space to make sure the features from different modalities are embedded in the same reference. The final hashtag embedding $x_k^{htag}$ is obtained by:

$$x_{k}^{htag}=LN(W_{1}x_{k}^{ft}),\tag{1}$$

where $W_1$ is a learnable parameter and $LN$ denotes layer normalization.

## 4.1.2 Visual Object Embedding

Given an image, we apply a pretrained Faster R-CNN (Ren et al., 2015) as the detector to obtain the appearance feature $x_m^{fr}$ of the $m$-th visual object. Further, to leverage the spatial information of each object, we use a 4-dimensional location feature $x_m^{b} = [x_{min}/W, y_{min}/H, x_{max}/W, y_{max}/H]$. Then we can obtain a list of 768-dimensional vectors $x_m^{obj}$ as follows:

$$x_{m}^{obj}=LN(W_{2}x_{m}^{fr})+LN(W_{3}x_{m}^{b}),\tag{2}$$

where $W_2$ and $W_3$ are learnable parameters, and $LN$ denotes layer normalization.
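Before the OCR token projection in Eq. (3), which follows in the next subsection, here is a minimal PyTorch sketch of the projections in Eqs. (1) and (2). The dimensions follow the text (300-d FastText, 2048-d Faster R-CNN features, 4-d box coordinates, 768-d common space), while the module names and the sharing of LayerNorm parameters are our assumptions rather than details from the released code.

```python
import torch
import torch.nn as nn

class UniModalProjections(nn.Module):
    """Sketch of Eqs. (1) and (2): project FastText hashtag features and
    Faster R-CNN object features (plus 4-d box coordinates) into the common
    768-dimensional space; the OCR token projection in Eq. (3) follows the
    same pattern with additional FastText/PHOC terms."""

    def __init__(self, d_model=768, d_fasttext=300, d_frcnn=2048):
        super().__init__()
        self.w1 = nn.Linear(d_fasttext, d_model)   # Eq. (1)
        self.w2 = nn.Linear(d_frcnn, d_model)      # Eq. (2), appearance
        self.w3 = nn.Linear(4, d_model)            # Eq. (2), location
        self.ln_feat = nn.LayerNorm(d_model)
        self.ln_box = nn.LayerNorm(d_model)

    def hashtag(self, x_ft):                       # x_ft: (K, 300)
        return self.ln_feat(self.w1(x_ft))

    def visual_object(self, x_fr, x_b):            # (M, 2048), (M, 4)
        return self.ln_feat(self.w2(x_fr)) + self.ln_box(self.w3(x_b))

proj = UniModalProjections()
print(proj.hashtag(torch.randn(5, 300)).shape)                                # torch.Size([5, 768])
print(proj.visual_object(torch.randn(100, 2048), torch.rand(100, 4)).shape)   # torch.Size([100, 768])
```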
## 4.1.3 OCR Token Embedding

For OCR token embedding, following the M4C-Captioner (Sidorov et al., 2020), to get a rich representation of OCR tokens we leverage FastText (Bojanowski et al., 2017), Faster R-CNN (Ren et al., 2015), and PHOC (Almazán et al., 2014) as the feature extractors to get the sub-word feature $x^{ft}$, the appearance feature $x^{fr}$, and the character-level feature $x^{p}$, respectively. Given a set of $N$ OCR tokens, the location feature of the $n$-th token is represented as $x_n^{b} = [x_{min}/W, y_{min}/H, x_{max}/W, y_{max}/H]$. Then the final OCR token embedding $x_n^{ocr}$ is projected to a 768-dimensional vector as:

$$x_{n}^{ocr}=LN(W_{4}x_{n}^{ft}+W_{5}x_{n}^{fr}+W_{6}x_{n}^{p})+LN(W_{7}x_{n}^{b}),\tag{3}$$

where $W_4$, $W_5$, $W_6$, and $W_7$ are learnable parameters, and $LN$ denotes layer normalization.

## 4.2 Multi-Modal Transformer

After extracting the uni-modal feature embeddings from the three modalities, we concatenate the features and feed them into the multi-modal Transformer. In the multi-modal Transformer, the features fully interact to exploit intra- and inter-modality incongruity. Besides, the previous decoding output $x_{t-1}^{dec}$ is also embedded and fed into the Transformer. Finally, the multi-modal Transformer outputs the feature vectors $[z^{htag}, z^{obj}, z^{ocr}, z_{t-1}^{dec}] = MMT([x^{htag}, x^{obj}, x^{ocr}, x_{t-1}^{dec}])$, where $MMT$ denotes the multi-modal Transformer.

## 4.3 Sentence Decoder

The sentence decoder takes the feature embedding output of the multi-modal Transformer as input, predicts the score for each word, and selects the predicted word at each time step. We generate sarcastic descriptions through the auto-regressive paradigm. Remarkably, the OCR tokens detected in images usually contain intent information for capturing the multi-modal incongruity, but they are usually not included in the common word vocabulary, so it is inappropriate to make predictions based only on the fixed vocabulary. Therefore, we adopt different classifiers for the common vocabulary and the OCR tokens as follows:

$$y^{t}=argmax(f(z_{t-1}^{dec}),f_{DPN}(z_{t-1}^{dec},z^{ocr})),\tag{4}$$

where $f$ indicates the linear classifier for the common vocabulary, and $f_{DPN}$ indicates the dynamic pointer network (Vinyals et al., 2015). The captioning loss is computed by:

$$\mathcal{L}=-\sum_{t=0}^{T}\log P(y_{t}\mid y_{0:t-1},z^{htag},z^{obj},z^{ocr}),\tag{5}$$

where $y_{0:t-1}$ denotes the previously generated sequence.
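To make the decoding step in Eq. (4) concrete, the sketch below combines the linear classifier over the common vocabulary with a dot-product dynamic pointer over the OCR token representations and takes the argmax over the concatenated scores. The pointer parameterization, module names, and tensor shapes are our assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class DecoderScorer(nn.Module):
    """Sketch of Eq. (4): score the fixed vocabulary with a linear classifier
    and the OCR tokens with a dynamic pointer network, then pick the argmax
    over the concatenated scores (during training the raw scores would feed
    a cross-entropy loss as in Eq. (5))."""

    def __init__(self, d_model=768, vocab_size=11554):
        super().__init__()
        self.vocab_head = nn.Linear(d_model, vocab_size)   # f(.)
        self.query = nn.Linear(d_model, d_model)           # pointer query
        self.key = nn.Linear(d_model, d_model)             # pointer key

    def forward(self, z_dec, z_ocr):
        # z_dec: (batch, d_model) decoder state at the current step
        # z_ocr: (batch, num_ocr, d_model) OCR token representations
        vocab_scores = self.vocab_head(z_dec)                                     # (B, V)
        pointer_scores = torch.einsum("bd,bnd->bn", self.query(z_dec), self.key(z_ocr))  # (B, N)
        scores = torch.cat([vocab_scores, pointer_scores], dim=-1)                # (B, V + N)
        return scores.argmax(dim=-1)  # index < V -> vocabulary word, else OCR token

scorer = DecoderScorer()
pred = scorer(torch.randn(2, 768), torch.randn(2, 30, 768))
print(pred.shape)  # torch.Size([2])
```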
## 5 Experiments

## 5.1 Comparison Models

Due to the multi-modal nature of the input corpus, we compare our proposed MTMSG model with three categories of strong models adapted for the MSG task as follows: 1) Image-modality models: These models only leverage the visual features to generate sarcastic descriptions, including ViT (Dosovitskiy et al., 2020), a powerful visual Transformer (with BART as the decoder); and **BLIP** (Li et al., 2022), a pre-trained image captioning model fine-tuned on this task. 2) Text-modality models: These models only leverage the hashtags and OCR tokens to generate sarcastic descriptions, including **Transformer** (Vaswani et al., 2017) and **Chandler** (Oprea et al., 2021), a very recent effort that generates a sarcastic response to a given textual utterance. 3) Multi-modality models: These models utilize the information from images, hashtags, and OCR tokens for MSG, including **MFFG** (Liu et al., 2020), a multi-stage fusion mechanism with a forget fusion gate (both the RNN and Transformer variants of MFFG); and **MMT** (Tang et al., 2022), a multi-modal Transformer for multi-modal learning.

## 5.2 Experiment Settings

For visual objects, we extract 100 object appearance features with dimension 2048. Besides, we apply the Google OCR API to detect sufficient OCR tokens with bounding boxes. The number of OCR tokens for each image is limited to at most 30. We set the number of multi-modal Transformer layers to 4 and the number of self-attention heads to 12. Following BERT-base (Devlin et al., 2019), we adopt default settings for the other parameters. For MSG, we tokenize the text on whitespace and filter the special symbols that the model cannot recognize. The fixed common vocabulary has 11554 words. Furthermore, we train the model for 160 epochs on a single 3090Ti GPU, and the batch size is set to 64. We adopt the Adam optimizer; the initial learning rate is 1e-4 and is decayed by a factor of 0.1 every 50 epochs. We monitor the CIDEr-D metric to choose the best model and evaluate it on the test set. Finally, we average the experimental results of our MTMSG over ten runs to ensure the statistical stability of the results.

| Modality | Method | B1 | B2 | B3 | B4 | R1 | R2 | R_L | METEOR | CIDEr-D |
|---|---|---|---|---|---|---|---|---|---|---|
| image | ViT (ICLR 2020) | 10.57 | 3.90 | 1.38 | 0.60 | 16.24 | 4.31 | 14.03 | 8.86 | 18.5 |
| image | BLIP (ICML 2022) | 12.07 | 5.01 | 1.69 | 0.84 | 15.17 | 4.55 | 14.00 | 10.63 | 22.8 |
| text | Transformer (NeurIPS 2017) | 11.33 | 4.68 | 1.57 | 0.62 | 17.67 | 5.72 | 15.79 | 9.63 | 24.8 |
| text | Chandler (EMNLP 2021) | 17.19 | 7.82 | 4.62 | 2.91 | 18.72 | 7.01 | 16.41 | 10.30 | 28.8 |
| image+text | MFFG-RNN (EMNLP 2020) | 14.73 | 6.49 | 2.58 | 1.33 | 15.81 | 5.92 | 14.88 | 11.23 | 28.4 |
| image+text | MFFG-Transf (EMNLP 2020) | 15.32 | 6.71 | 2.35 | 1.22 | 16.61 | 5.71 | 15.40 | 10.98 | 27.6 |
| image+text | MMT (IJCAI 2022) | 18.62 | 7.64 | 4.04 | 2.64 | 22.02 | 8.00 | 19.77 | 12.95 | 35.2 |
| image+text | MTMSG (ours) | 21.37* | 13.32* | 8.67* | 6.37* | 26.44* | 11.38* | 24.12* | 16.05* | 48.6* |

Table 5: Automatic evaluation results on the MuSG test set.

## 6 Experimental Results And Analysis

## 6.1 Main Results

Table 5 presents the comparative generation performances on the dataset. The experimental results illustrate that our model achieves the best performance across all of the competing strong baselines adopted for this task. Specifically, our model obtains BLEU scores of 21.37 (+2.75), 13.32 (+5.68), 8.67 (+4.63), and 6.37 (+3.73) on B1, B2, B3, and B4, respectively. Similarly, we gain 4.42, 3.38, and 4.35 Rouge points, reaching 26.44, 11.38, and 24.12 on R1, R2, and R_L, respectively. Our model also achieves improved performance on METEOR, 16.05 (+3.1), and CIDEr-D, 48.6 (+13.4). Moreover, we can draw the following conclusions: 1) Notably, our proposed MTMSG outperforms the existing strong baselines on all of the evaluation metrics (significance test, all p-values < 0.05), which demonstrates the effectiveness of our proposed multi-modal Transformer-based model for MSG. 2) Models based on the text modality perform better than models based on the image modality, which indicates that more sarcastic information lies in the hashtags and OCR tokens. 3) Multi-modal models achieve much better performance than the uni-modal baselines overall, which implies that leveraging the information from images, hashtags, and OCR tokens is efficacious for MSG.

## 6.2 Human Evaluation

We also perform a human evaluation to assess the quality of MSG. We randomly select 200 samples from the test set.

[Figure 3: Instructions released to the evaluators: for each sample, score Creativity, Sarcasticness, Coherence, and Image-Text Relation; the four metrics are treated as independent criteria, one judgement should not influence another, and each criterion is rated from 1 (not at all) to 5 (very).]
## 6.2 Human Evaluation

We also perform a human evaluation to assess the quality of the MSG. We randomly select 200 samples from the test set. Given the provided information (image, hashtags, and OCR tokens) together with the descriptions generated by our model and four strong baselines (BLIP, Chandler, MFFG, MMT), each criterion is rated from 1 (*not at all*) to 5 (*very*). We employ 5 evaluators to independently score the generated sarcastic descriptions from the five methods and the ground truth. Figure 3 shows the instructions released to the evaluators.

[Figure 3: Instructions on evaluating the generated multi-modal sarcasm descriptions. Each sample is scored on four independent criteria (Creativity, Sarcasticness, Coherence, and Image-Text Relation), one judgement should not influence another, and each criterion is rated from 1 (*not at all*) to 5 (*very*).]

The comparative results are shown in Table 6. To measure inter-annotator agreement, we calculated Fleiss' kappa, and all the results show fair agreement (0.2 ≤ κ ≤ 0.4).

| MODEL | Cre. | Sac. | Coh. | I-T Rel. |
|---|---|---|---|---|
| GroundTruth | 3.71 | 3.85 | 4.12 | 4.06 |
| BLIP | 1.12 | 1.26 | 4.32 | 4.48 |
| Chandler | 2.11* | 2.04* | 3.55 | 2.76 |
| MFFG-Transf | 1.88 | 1.94 | 3.21 | 2.67 |
| MMT | 1.93 | 1.87 | 3.37 | 2.91 |
| MTMSG | 2.57 | 2.78 | 3.62* | 3.02* |

Table 6: Human evaluation results on Creativity (Cre.), Sarcasticness (Sac.), Coherence (Coh.), and Image-Text Relation (I-T Rel.).

From these results, we can draw the following conclusions: 1) BLIP is a strong image captioning model, so it performs well on the metrics of Coherence and Image-Text Relation. Notably, it even exceeds the ground truth on these metrics, which further suggests that the Twitter text posted by humans is creative and full of imagination. 2) For Creativity and Sarcasticness, our proposed MTMSG performs better than all the other baselines, which demonstrates that our model meets the basic target of the MSG task. 3) Moreover, all the baselines adopted for this task show poor performance on the metrics of Creativity and Image-Text Relation, which illustrates that more studies on improving the quality of the generated descriptions are needed. 4) Overall, the results confirm that our model is superior to all the other baselines. However, a significant performance gap still remains between humans and machines, which demonstrates that our proposed MSG task is challenging and worth further in-depth research.
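The inter-annotator agreement above is summarized with Fleiss' kappa; a minimal sketch of how such agreement can be computed with statsmodels is shown below. The rating matrix is made up purely for illustration and is not the study's actual annotations.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# ratings[i, j] = score (1-5) given by evaluator j to sample i, for one criterion.
# These numbers are illustrative only.
ratings = np.array([
    [3, 3, 2, 3, 4],
    [1, 2, 1, 1, 2],
    [5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2],
])

# Convert the (n_samples x n_raters) matrix into per-category counts,
# then compute Fleiss' kappa; 0.2 <= kappa <= 0.4 is usually read as fair agreement.
table, _ = aggregate_raters(ratings)
print(round(fleiss_kappa(table, method="fleiss"), 3))
```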
## 6.3 Ablation Study

| MODEL | B4 | R_L | METEOR | CIDEr-D |
|---|---|---|---|---|
| MTMSG | 6.37 | 24.12 | 16.05 | 48.6 |
| w/o visual | 6.02 | 23.31 | 15.57 | 45.2 |
| w/o OCR | 5.24 | 22.08 | 14.51 | 42.3 |
| w/o htag | 4.95 | 22.14 | 14.22 | 40.9 |
| w/o htag, visual | 4.32 | 21.33 | 13.23 | 34.1 |
| w/o OCR, visual | 4.13 | 20.95 | 12.91 | 32.7 |
| w/o htag, OCR | 1.73 | 15.41 | 11.88 | 24.2 |
| w/o iterative | 3.87 | 18.19 | 12.15 | 33.6 |
| w/o pointer | 2.51 | 17.54 | 11.88 | 28.1 |

The table above reports the experimental results of the ablation study. From the perspective of the input corpus, we can draw the following conclusions: 1) The removal of hashtags (w/o htag) significantly degrades the performance, which verifies the importance of hashtags in guiding the direction of sarcasm generation, since they indicate the topic of the tweet; 2) Since the context of the sarcastic information resides in the OCR tokens, it is hard to understand the sarcastic intent without them. As a result, removing the OCR tokens (w/o OCR) also leads to a considerable performance decline; 3) Besides, from the results of w/o visual, we can conclude that the visual features are also beneficial to MSG; 4) Moreover, from the experiments that utilize only one modality at a time to observe how much modality-specific information contributes to the generation, we can conclude that textual information plays the major role in sarcasm generation.

More remarkably, concentrating on our model components, we can draw the following conclusions: 1) The dynamic pointer network predicts a copying score between the decoding output and each OCR token. Since the OCR tokens and the common word vocabulary are usually complementary, the removal of the pointer network (w/o pointer) leads to an apparent performance degradation; 2) Finally, without the iterative decoding method (w/o iterative), we only decode for one step. The sharp performance degradation indicates that the iterative decoding strategy improves the quality of the generated sarcastic descriptions.

## 6.4 Case Study

We present some examples to analyze the performance of our model. Specifically, in Figure 4 (a), we find that without hashtags and OCR tokens, our model can only describe a man in a black car. The hashtag and OCR token provide the timing cue *Sunday* and the word *FUNDAY*, which may guide the sarcastic intention; this demonstrates that the information from hashtags and OCR tokens is necessary for sarcasm generation. Yet MMT simply concatenates the visual information with the timing cue *Sunday*, and the generated description does not appear sarcastic. Sometimes the hashtag is not very useful: *RTX Austin* in Figure 4 (b) only tells the place of the scene. In that case, we can still understand the main intent of the sarcasm from the OCR tokens (*behind schedule*); associated with the image, we can understand that the flight is behind schedule, which helps generate a sarcastic description. MMT, in contrast, still generates a literal description of the image (*plane at the airport*). Similarly, in Figure 4 (c), the hashtag tells us that the sarcastic target is about speed, and the OCR tokens tell us that the intent is to express that the speed is slow; in Figure 4 (d), the hashtag tells us that the sarcastic target is Monday morning, and the OCR tokens tell us that the intent is to express the wish for an optional Monday. Once the intent is understood, we can easily obtain better sarcastic descriptions instead of literal descriptions like those of MMT. All the examples demonstrate that, with the help of the images, hashtags, and OCR tokens, our model is capable of generating sarcastic descriptions like humans to some degree.

[Figure 4: Case study examples (a)-(d), showing for each image the hashtags (e.g., #bankholidayweekend, #rtxaustin, #speed, #mondaymorning), the detected OCR tokens, the ground-truth description, and the descriptions generated by MMT and our MTMSG.]

## 7 Conclusion

In this paper, we introduce a novel task of Multi-modal Sarcasm Generation (MSG), aiming to generate sarcastic descriptions like humans. To address the task, we introduce a new dataset, MuSG, containing 5000 images with corresponding sarcastic descriptions. Further, we propose a strong baseline, MTMSG, to benchmark the MuSG dataset. Automatic evaluation metrics demonstrate that our proposed MTMSG outperforms various comparison baselines. Moreover, the human evaluation shows that our proposed MSG task is challenging and worth further in-depth research. We consider that MSG opens a new avenue in the domains of sarcasm understanding and generation. In the future, we will explore the detection of key information from the images and the understanding of the intent from the OCR tokens.

## Limitations

To better understand the limitations of our proposed MTMSG, we also perform a qualitative error analysis of the incorrectly generated samples. We randomly select 100 incorrectly generated descriptions and find that our model mainly fails because it misses the necessary intent information from the images and OCR tokens. The statistics reveal that 37% of the incorrectly generated descriptions arise because the main part of the sarcasm lies in the image (e.g., Figure 5 (a)), while the other 63% of the error cases are attributed to the failure of our model to capture the intent information directly from the OCR tokens (e.g., Figure 5 (b)).

[Figure 5: Error analysis examples (a) and (b), showing the hashtags (#family, #lunch), the detected OCR tokens ("family photo", "bans guns"), the ground-truth descriptions, and our model's outputs.]

Specifically, in Figure 5 (a), generating a better description requires capturing the fine-grained visual attribute *happy* from the image; in Figure 5 (b), we need to understand from the OCR tokens that a sign banning guns is meant to make us feel safe when having lunch in the restaurant. Therefore, to address the above issues in the future, we will further explore fine-grained key information in the images to help guide the MSG. Besides, we will explore a language interpreter to further understand the key information contained in the OCR tokens.

## Acknowledgements

We thank anonymous reviewers for their valuable comments and thoughtful suggestions.
This work was supported by the National Natural Science Foundation of China (62276072, 62076100,and 62261003), the Guangxi Natural Science Foundation (No. 2022GXNSFAA035627), Guangxi Scientific and Technological Bases and Talents Special Projects (Application No. 2022AC21300, 2022AC21254), the Open Research Fund of Guangxi Key Laboratory of Multimedia Communications and Network Technology, and the Open Research Fund of Key Laboratory of Big Data and Intelligent Robot (SCUT), Ministry of Education. ## References Jon Almazán, Albert Gordo, Alicia Fornés, and Ernest Valveny. 2014. Word spotting and recognition with embedded attributes. IEEE Trans. Pattern Anal. Mach. Intell., 36(12):2552–2566. Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. SPICE: semantic propositional image caption evaluation. In Computer Vision - ECCV 2016 - 14th European Conference, volume 9909 of *Lecture Notes in Computer Science*, pages 382–398. Springer. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomás Mikolov. 2017. Enriching word vectors with subword information. *Trans. Assoc. Comput. Linguistics*, 5:135–146. Christian Burgers, Margot Van Mulken, and Peter Jan Schellens. 2012. Verbal irony: Differences in usage across written genres. Journal of Language and Social Psychology, 31(3):290–310. Yitao Cai, Huiyu Cai, and Xiaojun Wan. 2019. Multimodal sarcasm detection in twitter with hierarchical fusion model. In *Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019*, pages 2506–2515. Santiago Castro, Devamanyu Hazarika, Verónica PérezRosas, Roger Zimmermann, Rada Mihalcea, and Soujanya Poria. 2019. Towards multimodal sarcasm detection (an _obviously_ perfect paper). In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, pages 4619– 4629. Tuhin Chakrabarty, Debanjan Ghosh, Smaranda Muresan, and Nanyun Peng. 2020. Rˆ3: Reverse, retrieve, and rank for sarcasm generation with commonsense knowledge. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,* ACL 2020, pages 7976–7986. Itsugun Cho, Dongyang Wang, Ryota Takahashi, and Hiroaki Saito. 2022. A personalized dialogue generator with implicit user persona detection. In *Proceedings* of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, pages 367–377. International Committee on Computational Linguistics. Michael J. Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In *Proceedings of the* Ninth Workshop on Statistical Machine Translation, WMT@ACL 2014, pages 376–380. The Association for Computer Linguistics. Poorav Desai, Tanmoy Chakraborty, and Md. Shad Akhtar. 2022. Nice perfume. how long did you marinate in it? multimodal sarcasm explanation. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, pages 10563–10571. AAAI Press. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, pages 4171–4186. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. 
An image is worth 16x16 words: Transformers for image recognition at scale. 9th International Conference on Learning Representations, ICLR 2021. Aditya Joshi, Anoop Kunchukuttan, Pushpak Bhattacharyya, and Mark James Carman. 2015. Sarcasmbot: An open-source sarcasm-generation module for chatbots. In *WISDOM Workshop at KDD*. Shivani Kumar, Atharva Kulkarni, Md. Shad Akhtar, and Tanmoy Chakraborty. 2022. When did you become so smart, oh wise one?! sarcasm explanation in multi-modal multi-party dialogues. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, pages 5956–5968. Association for Computational Linguistics. Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H. Hoi. 2022. BLIP: bootstrapping language-image pretraining for unified vision-language understanding and generation. In *International Conference on Machine Learning, ICML 2022*, volume 162 of *Proceedings of Machine Learning Research*, pages 12888– 12900. PMLR. Bin Liang, Chenwei Lou, Xiang Li, Lin Gui, Min Yang, and Ruifeng Xu. 2021. Multi-modal sarcasm detection with interactive in-modal and cross-modal graphs. In *ACM MM '21: ACM Multimedia Conference*, pages 4707–4715. Bin Liang, Chenwei Lou, Xiang Li, Min Yang, Lin Gui, Yulan He, Wenjie Pei, and Ruifeng Xu. 2022. Multimodal sarcasm detection via cross-modal graph convolutional network. In *Proceedings of the 60th Annual Meeting of the Association for Computational* Linguistics, ACL 2022, pages 1767–1777. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In *Text summarization of* ACL 2004, pages 74–81. Nayu Liu, Xian Sun, Hongfeng Yu, Wenkai Zhang, and Guangluan Xu. 2020. Multistage fusion with forget gate for multimodal summarization in open-domain videos. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, pages 1834–1845. Abhijit Mishra, Tarun Tater, and Karthik Sankaranarayanan. 2019. A modular architecture for unsupervised sarcasm generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, pages 6143–6153. Silviu Oprea, Steven R. Wilson, and Walid Magdy. 2021. Chandler: An explainable sarcastic response generator. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2021, pages 339–349. Silviu Vlad Oprea, Steven R. Wilson, and Walid Magdy. 2022. Should a chatbot be sarcastic? understanding user preferences towards sarcasm generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, pages 7686–7700. Association for Computational Linguistics. Hongliang Pan, Zheng Lin, Peng Fu, Yatao Qi, and Weiping Wang. 2020. Modeling intra and intermodality incongruity for multi-modal sarcasm detection. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 1383–1392. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th annual meeting of the Association for Computational Linguistics, ACL 2002, pages 311–318. Ankur P. Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. 
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, pages 2249–2255. Lotem Peled and Roi Reichart. 2017. Sarcasm SIGN: interpreting sarcasm with sentiment based monolingual machine translation. In *Proceedings of the 55th* Annual Meeting of the Association for Computational Linguistics, ACL 2017, pages 1690–1700. Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2015. Faster R-CNN: towards real-time object detection with region proposal networks. In *Advances in Neural Information Processing Systems 28:* Annual Conference on Neural Information Processing Systems 2015, NeurIps 2015, pages 91–99. Rossano Schifanella, Paloma De Juan, Joel Tetreault, and Liangliang Cao. 2016. Detecting sarcasm in multimodal social platforms. In *Proceedings of the* 2016 ACM Conference on Multimedia Conference, ACM MM 2016, pages 1136–1145. Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, and Amanpreet Singh. 2020. Textcaps: A dataset for image captioning with reading comprehension. In Computer Vision - ECCV 2020 - 16th European Conference, volume 12347, pages 742–758. Jiajia Tang, Kang Li, Ming Hou, Xuanyu Jin, Wanzeng Kong, Yu Ding, and Qibin Zhao. 2022. MMT: multiway multi-modal transformer for multimodal learning. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, pages 3458–3465. ijcai.org. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems 30: Annual Conference on Neural* Information Processing Systems 2017, NeurIps 2017, pages 5998–6008. Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, pages 4566–4575. IEEE Computer Society. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. *Advances in neural information processing systems*, 28. Jiquan Wang, Lin Sun, Yi Liu, Meizhi Shao, and Zengwei Zheng. 2022. Multimodal sarcasm target identification in tweets. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, pages 8164–8175. Association for Computational Linguistics. Nan Xu, Zhixiong Zeng, and Wenji Mao. 2020. Reasoning with multimodal sarcastic tweets via modeling cross-modality contrast and semantic association. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, pages 3777–3786. Ze Yang, Can Xu, Wei Wu, and Zhoujun Li. 2019. Read, attend and comment: A deep architecture for automatic news comment generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 5076–5088. Association for Computational Linguistics. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Yes, in the Section "Error Analysis", we describe the limitation of our work. ✗ A2. Did you discuss any potential risks of your work? No, our work does not involve ethical issues. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Yes, from the section of the Abstract and Introduction, reviewers can easily get the main idea of our work. 
✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Yes, from the section of Reference, Related Work, "Dataset and Metrics", and Abstract. ✓ B1. Did you cite the creators of artifacts you used? Yes, from the section of Reference. ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No, we only utilized artifacts that are publically avaliable. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Yes, from the section of Related Work. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Yes, from the section of "Dataset and Metrics". ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Yes, from the section of Abstract and "Dataset and Metrics". ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Yes, from the section of"Dataset and Metrics". ## C ✓ **Did You Run Computational Experiments?** Yes, from the section on Experiments and Experimental Results. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Yes, from the section of Experiments. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Yes, from the section of Experiments. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Yes, from the section of Experiments and Experimental Results. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Yes, from the section of "Dataset and Metrics" and Experiments. ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Yes, From The Section Of Experimental Results "Human Evaluation" Part. ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Yes, from the section of Experimental Results "Human Evaluation" part. ✓ D2. 
Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Yes, from the section of Experimental Results "Human Evaluation" part. ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Yes, from the section of Experimental Results "Human Evaluation" part. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No, there is no ethics review board being involved. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No, we only ask some evaluators for human evaluation.
shi-etal-2023-rethinking
Rethinking Semi-supervised Learning with Language Models
https://aclanthology.org/2023.findings-acl.347
Semi-supervised learning (SSL) is a popular setting aiming to effectively utilize unlabelled data to improve model performance in downstream natural language processing (NLP) tasks. Currently, there are two popular approaches to make use of the unlabelled data: Self-training (ST) and Task-adaptive pre-training (TAPT). ST uses a teacher model to assign pseudo-labels to the unlabelled data, while TAPT continues pre-training on the unlabelled data before fine-tuning. To the best of our knowledge, the effectiveness of TAPT in SSL tasks has not been systematically studied, and no previous work has directly compared TAPT and ST in terms of their ability to utilize the pool of unlabelled data. In this paper, we provide an extensive empirical study comparing five state-of-the-art ST approaches and TAPT across various NLP tasks and data sizes, including in- and out-of domain settings. Surprisingly, we find that TAPT is a strong and more robust SSL learner, even when using just a few hundred unlabelled samples or in the presence of domain shifts, compared to more sophisticated ST approaches, and tends to bring greater improvements in SSL than in fully-supervised settings. Our further analysis demonstrates the risks of using ST approaches when the size of labelled or unlabelled data is small or when domain shifts exist, and highlights TAPT as a potential solution.
# Rethinking Semi-Supervised Learning With Language Models Zhengxiang Shi1 ∗ Francesco Tonolini2 Nikolaos Aletras2,3 **Emine Yilmaz**1,2 Gabriella Kazai2 **Yunlong Jiao**2 1 University College London, London, United Kingdom 2 Amazon, London, United Kingdom 3 University of Sheffield, Sheffield, United Kingdom [email protected] {tonolini,eminey,aletras,gkazai,jyunlong}@amazon.com ## Abstract Semi-supervised learning (SSL) is a popular setting aiming to effectively utilize unlabelled data to improve model performance in downstream natural language processing (NLP) tasks. Currently, there are two popular approaches to make use of unlabelled data: *Self-training* (ST) and *Task-adaptive pre-training* (TAPT). ST uses a teacher model to assign pseudo-labels to the unlabelled data, while TAPT continues pre-training on the unlabelled data before finetuning. To the best of our knowledge, the effectiveness of TAPT in SSL tasks has not been systematically studied, and no previous work has directly compared TAPT and ST in terms of their ability to utilize the pool of unlabelled data. In this paper, we provide an extensive empirical study comparing five state-of-the-art ST approaches and TAPT across various NLP tasks and data sizes, including in- and out-of-domain settings. Surprisingly, we find that TAPT is a strong and more robust SSL learner, even when using just a few hundred unlabelled samples or in the presence of domain shifts, compared to more sophisticated ST approaches, and tends to bring greater improvements in SSL than in fully-supervised settings. Our further analysis demonstrates the risks of using ST approaches when the size of labelled or unlabelled data is small or when domain shifts exist. We offer a fresh perspective for future SSL research, suggesting the use of unsupervised pre-training objectives over dependency on pseudo labels.1 ## 1 Introduction Pre-training (PT) language models (LMs) (Devlin et al., 2019; Liu et al., 2019; Radford et al., 2019) over large amounts of text data (e.g. with masked language modelling) and then fine-tuning on task-specific labelled data offer large performance gains across NLP tasks. *Semi-supervised* learning (SSL) (Grandvalet and Bengio, 2004; Chapelle et al., 2009; Kipf and Welling, 2017) is a powerful and effective approach to utilize unlabelled data. A typical SSL setting assumes access to a (relatively small) labelled training set and an (often large) unlabelled set. The goal of SSL is to make effective use of the unlabelled data to improve model (i.e. LMs) performance. In NLP, *Self-training* (ST) approaches have been proposed to produce pseudo labels for unlabelled examples to train the model (e.g. in Yarowsky, 1995; McClosky et al., 2006). With the advent of neural networks, ST approaches typically focus on using student-teacher models to assign pseudolabels to the unlabelled data (e.g. in Artetxe et al., 2018; Cai and Lapata, 2019; Dong and de Melo, 2019; Xie et al., 2020a; Gera et al., 2022). Apart from the sophisticated ST approaches, Gururangan et al. (2020) proposed *task adaptive pre-training* (TAPT), which is a straightforward yet effective method for utilising unlabelled examples. This method involves continuing pre-training the LM on the task-specific data without using labels, before proceeding with fully-supervised fine-tuning. 
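As a concrete illustration of this two-stage recipe, the sketch below continues masked language modelling on task text with Hugging Face Transformers before standard fine-tuning; the file path, hyperparameters, and variable names are illustrative assumptions, not the configuration used in this paper.

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoModelForSequenceClassification,
                          AutoTokenizer, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("roberta-base")

# Task text only (labels are ignored at this stage); the corpus path is a placeholder.
corpus = load_dataset("text", data_files={"train": "task_corpus.txt"})["train"]
corpus = corpus.map(lambda batch: tok(batch["text"], truncation=True, max_length=256),
                    batched=True, remove_columns=["text"])

# Stage 1: task-adaptive pre-training with the masked language modelling objective.
mlm_model = AutoModelForMaskedLM.from_pretrained("roberta-base")
collator = DataCollatorForLanguageModeling(tokenizer=tok, mlm_probability=0.15)
trainer = Trainer(model=mlm_model,
                  args=TrainingArguments(output_dir="tapt_ckpt", num_train_epochs=3),
                  train_dataset=corpus, data_collator=collator)
trainer.train()
trainer.save_model("tapt_ckpt")  # save the adapted encoder weights

# Stage 2: ordinary supervised fine-tuning starts from the adapted checkpoint;
# the classification head on top of the [CLS] token is newly initialised.
clf = AutoModelForSequenceClassification.from_pretrained("tapt_ckpt", num_labels=2)
```

The only change relative to standard fine-tuning is the extra MLM stage on task text, which is what distinguishes TAPT from a purely supervised baseline.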
TAPT and ST are both motivated by the need for effectively leveraging unlabelled examples, raising the questions of how TAPT performs in SSL tasks, as well as how these two approaches perform against each other. In this work, we investigate the performance of TAPT against five state-of-the-art ST approaches across five NLP tasks (§4). We empirically show that TAPT outperforms all state-of-the-art ST approaches on several tasks, suggesting that it should serve as a strong baseline for SSL methods. Previous research (Gururangan et al., 2020) has shown that TAPT can improve performance in fullysupervised settings. Our study goes further by showing that TAPT can be even more effective in SSL settings (§4). We next study the impact of using different 5614 amounts of labelled and unlabelled data for SSL (§5). Our experiments show that ST approaches are prone to suffering from insufficient labelled or unlabelled data, while TAPT is more robust across different combinations of labelled and unlabelled data sizes. Contrary to the common assumption that TAPT requires a large amount of data to perform well (e.g. Li et al., 2021b; Hou et al., 2022), our results show that TAPT improves performance with just a hundred unlabelled samples. We conduct further analysis on the impact of domain shifts in labelled or unlabelled data. While ST approaches generally suffer from domain shifts, TAPT is more robust and even benefits from domain shifts (§6). In summary, the main contributions of this paper are as follows: - An extensive empirical study to directly compare five state-of-the-art ST approaches and TAPT across various NLP tasks in SSL, with varying amounts of labelled and unlabelled data as well as the effect of domain shifts; - Practical insights learned about the limitations of ST approaches, alongside an exploration of the often-unrecognized yet impressive capacity of TAPT as a simple, stable and powerful SSL learner; - A fresh perspective for future SSL research by demonstrating that leveraging unsupervised signals from unlabelled texts presents a promising and effective approach alternative to dependence on pseudo labels. ## 2 Preliminaries 2.1 Task Adaptive Pre-Training (Tapt) LMs are adapted to downstream NLP tasks by finetuning (FT) on task-specific data. TAPT introduces a simple additional step before fine-tuning by continuing pre-training with a masked language modelling (MLM) objective (Devlin et al., 2019; Liu et al., 2019) on the task-specific data without requiring labels. The main advantage of TAPT is that it provides a simple way for the LM to explore the task space while it can easily make use of all available labelled and unlabelled data. ## 2.2 Self-Training (St) The core idea behind ST approaches is to utilise a teacher model trained on labelled examples to make predictions for unlabelled examples, and train a new student model with these predictions. Formally, let L ≜ {(x1, y1), . . . ,(xn, yn)} denote n labelled examples and U ≜ {x˜1*, . . . ,* x˜m} denote m unlabelled examples, where usually m ≫ n. The ST framework is trained with three main steps as follows. Step 1. A teacher model F, parameterized by a neural network Θ, is trained via minimizing the cross entropy loss ℓ on labelled examples L: $${\mathcal{L}}_{t e a c h e r}(L)=\sum_{x_{i},y_{i}\in L}\ell(y_{i},F(x_{i},\Theta)),\quad(1)$$ Step 2. 
The teacher model F is used to make predictions (referred to as "pseudo-labels") on the unlabelled examples U:

$$\tilde{y}_{i}=F(\tilde{x}_{i},\Theta),\tag{2}$$

where $\tilde{y}_{i}$ can be either the continuous logit or the discrete label induced by an ARGMAX operation.

Step 3. A student model G, parameterized by a fresh neural network Φ, is trained to fit labelled and pseudo-labelled examples:

$${\mathcal{L}}_{student}(L,U)=\sum_{x_{i},y_{i}\in L}\ell\big(y_{i},G(x_{i},\Phi)\big)+\sum_{\tilde{x}_{i},\tilde{y}_{i}\in U}\ell\big(\tilde{y}_{i},G(\tilde{x}_{i},\Phi)\big)\tag{3}$$

This process is repeated for a given number of times by treating the student as a new teacher to re-predict pseudo-labels as in eq. (2) and then training a new student with eq. (3). In practice, ST with techniques such as consistency regularization (Miyato et al., 2018; Clark et al., 2018; Berthelot et al., 2019b), strong data augmentation (Sohn et al., 2020; Xie et al., 2020b,a), and confidence thresholds (Sohn et al., 2020; Zhang et al., 2021; Berthelot et al., 2022) usually leads to substantial improvements in model performance.

## 3 Experimental Setup

Datasets. We experiment with five datasets used in previous related work for SSL (Gururangan et al., 2019; Chen et al., 2020b; Xie et al., 2020a; Li et al., 2021a; Gera et al., 2022), including IMDB (Maas et al., 2011), SST-2 (Wang et al., 2018), AG NEWS (Zhang et al., 2015), AMAZON REVIEW (McAuley and Leskovec, 2013), and YAHOO! ANSWER (Chang et al., 2008). Table 1 shows data statistics. We also provide descriptions and examples of the datasets in Appendix §A.1. We show the process for quantifying the similarity between datasets in Appendix §A.2. Adhering to previous work (e.g. Chen et al., 2020b; Wang et al., 2022), we sample the same amount of labelled data per class from the train set, given the labelled size, to form the labelled set. We re-sample the labelled data using the same five seeds for all different approaches and report the average performance with an error bar.

| Dataset | Task Type | Train Size | Dev. Size | Test Size | \|Y\| | L |
|---|---|---|---|---|---|---|
| IMDB (Maas et al., 2011) | Movie Review Sentiment | 23,000 | 2,000 | 25,000 | 2 | 149 |
| SST-2 (Wang et al., 2018) | Movie Review Sentiment | 60,000 | 7,349 | 872 | 2 | 37 |
| AG NEWS (Zhang et al., 2015) | News Topic Classification | 100,000 | 10,000 | 7,600 | 4 | 134 |
| AMAZON REVIEW (McAuley and Leskovec, 2013) | Product Review Sentiment | 250,000 | 25,000 | 650,000 | 5 | 79 |
| YAHOO! ANSWER (Chang et al., 2008) | Topic Classification | 500,000 | 50,000 | 60,000 | 10 | 32 |

TAPT. Our approach to *task adaptive pre-training* (TAPT) using ROBERTA-BASE (Liu et al., 2019) is to further pre-train on the training text corpus including labelled and unlabelled data (see Table 12 in Appendix for hyperparameter details). The model is then fine-tuned on the labelled data, where the [CLS] token representation is passed to an extra feed-forward layer for classification (see Table 13 in Appendix for hyperparameter details). The process of TAPT + FINE-TUNING is simply denoted by TAPT henceforth.

ST. We implement five state-of-the-art ST approaches, including VAT (Miyato et al., 2018), FixMatch (Sohn et al., 2020), Dash (Xu et al., 2021b), FlexMatch (Zhang et al., 2021), and AdaMatch (Berthelot et al., 2022) (see descriptions of these approaches in Appendix §B).
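Before the implementation details that follow, a minimal sketch of the generic teacher-student loop in Steps 1-3 is given below. It uses a tiny MLP on synthetic features purely to stay self-contained, filters pseudo-labels with a plain confidence threshold, and is an illustration of the generic recipe, not any of the specific ST methods evaluated in this paper.

```python
import torch
import torch.nn.functional as F
from torch import nn

def train(model, x, y, epochs=200, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model

def self_train(x_lab, y_lab, x_unl, rounds=3, threshold=0.9, num_classes=2):
    make_model = lambda: nn.Sequential(nn.Linear(x_lab.size(1), 32), nn.ReLU(),
                                       nn.Linear(32, num_classes))
    teacher = train(make_model(), x_lab, y_lab)            # Step 1: train teacher on L
    for _ in range(rounds):
        with torch.no_grad():
            probs = teacher(x_unl).softmax(-1)             # Step 2: pseudo-label U
        conf, pseudo = probs.max(-1)
        keep = conf >= threshold                           # simple confidence filter
        x_all = torch.cat([x_lab, x_unl[keep]])
        y_all = torch.cat([y_lab, pseudo[keep]])
        teacher = train(make_model(), x_all, y_all)        # Step 3: fresh student becomes teacher
    return teacher

# Toy data: 20 labelled and 200 unlabelled 2-D points.
torch.manual_seed(0)
x_lab = torch.randn(20, 2); y_lab = (x_lab[:, 0] > 0).long()
x_unl = torch.randn(200, 2)
model = self_train(x_lab, y_lab, x_unl)
```

The methods compared here differ mainly in how Step 2 filters or weights pseudo-labels (e.g. fixed versus adaptive confidence thresholds) and in which consistency or augmentation signals are added in Step 3.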
We use ROBERTA-BASE as the backbone, and the [CLS] token representation with an extra feed-forward layer is used for classification (see Table 14 in Appendix for hyperparameter details). Adhering to previous work (Xie et al., 2020a; Wang et al., 2022), back-translation (Ott et al., 2019) is used for data augmentation. Baselines. For reference, we also evaluate two baseline models that are only fine-tuned (from an off-the-shelf ROBERTA-BASE checkpoint) on: (1) the same labelled set as TAPT and ST (SUPERVISED); and (2) the whole training set (FULLY-SUPERVISED). ## 4 St Vs Tapt Overview. Table 2 shows the performance of TAPT against five state-of-the-art ST approaches and the baselines (SUPERVISED and FULLYSUPERVISED) across five datasets, each with two different sizes of labelled data for training following Wang et al. (2022). Overall, we observe that: (1) TAPT achieves highly competitive results compared with state-of-the-art ST approaches; and (2) TAPT gains more improvement compared to the SUPERVISED baselines when using fewer labelled samples. For our first finding, the experimental results show that TAPT outperforms all five state-of-theart ST approaches with lower variances on AMA-ZON REVIEW, and YAHOO! ANSWER, as shown in Table 2. For example, TAPT obtains a F1 score of 68.8% compared to the best ST approach's F1 score of 68.0% (using 500 labelled samples) and 71.5% compared to ST's 69.6% (using 2000 labelled samples) on YAHOO! ANSWER. For an example of the second finding, TAPT gains 3.6% F1 improvement over SUPERVISED (using 20 labelled samples) compared to 2.2% (using 100 labelled samples) on IMDB. Below we delve deeper into these two findings and discuss them in more detail. #1. TAPT **is a strong semi-supervised learner** and can outperform state-of-the-art ST **approaches.** Figure 1 shows how the performance of ST, TAPT, and SUPERVISED vary with respect to five different labelled sizes on each dataset, where two latest ST approaches (ADAMATCH and FLEXMATCH) are selected as representatives for ST. Experimental results further verify that TAPT has a consistent advantage over ADAMATCH and FLEXMATCH across different labelled sizes on AMAZON REVIEW and YAHOO! ANSWER. It is also worth noting that, while TAPT brings a stable improvement over SUPERVISED across all datasets with varying labelled sizes, ST can sometimes bring more substantial improvement, for example when | Method | IMDB | SST-2 | AG NEWS | AMAZON REVIEW | YAHOO! ANSWER | | | | | | |-------------------|----------|----------|-----------|-----------------|-----------------|---------|---------|---------|---------|---------| | 20 | 100 | 40 | 100 | 40 | 200 | 250 | 1000 | 500 | 2000 | | | ST Approaches VAT | 90.20.9 | 92.00.4. | 75.012.0 | 86.23.4 | 87.51.0 | 89.50.7 | 52.21.3 | 57.50.2 | 66.90.5 | 68.60.2 | | FIXMATCH | 93.40.1 | 93.40.1 | 37.38.5 | 66.421.3 | 75.68.7 | 88.80.6 | 55.91.1 | 59.00.5 | 67.51.0 | 69.60.4 | | DASH | 93.20.3 | 93.40.2 | 38.210.1 | 73.318.6 | 74.36.6 | 88.50.6 | 56.61.8 | 59.30.2 | 67.61.0 | 69.50.3 | | FLEXMATCH | 93.30.1 | 93.40.1 | 40.67.7 | 83.08.3 | 80.64.4 | 88.20.5 | 54.93.9 | 58.80.4 | 66.60.7 | 68.70.4 | | ADAMATCH | 94.40.4. 
| 94.70.2 | 42.613.3 | 83.14.4 | 82.75.9 | 88.60.4 | 55.52.8 | 59.00.7 | 68.00.7 | 69.50.3 | | SUPERVISED | 83.37.4 | 88.70.2 | 74.76.1 | 84.02.7 | 84.61.6 | 88.00.8 | 53.10.7 | 57.20.1 | 65.40.3 | 68.50.3 | | + TAPT | 86.92.8 | 90.90.6 | 82.64.0 | 85.42.4 | 84.01.3 | 88.70.7 | 58.40.7 | 60.60.1 | 68.80.7 | 71.50.3 | | FULLY-SUPERVISED | 93.90.1 | 93.00.6 | 94.80.1 | 65.00.2 | 75.30.2 | | | | | | | + TAPT | 94.00.2 | 93.50.3 | 95.00.1 | 65.60.1 | 75.40.1 | | | | | | ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) only a few hundreds of labelled samples are available from IMDB. However, we do not observe similar phenomena for ST on other datasets. Our experimental results demonstrate that TAPT is a simple, effective and strong learner for SSL tasks, and it should serve as a baseline for SSL tasks in NLP. #2. TAPT **tends to bring more improvements in** SSL than in FULLY-SUPERVISED **setting.** We further study the behaviour of TAPT *itself* under SSL, where we select SUPERVISED as the baseline rather than ST approaches. Figure 1 shows that the differences in performance (in absolute values) between TAPT (red lines) and SUPERVISED (green lines) generally increase as the labelled size decreases. To gain a better understanding of the impact of labelled data sizes, we plot the improvement from TAPT over SUPERVISED (in percentages) against the ratio between labelled size and unlabelled size (unlabelled size is fixed for each dataset) in Figure 2. We see that TAPT improves over SU-PERVISED further as the ratio of labelled and unlabelled sizes decreases, highlighting the trends of gaining greater improvement in low-resource SSL setting. This finding is complementary to prior works (e.g. in Howard and Ruder, 2018; Gururangan et al., 2020) that focus on TAPT's improvement from the FULLY-SUPERVISED perspective, represented by the rightmost red vertical line in Figure 2. The rising trend of the improvement is not monotonic as the labelled size is reduced. Rather it could provide insight into how TAPT improves over SU-PERVISED in SSL and inspire the design of new ![4_image_0.png](4_image_0.png) ## 5 Exploring The Limits Of St And Tapt In §4, our experimental results showed inconsistent results across datasets. For example, ST performs better on IMDB while TAPT achieves better results on AMAZON REVIEW and YAHOO! ANSWER. We hypothesize that this might be attributed to the exposure to different sizes of labelled or unlabelled data. To verify this hypothesis and shed light on the differences in performance between datasets, we compare TAPT and ST (using ADAMATCH and FLEXMATCH as representatives) by sampling different labelled and unlabelled sizes in IMDB, SST2, AMAZON REVIEW and YAHOO! ANSWER. Figure 3 visualizes the differences in performance between TAPT and ST, where each cell represents the macro-F1 performance difference of TAPT over ST (averaged across five seeds). In each case, the highest performance among FLEXMATCH and ADAMATCH is selected to represent the performance of ST. Overall, we observe that: (1) TAPT improves the fine-tuning performance even with a few hundred unlabelled examples; and (2) TAPT performs more stable across the different labelled and unlabelled data sizes than ST approaches. Below we provide a comprehensive analysis of the impact of labelled and unlabelled sizes. \#1. TAPT **works even with a few hundred unlabelled samples.** It is generally assumed that TAPT requires a large amount of unlabelled data to perform well (e.g. Li et al., 2021b; Hou et al., 2022). 
However, we surprisingly observe that TAPT can bring substantial improvement over SUPERVISED baseline even with a relatively small number of unlabelled samples, as shown in Figure 5. To explore the effectiveness of TAPT over SUPERVISED in the low-resource setting of unlabelled data, we ![4_image_1.png](4_image_1.png) select the performance of TAPT and SUPERVISED from the first column (the lowest unlabelled size) for each dataset in Figure 3 and plot their average performance over different labelled sizes. Figure 4 shows that TAPT improves over the SUPERVISED baseline with just one hundred or one thousand samples. For instance, TAPT achieves a 5.5% increase in F1 score compared to the SUPERVISED baseline when using only 1k unlabelled samples on YAHOO! ANSWER. Additionally, this performance is achieved without the need for large amounts of tokens in each sample, as training samples from SST-2, on average, contain only 9 tokens and training samples from YAHOO! ANSWER contain about 32 tokens (see examples in Table 6 of Appendix). \#2. Scarce labelled data and adequate unlabelled data. TAPT appears to be a more favourable choice than ST approaches in this setting. The bottom of each sub-figure in Figure 3 shows a clear labelled size boundary, below which FLEXMATCH and ADAMATCH are outperformed by TAPT with a large margin, regardless of datasets and unlabelled size used. This suggests that ST might not be able to efficiently handle large amounts of unlabelled data if labelled data ![5_image_0.png](5_image_0.png) do not provide adequate information. This might be attributed to *confirmation bias* (Tarvainen and Valpola, 2017; Arazo et al., 2020), which results from the accumulation of errors in the iterative ST process caused by incorrect pseudo-labels. The specific value of adequate labelled size boundary for ST approaches depends on the nature of the dataset. For example, even though both IMDB and SST-2 are binary classification tasks for movie review sentiment analysis, the labelled size boundary for SST-2 is higher (40 > 4), indicating that this boundary tends to increase as the task becomes more challenging. While it may be easy to obtain dozens of labelled data in this case, when the task becomes more intricate or contains noisy weak labels, it is important to be aware of this potential issue with ST approaches. TAPT could serve as an alternative in situations where collecting adequate labelled data for training is costly. We provide specific values of the performance of ST and TAPT, and further verify that this finding applies to other ST approaches in Appendix §D. \#3. Adequate labelled data and scarce unlabelled data. In this setting, TAPT is more robust, while ST has a greater chance of performing worse than the SUPERVISED baseline. In Figure 5, we plot the performance of ST approaches and TAPT against five different sizes of unlabelled data, grouped by size (using similar colours). We note | #Unl. | 10 | 50 | 100 | 500 | |-----------|----------|---------|----------|---------| | FLEXMATCH | 57.317.9 | 35.23.4 | 45.122.5 | 33.40.1 | | ADAMATCH | 53.322.1 | 36.86.1 | 33.50.2 | 33.60.3 | that ST approaches perform worse than their corresponding SUPERVISED baselines (represented by horizontal lines) until a certain amount of unlabelled data has been reached. For example, when the labelled size is 500, ST requires about 20k unlabelled samples to achieve the corresponding SUPERVISED baseline performance on YAHOO! ANSWER. 
On the other hand, TAPT generally outperforms SUPERVISED baselines demonstrating its robustness across various unlabelled sizes. To further quantify the model performance in case of scarce unlabelled and adequate labelled data, we choose the three lowest unlabelled sizes (the first three columns) excluding the lowest labelled size (the last row) in Figure 3 for each dataset. Our analysis shows that ST has 67%, 56% and 54% probability of falling below the SUPER-VISED baselines on SST-2, AMAZON REVIEW, and YAHOO! ANSWER respectively. Even on IMDB where ST generally performs well, it still has a probability of 33% to fall behind SUPER-VISED. In contrast, TAPT never performs worse than SUPERVISED in those cases. We provide computation details and comparative statistics in Appendix §C. The specific value of adequate unlabelled size boundary for ST approaches depends on the nature of the dataset as well as the labelled size. Figure 5 illustrates that as the size of the labelled data increases, ST approaches require more unlabelled data to surpass the SUPERVISED baselines. For example, on AMAZON REVIEW, ST trained with 100 labelled samples requires about 5k unlabelled samples to perform better than SUPERVISED, while ST trained with 10k labelled samples requires about 100k unlabelled samples. Adjusting the unlabelled size accordingly might be conducive to exploiting the full potential of ST approaches. \#4. Scarce labelled and unlabelled data. When the labelled data is insufficient, increasing unlabelled size is not helpful or even detrimental to ST approaches. This finding is well-illustrated in the last row of results on SST-2 shown in Figure 3. In ![6_image_0.png](6_image_0.png) Figure 6: Results of UDA experiments. Legends indicate domains of labelled training data. Orange/green represents the performance with/without domain shift. Average Macro-F1 score on test sets over five seeds is reported. Train (Lab.) Train (Unl.) #Lab. FLEXMATCH ADAMATCH TAPT S**UPERVISED** IMDB IMDB 100 93.40.1 94.70.2 90.90.6 88.70.2 ⋆ SST-2 100 89.11.2 (▼4.6%) 87.62.2 (▼7.5%) 89.90.6 (▼1.1%) 88.70.2 AMAZON REVIEW 100 92.10.7 (▼1.4%) 92.40.2 (▼2.4%) 91.40.3 (▲0.6%) 88.70.2 IMDB IMDB 200 93.50.1 93.60.1 91.80.3 90.30.4 ⋆ SST-2 200 89.52.4 (▼4.3%) 88.91.0 (▼5.0%) 90.30.4 (▼1.6%) 90.30.4 AMAZON REVIEW 200 92.50.4 (▼1.1%) 92.70.5 (▼1.0%) 92.10.2 (▲0.3%) 90.30.4 SST-2 SST-2 100 83.08.3 83.14.4 85.42.4 84.02.7 ⋆ IMDB 100 46.72.1 (▼43.7%) 49.27.3 (▼40.8%) 88.50.9 (▲3.6%) 84.02.7 AMAZON REVIEW 100 46.44.9 (▼44.1%) 48.211.0 (▼42.0%) 88.90.9 (▲4.1%) 84.02.7 SST-2 SST-2 200 87.23.9 89.50.9 88.60.9 86.80.3 ⋆ IMDB 200 62.77.4 (▼28.1%) 61.02.8 (▼31.8%) 89.11.1 (▲0.6%) 86.80.3 AMAZON REVIEW 200 61.87.7 (▼29.1%) 56.010.3 (▼17.4%) 89.41.0 (▲0.9%) 86.80.3 Table 4: Results of STL experiments. We report the average Macro-F1 score on the test set across five seeds, with standard deviations as subscripts. Blue represents the best result for each row. Stars highlight rows without domain shifts. Arrows in colours stand for the changes in performances against the star row result within each cell. Table 5: A summary of domain adaptation, where the distribution of source and target domains are different. other words, reducing the size of unlabelled data could be beneficial for ST approaches when the labelled size is inadequate. We further zoom in on this phenomenon in Table 3 by selecting 4 fixed labelled and 500 unlabelled samples, and gradually removing unlabelled samples on IMDB. 
This is a stark contrast to the case where more unlabelled data is beneficial for ST approaches when adequate labelled data is available. Meanwhile, TAPT generally benefits from training on more in-domain unlabelled data, following the scaling law in LMs (Kaplan et al., 2020; Hoffmann et al., 2022). \#5. Adequate labelled and unlabelled data. Both ST and TAPT have demonstrated the ability to exploit unlabelled data in this setting. Figure 3 shows that ST dominates in IMDB when more than | Task | Lab. | Unl. | |--------------------------------|--------|--------| | Semi-supervised Learning | Target | Target | | Unsupervised Domain Adaptation | Source | Target | | Self-taught Learning | Target | Source | 10 labelled and 100 unlabelled samples are available. On the other hand, TAPT generally performs better than ST on AMAZON REVIEW and YAHOO! ANSWER, indicating that the answer to which approach is better depends on the nature of the dataset and task. As labelled and unlabelled data size increase, the difference between ST and TAPT shrinks (colours fade and lines converge in Figures 3 and 5). As the labelled data in size reaches the unlabelled data, the method of ST reduces to FULLYSUPERVISED, which is generally outperformed by TAPT (Gururangan et al., 2020). ## 6 Domain Adaptation We next investigate how ST and TAPT compare in the presence of domain shifts between labelled and unlabelled data in two additional settings (refer to Table 5). First, we experiment with the *Unsupervised Domain Adaptation* (UDA) setting, where domain shifts exist between the labelled data from a source domain and the unlabelled data from the target domain (Ben-David et al., 2010; Saito et al., 2018; Ramponi and Plank, 2020). Then, we experiment with *Self-taught Learning* (STL) (Raina et al., 2007) in a domain adaptation setting, where the unlabelled data come from the source domain and the labelled data from the target domain. In both settings, we use the (labelled) validation and test sets from the target domain. Validation and test sets are excluded from any pool of labelled or unlabelled train data. \#1. Unsupervised Domain Adaptation (UDA). In this setting, we use two movie sentiment datasets, IMDB and SST-2, as the source and target domain (and vice versa) with two different sizes of labelled data (i.e. 100 and 200). Figure 6 depicts the performance of ST and TAPT in UDA. In case of domain shifts, we observe that FLEXMATCH and ADAMATCH fail to deliver satisfactory results and their performance drops to the level of random guessing, with a F1 score of 33% across all labelled sizes and datasets. This highlights the vulnerability of ST approaches in UDA. In contrast, TAPT demonstrates robust performance even with domain shifts, on par with its own SSL performance without domain shifts. Additionally, TAPT even benefits from training on the source domain. For instance, training on IMDB (source domain) further improves the performance of TAPT on SST-2 (target domain) from 86.4% to 89.6% with 100 labelled samples and from 88.6% to 89.7% with 200 labelled samples. \#2. Self-taught Learning (STL). We select IMDB, SST-2, and AMAZON REVIEW for this setting. Although they are all sentiment reviews datasets, IMDB and AMAZON REVIEW are more closely related (see the similarity analysis in Table 7 of Appendix) and arguably contain richer language than SST-2 (see examples in Table 6 of Appendix). Table 4 presents the performance of ST and TAPT in STL setting. 
We find that domain shifts in unlabelled data consistently hurt the performance of ST, depending on the similarity between the source and target domains. The performance of ST drops sharply if the source and target domains are vastly different. For example, when SST-2 is used as the labelled data (target domain) and IMDB or AMAZON REVIEW is used as unlabelled data (source domain), the performance of ST falls from over 80% to around 60% or lower. On the other hand, when using SST-2 and IMDB as the source and target domains, the performance of ST drops by a much smaller margin (a few percentage points). This shows the importance of training ST approaches using more informative labelled data, which is also consistent with our findings in §5. TAPT in the STL setting is in fact a variation of domain adaptive pre-training (Beltagy et al., 2019; Gururangan et al., 2020) applied to SSL tasks. Table 4 shows that the performance of TAPT remains stable when there exist domain shifts in the unlabelled data. Using more informative unlabelled data can further improve the performance of TAPT. For example, using IMDB or AMAZON REVIEW as unlabelled data when SST-2 is a target task, we see an improvement of about 4% with 100 labelled samples. However, it is worth noting that ST methods can still be competitive compared to TAPT if the source and target domains are relatively similar. For instance, when using AMAZON REVIEW and IMDB as the source and target domains, ST still achieves better results than TAPT. ## 7 Related Work Leveraging unlabelled data by Continuing Pretraining. Previous work has shown that further pre-training LMs on the unlabelled data of a task (e.g. Alsentzer et al., 2019; Mehri et al., 2020; Margatina et al., 2022) or in-domain data (e.g. Logeswaran et al., 2019; Gururangan et al., 2020; Xue et al., 2021) is beneficial to downstream tasks. However, it is unknown whether this is valid in SSL settings. Previous studies in computer vision (Zoph et al., 2020) and speech recognition (Xu et al., 2021a) have compared PT and ST. However, our study has a different focus, specifically, we compare TAPT and ST in NLP tasks. Concurrently to our work, Shi and Lipani (2023) put forward prompt-based continued pre-training, which primarily aims to enhance the performance of promptbased fine-tuning techniques (Schick and Schütze, 2021; Gao et al., 2021). This approach outperforms these state-of-the-art ST approaches (Sohn et al., 2020; Xu et al., 2021b; Zhang et al., 2021; Berthelot et al., 2022) as well as the conventional CLS-based fine-tuning with TAPT. Semi-supervised Learning. Recent work in SSL has demonstrated great progress in effectively exploiting unlabelled data. A wide range of approaches has been proposed including Pseudo Labeling (Lee et al., 2013), Temporal Ensemble (Laine and Aila, 2017), Mean Teacher (Tarvainen and Valpola, 2017), Virtual Adversarial Training (Miyato et al., 2018), FixMatch (Sohn et al., 2020). A major issue for ST approaches is *confirmation* bias, where the student model would accumulate errors from the teacher model when learning with inaccurate pseudo-labels (e.g. Wang et al., 2021; Goel et al., 2022; Chen et al., 2022). While many efforts towards ST (e.g. Ruder and Plank, 2018; Gururangan et al., 2019; Li et al., 2019; Chen et al., 2020b; Meng et al., 2020; Chen et al., 2020a; He et al., 2020; Gera et al., 2022) have been made in NLP, the performance of ST approaches across various labelled and unlabelled sizes has yet to be thoroughly explored. 
Although Mukherjee and Awadallah (2020); Li et al. (2021b) noted that training ST approaches from TAPT checkpoints can improve the performance, the performance of TAPT in SSL tasks has not been either well-researched by previous works or compared with state-of-the-art ST approaches. ## 8 Conclusion In this work, we shed light on how TAPT performs against state-of-the-art ST approaches in various SSL settings. Our experiments reveal that TAPT achieves strong and robust performance, even with just a few hundred unlabelled examples. We further demonstrate that the ST approaches are vulnerable to small amounts of either labelled or unlabelled data. We also find that TAPT is more robust than ST approaches in joint domain adaptation and SSL settings. Overall, our empirical study demonstrates that TAPT is a strong SSL learner, competitive to more sophisticated ST approaches. In future work, we plan to further explore the potential of TAPT with unsupervised learning signals. ## Limitations For easier comparison with previous work, we only focus on text classification tasks, while ST can also be applied to a variety of NLP tasks, such as language generation, conversational systems and commonsense reasoning (Kedzie and McKeown, 2019; He et al., 2020; Shi et al., 2022a,b; Hendriksen et al., 2022). We also assume that the datasets are roughly balanced. However, real-world datasets are usually class-imbalanced (Li et al., 2011), which might impact the performance of TAPT and ST. While this is out of the scope of this paper, we believe that this is an interesting avenue for future work. Additionally, different labelled and unlabelled sizes may impact the performance of ST approaches in the domain shift setting. However, this doesn't alter our conclusion that the effectiveness of ST approaches significantly fluctuates across different scenarios. ## References Emily Alsentzer, John Murphy, William Boag, WeiHung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. In *Proceedings of the 2nd* Clinical Natural Language Processing Workshop, pages 72–78, Minneapolis, Minnesota, USA. Association for Computational Linguistics. Eric Arazo, Diego Ortego, Paul Albert, Noel E O'Connor, and Kevin McGuinness. 2020. Pseudolabeling and confirmation bias in deep semisupervised learning. In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 789–798, Melbourne, Australia. Association for Computational Linguistics. Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615– 3620, Hong Kong, China. Association for Computational Linguistics. Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010. A theory of learning from different domains. *Machine learning*, 79(1):151–175. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, page 41–48, New York, NY, USA. 
Association for Computing Machinery. David Berthelot, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. 2019a. Remixmatch: Semi-supervised learning with distribution alignment and augmentation anchoring. arXiv preprint arXiv:1911.09785. David Berthelot, Nicholas Carlini, Ian J. Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. 2019b. Mixmatch: A holistic approach to semisupervised learning. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 5050–5060. David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, and Alexey Kurakin. 2022. Adamatch: A unified approach to semi-supervised learning and domain adaptation. In International Conference on Learning Representations. Rui Cai and Mirella Lapata. 2019. Semi-supervised semantic role labeling with cross-view training. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1018– 1027, Hong Kong, China. Association for Computational Linguistics. Ming-Wei Chang, Lev Ratinov, Dan Roth, and Vivek Srikumar. 2008. Importance of semantic representation: dataless classification. In *Proceedings of the* 23rd national conference on Artificial intelligenceVolume 2, pages 830–835. Olivier Chapelle, Bernhard Scholkopf, and Alexander Zien. 2009. Semi-supervised learning (chapelle, o. et al., eds.; 2006)[book reviews]. *IEEE Transactions* on Neural Networks, 20(3):542–542. Baixu Chen, Junguang Jiang, Ximei Wang, Pengfei Wan, Jianmin Wang, and Mingsheng Long. 2022. Debiased self-training for semi-supervised learning. In *Advances in Neural Information Processing Systems*, NIPS'22. Jiaao Chen, Zhenghui Wang, Ran Tian, Zichao Yang, and Diyi Yang. 2020a. Local additivity based data augmentation for semi-supervised NER. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 1241–1251, Online. Association for Computational Linguistics. Jiaao Chen, Zichao Yang, and Diyi Yang. 2020b. MixText: Linguistically-informed interpolation of hidden space for semi-supervised text classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2147– 2157, Online. Association for Computational Linguistics. Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1914–1925, Brussels, Belgium. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota. Association for Computational Linguistics. Xin Dong and Gerard de Melo. 2019. A robust selflearning framework for cross-lingual text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6306–6310, Hong Kong, China. Association for Computational Linguistics. 
Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816–3830, Online. Association for Computational Linguistics. Ariel Gera, Alon Halfon, Eyal Shnarch, Yotam Perlitz, Liat Ein-Dor, and Noam Slonim. 2022. Zero-shot text classification with self-training. In *Proceedings* of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Arushi Goel, Yunlong Jiao, and Jordan Massiah. 2022. Pars: Pseudo-label aware robust sample selection for learning with noisy labels. arXiv preprint arXiv:2201.10836. Yves Grandvalet and Yoshua Bengio. 2004. Semisupervised learning by entropy minimization. *Advances in neural information processing systems*, 17. Suchin Gururangan, Tam Dang, Dallas Card, and Noah A. Smith. 2019. Variational pretraining for semi-supervised text classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5880–5894, Florence, Italy. Association for Computational Linguistics. Suchin Gururangan, Ana Marasovic, Swabha ´ Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342–8360, Online. Association for Computational Linguistics. Junxian He, Jiatao Gu, Jiajun Shen, and Marc'Aurelio Ranzato. 2020. Revisiting self-training for neural sequence generation. In International Conference on Learning Representations. Mariya Hendriksen, Maurits Bleeker, Svitlana Vakulenko, Nanne van Noord, Ernst Kuiper, and Maarten de Rijke. 2022. Extending clip for category-to-image retrieval in e-commerce. In Advances in Information Retrieval: 44th European Conference on IR Research, ECIR 2022, Stavanger, Norway, April 10–14, 2022, Proceedings, Part I, page 289–303, Berlin, Heidelberg. Springer-Verlag. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556. Zejiang Hou, Julian Salazar, and George Polovets. 2022. Meta-Learning the Difference: Preparing Large Language Models for Efficient Adaptation. *Transactions of the Association for Computational Linguistics*, 10:1249–1265. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339, Melbourne, Australia. Association for Computational Linguistics. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. *arXiv* preprint arXiv:2001.08361. Chris Kedzie and Kathleen McKeown. 2019. A good sample is hard to find: Noise injection sampling and self-training for neural language generation models. In Proceedings of the 12th International Conference on Natural Language Generation, pages 584–593, Tokyo, Japan. Association for Computational Linguistics. Thomas N. Kipf and Max Welling. 2017. 
Semisupervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR). Samuli Laine and Timo Aila. 2017. Temporal ensembling for semi-supervised learning. In *5th International Conference on Learning Representations,* ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Dong-Hyun Lee et al. 2013. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on challenges in representation learning, ICML, page 896. Changchun Li, Ximing Li, and Jihong Ouyang. 2021a. Semi-supervised text classification with balanced deep representation distributions. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5044–5053, Online. Association for Computational Linguistics. Shiyang Li, Semih Yavuz, Wenhu Chen, and Xifeng Yan. 2021b. Task-adaptive pre-training and self-training are complementary for natural language understanding. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 1006–1015, Punta Cana, Dominican Republic. Association for Computational Linguistics. Shoushan Li, Zhongqing Wang, Guodong Zhou, and Sophia Yat Mei Lee. 2011. Semi-supervised learning for imbalanced sentiment classification. In *Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Volume* Three, IJCAI'11, page 1826–1831. AAAI Press. Zhenghua Li, Xue Peng, Min Zhang, Rui Wang, and Luo Si. 2019. Semi-supervised Domain Adaptation for Dependency Parsing. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics, pages 2386–2395, Florence, Italy. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, and Honglak Lee. 2019. Zero-shot entity linking by reading entity descriptions. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 3449–3460, Florence, Italy. Association for Computational Linguistics. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In *Proceedings of the 49th Annual Meeting of the* Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, USA. Association for Computational Linguistics. Katerina Margatina, Loic Barrault, and Nikolaos Aletras. 2022. On the importance of effectively adapting pretrained language models for active learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 825–836, Dublin, Ireland. Association for Computational Linguistics. Julian McAuley and Jure Leskovec. 2013. Hidden factors and hidden topics: Understanding rating dimensions with review text. In *Proceedings of the 7th* ACM Conference on Recommender Systems, RecSys '13, page 165–172, New York, NY, USA. Association for Computing Machinery. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. 
In *Proceedings of the Human Language Technology Conference* of the NAACL, Main Conference, pages 152–159, New York City, USA. Association for Computational Linguistics. Shikib Mehri, Mihail Eric, and Dilek Z. Hakkani-Tür. 2020. Dialoglue: A natural language understanding benchmark for task-oriented dialogue. *ArXiv*, abs/2009.13570. Yu Meng, Yunyi Zhang, Jiaxin Huang, Chenyan Xiong, Heng Ji, Chao Zhang, and Jiawei Han. 2020. Text classification using label names only: A language model self-training approach. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9006–9017, Online. Association for Computational Linguistics. Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2018. Virtual adversarial training: a regularization method for supervised and semisupervised learning. *IEEE transactions on pattern* analysis and machine intelligence, 41:1979–1993. Subhabrata Mukherjee and Ahmed Awadallah. 2020. Uncertainty-aware self-training for few-shot text classification. In *Advances in Neural Information Processing Systems*, volume 33, pages 21199–21212. Curran Associates, Inc. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of NAACL-HLT* 2019: Demonstrations. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8). Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y. Ng. 2007. Self-taught learning: Transfer learning from unlabeled data. In *Proceedings of the 24th International Conference on Machine Learning*, ICML '07, page 759–766, New York, NY, USA. Association for Computing Machinery. Alan Ramponi and Barbara Plank. 2020. Neural Unsupervised Domain Adaptation in NLP—A Survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6838–6855, Barcelona, Spain (Online). International Committee on Computational Linguistics. Sebastian Ruder and Barbara Plank. 2018. Strong Baselines for Neural Semi-Supervised Learning under Domain Shift. In *Proceedings of the 56th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1044–1054, Melbourne, Australia. Association for Computational Linguistics. Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. 2018. Maximum classifier discrepancy for unsupervised domain adaptation. In *Proceedings of the IEEE conference on computer vision* and pattern recognition, pages 3723–3732. Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269, Online. Association for Computational Linguistics. Zhengxiang Shi, Yue Feng, and Aldo Lipani. 2022a. Learning to execute actions or ask clarification questions. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 2060–2070, Seattle, United States. Association for Computational Linguistics. Zhengxiang Shi and Aldo Lipani. 2023. Don't stop pretraining? make prompt-based fine-tuning powerful learner. *arXiv preprint arXiv:2305.01711*. Zhengxiang Shi, Qiang Zhang, and Aldo Lipani. 2022b. Stepgame: A new benchmark for robust multi-hop spatial reasoning in texts. 
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 11321–11329. Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. 2020. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS'20, Red Hook, NY, USA. Curran Associates Inc. Antti Tarvainen and Harri Valpola. 2017. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In *Proceedings of the 31st International Conference on Neural Information Processing Systems*, NIPS'17, page 1195–1204, Red Hook, NY, USA. Curran Associates Inc. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Ximei Wang, Jinghan Gao, Mingsheng Long, and Jianmin Wang. 2021. Self-tuning for data-efficient deep learning. In *International Conference on Machine* Learning (ICML). Yidong Wang, Hao Chen, Yue Fan, Wang SUN, Ran Tao, Wenxin Hou, Renjie Wang, Linyi Yang, Zhi Zhou, Lan-Zhe Guo, Heli Qi, Zhen Wu, Yu-Feng Li, Satoshi Nakamura, Wei Ye, Marios Savvides, Bhiksha Raj, Takahiro Shinozaki, Bernt Schiele, Jindong Wang, Xing Xie, and Yue Zhang. 2022. USB: A unified semi-supervised learning benchmark for classification. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track. Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V. Le. 2020a. Unsupervised data augmentation for consistency training. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS'20, Red Hook, NY, USA. Curran Associates Inc. Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le. 2020b. Self-training with noisy student improves imagenet classification. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10687–10698. Qiantong Xu, Alexei Baevski, Tatiana Likhomanenko, Paden Tomasello, Alexis Conneau, Ronan Collobert, Gabriel Synnaeve, and Michael Auli. 2021a. Selftraining and pre-training are complementary for speech recognition. In *ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pages 3030–3034. IEEE. Yi Xu, Lei Shang, Jinxing Ye, Qi Qian, Yu-Feng Li, Baigui Sun, Hao Li, and Rong Jin. 2021b. Dash: Semi-supervised learning with dynamic thresholding. In *International Conference on Machine Learning*, pages 11525–11536. PMLR. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics. David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd Annual Meeting of the Association for Computational Linguistics, pages 189–196, Cambridge, Massachusetts, USA. Association for Computational Linguistics. 
Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki. 2021. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. In *Proceedings of the 35th International Conference on* Neural Information Processing Systems, volume 34. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS'15, page 649–657, Cambridge, MA, USA. MIT Press. Barret Zoph, Golnaz Ghiasi, Tsung-Yi Lin, Yin Cui, Hanxiao Liu, Ekin D. Cubuk, and Quoc V. Le. 2020. Rethinking pre-training and self-training. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS'20, Red Hook, NY, USA. Curran Associates Inc. ## Appendix Overview The appendix is structured as follows: Appendix §A provides a brief description and example for each dataset (subsection §A.1). Additionally, a similarity analysis among datasets and an illustration of overlaps between IMDB and AMA-ZON REVIEW are included (subsection §A.2). Appendix §B presents a brief description of stateof-the-art ST approaches. Appendix §C includes a supplementary Table that examines the effect of low unlabelled data sizes. Appendix §D presents additional experiments to verify our findings using other ST approaches. Appendix §E includes additional experiments to train ST approaches using TAPT checkpoints. Appendix §F provides implementation details and hyperparameters for TAPT, ST, and FT methods used in our experiments. ## A Datasets In this section, we briefly introduce the datasets used in our work and provide additional analysis of the similarity among them. Specifically, we provide four examples to demonstrate the overlap between IMDB and AMAZON REVIEW, as a supplement to our domain adaptation analysis (§6). ## A.1 Description In this section, we briefly introduce IMDB, SST2, AG NEWS, AMAZON REVIEW, and YAHOO! ANSWER datasets. Table 6 list examples for each dataset. IMDB. The IMDB dataset (Maas et al., 2011) contains a collection of 50 000 reviews from the Internet Movie Database, with no more than 30 reviews per movie. This dataset contains an equal number of positive and negative reviews, yielding a 33% Marco-F1 score for random guessing. There are 25 000 and 25 000 for training and testing, respectively. We follow Wang et al. (2022) to split the dataset by selecting 12 500 samples and 1 000 samples per class from the train set to form a train and validation set, respectively. SST-2. The SST-2 dataset (Wang et al., 2018) consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. Similar to IMDB, this is also a binary classification task. There are 67 349 and 872 for training and testing. We select 60 000 and 7 349 samples from the train set to form a train and validation set, respectively, where the validation set contains 3 675 and 3 674 samples for two classes, respectively. AG NEWS. The AG NEWS topic classification dataset is constructed by Zhang et al. (2015), where 4 classes are used. Each class contains 30 000 training samples and 1 900 test samples. We follow Wang et al. (2022) to split the dataset by selecting 25 000 samples and 2 500 samples per class from the train set samples to form a train and validation set, respectively. AMAZON R**EVIEW**. 
The AMAZON REVIEW dataset (McAuley and Leskovec, 2013) is a sentiment classification dataset with five classes. There are 600 000 train samples and 130 000 test samples per class. We follow Wang et al. (2022) to split the dataset by selecting 50 000 samples and 5 000 samples per class from the train set to form a train and validation set, respectively.

YAHOO! ANSWER. The YAHOO! ANSWER dataset (Chang et al., 2008) is a topic classification dataset with ten classes. There are 140 000 train samples and 6 000 test samples per class. We follow Wang et al. (2022) to split the dataset by selecting 50 000 samples and 5 000 samples per class from the train set to form a train and validation set, respectively.

## A.2 Dataset Similarity

We provide an analysis of the vocabulary overlap of the datasets, as shown in Figure 7. Additionally, in Table 7, we provide some examples to illustrate the overlap between IMDB and AMAZON REVIEW. As shown in Table 6, although both the SST-2 and IMDB datasets are sentiment analysis tasks for movie reviews, the SST-2 dataset contains shorter and vaguer sentences than the IMDB dataset. This difference could be a potential reason for the poor performance of ST approaches in the UDA setting (§6). In contrast, the AMAZON REVIEW dataset, which is a product review sentiment analysis dataset, is more similar to the IMDB dataset than the SST-2 dataset, as shown in Table 7. This suggests a potential reason why ST remains competitive when IMDB and AMAZON REVIEW are paired in the STL setting (§6).

## B ST Frameworks

VAT. VAT (Miyato et al., 2018) proposed a regularization technique that forces pairs of data points that are very close in the input space to be close to each other in the output space. VAT adds small perturbations to the input data and forces the model to produce similar predictions.

FIXMATCH. FIXMATCH (Sohn et al., 2020) generates artificial labels using both consistency regularization and pseudo-labelling, where the artificial labels are produced based on weakly-augmented unlabelled data. These artificial labels are then used as targets to train the model on strongly-augmented unlabelled data. FIXMATCH only retains an artificial label if the model assigns a high probability to one of the possible classes.

DASH. DASH (Xu et al., 2021b) extends FIXMATCH by introducing a dynamically adjusted loss threshold to select a subset of training examples from the unlabelled data for performing SSL.

FLEXMATCH. FLEXMATCH (Zhang et al., 2021) also extends FIXMATCH by introducing the concept of curriculum learning (Bengio et al., 2009) to flexibly adjust thresholds for different classes at each time step and select unlabelled data and their pseudo labels that are more likely to be informative.

ADAMATCH. ADAMATCH (Berthelot et al., 2022) aims to solve domain adaptation problems in SSL and to build a high-accuracy model that trains and tests on different data distributions. ADAMATCH builds on FIXMATCH and introduces a relative confidence threshold and a modified distribution alignment from Berthelot et al. (2019a).

## C Probability of Performing Worse than SUPERVISED

In §5, we discuss that we select the model performance with the three lowest unlabelled sizes (the first three columns in Figure 3) for each dataset and exclude the model performance with the lowest labelled size (the last row in Figure 3). This results in 9 cells in IMDB, 3 cells in SST-2, 9 cells in AMAZON REVIEW, and 12 cells in YAHOO!
ANSWER, where TAPT has one run per cell and ST (FLEXMATCH and ADAMATCH) has two runs per cell. We consider a run to be a failure if its performance is worse than its corresponding SUPERVISED baseline. Table 8 lists the probability of ST and TAPT falling below the SUPERVISED baseline with selected combinations of labelled and unlabelled sizes.

## D Further Validation With Other ST Approaches

In this section, we conduct additional experiments on ST approaches, including VAT, DASH, and FIXMATCH, to demonstrate that our findings apply to other ST approaches as well. In Table 9, we select several combinations of labelled and unlabelled sizes on the IMDB, SST-2, AMAZON REVIEW, and YAHOO! ANSWER datasets. Our experimental results show that other ST approaches do not perform well when the labelled size is low, and that they have a high probability of performing worse than the SUPERVISED baselines when the unlabelled size is low. This suggests that poor performance when the labelled or unlabelled size is inadequate may be a common problem of state-of-the-art ST approaches.

## E Training ST Approaches With TAPT Checkpoints

Previous works (Mukherjee and Awadallah, 2020; Li et al., 2021b) have suggested that training ST approaches from a TAPT checkpoint may be beneficial. Here we provide additional experiments that train ST approaches from TAPT checkpoints to further corroborate our findings. Table 10 shows that TAPT outperforms ADAMATCH +TAPT and FLEXMATCH +TAPT with two different labelled sizes on the YAHOO! ANSWER dataset. Table 11 shows that training ST approaches from TAPT checkpoints can improve the performance of ST but does not solve the problems that arise when labelled or unlabelled data is inadequate. Specifically, the performance of ST +TAPT is still poor when labelled data is insufficient, as discussed in §5. Meanwhile, in Table 11, ST +TAPT can still be outperformed by the SUPERVISED baselines when unlabelled data is inadequate, while TAPT consistently outperforms the SUPERVISED baselines. When the labelled size is 10, ST trained with fewer unlabelled samples tends to perform better, indicating that reducing the amount of unlabelled data can be helpful, as discussed in §5.

## F Implementation Details

We consistently use five random seeds, ranging from 1 to 5, for all algorithms. The sampled labelled data is the same for all algorithms for a given seed. The development and test sets remain unchanged for all different labelled and unlabelled data sizes. Our model implementation uses open-source libraries including HuggingFace Transformers2, Fairseq3, and USB4. Our TAPT experiments are performed on 8×32GB V100 GPUs, with a batch size of 16 per device and 2 gradient accumulation steps. Table 12 lists the hyperparameters used for the TAPT phase. Table 13 lists the hyperparameters used for the fine-tuning phase. Table 14 lists the hyperparameters used for ST approaches.
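As a concrete illustration of the TAPT step described above, the following is a minimal sketch of continued masked-language-model pre-training with the HuggingFace Transformers library. It is not the authors' released training script: the toy `unlabelled_texts` list and the `UnlabelledDataset` wrapper are illustrative stand-ins, the `roberta-base` checkpoint is an assumption, and the hyperparameter values only loosely follow Tables 12 and 13.

```python
import torch
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Stand-in for the unlabelled in-task text pool.
unlabelled_texts = ["An unlabelled in-task review ...", "Another unlabelled example ..."]

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

class UnlabelledDataset(torch.utils.data.Dataset):
    """Wraps raw texts so the MLM data collator can pad and mask them."""
    def __init__(self, texts):
        self.enc = tokenizer(texts, truncation=True, max_length=256)
    def __len__(self):
        return len(self.enc["input_ids"])
    def __getitem__(self, i):
        return {k: v[i] for k, v in self.enc.items()}

# Dynamic masking with the usual 15% masking probability.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir="tapt-roberta", num_train_epochs=100,
                         per_device_train_batch_size=16, gradient_accumulation_steps=2,
                         learning_rate=1e-4, weight_decay=0.01, warmup_ratio=0.06)
Trainer(model=model, args=args, train_dataset=UnlabelledDataset(unlabelled_texts),
        data_collator=collator).train()
# The adapted checkpoint in `tapt-roberta` would then be fine-tuned on the labelled
# data with a standard classification head, as in the supervised baselines.
```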
| Dataset | Example | |---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | IMDB | I watched this movie after seeing other comments on IMDb, even convincing my wife that it was a "unique horror movie." I wanted to like this movie, but was unable to.The "love story" was good, but the horror aspect was quite bad. If the story was just about a young man who fell in love with a girl suffering from parasomnia, then it would have been a better movie.The care centre stretched credulity well past the limits, in fact it was quite ridiculous. The doctor happily ignors privacy laws and professionalism. A nurse goes into a room for a routine feeding of a dangerous patient (without security escort), and drops the tray and runs out of the room screaming for no apparent reason. The forensic patient (and the film's villain) is tied up in a standing position fully clothed - apparently for years? None of it makes much sense.The movie even had some actors that I've liked in other things, such as the detectives, but still I can't recommend this movie. | | SST-2 | a rewarding work of art for only the most patient and challenge-hungry moviegoers. | | AG NEWS | Teen flies in plane #39;s landing gearA homeless teenager who hid in the landing gear of a passenger plane survived a 700-kilometre flight across south-western China but his companion fell and probably died, state media reported on Friday. | | AMAZON REVIEW | THIS is MUSIC at its BESTRob Dougan has done it. He's crafted musical perfection, or close to it anyway. I have finally found the music I've been waiting for my whole life in this album - Rob D you are a genius. I think a lot of us wanted to know more about this guy as soon as we heard the track playing to the "Woman in the Red Dress" scene. Now I know why the Wachowski brothers have enlisted his musical talents to flesh out their movies.I know I should be trying to write a more helpful, objective review but I can do nothing but wax poetic for Rob Dougan and his debut album. He has mixed classical melodies with awesome electric beats and it all comes together in an audio orgy. Just buy the album already and let's get Rob some more mainstream recognition. | | YAHOO! ANSWER | Does anybody know a great deal about angels? I'm looking for names, if they're good or bad, what they look like, etc. The more detail the better. All religions accepted | Table 7: Similarity analysis between IMDB and AMAZON REVIEW with four examples that highlight the overlap. 
| IMDB | AMAZON REVIEW | |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | I loved this movie since I was 7 and I saw it on the opening day. It was so touching and beautiful. I strongly recommend seeing for all. It's a movie to watch with your family by far. My MPAA rating: PG-13 for thematic elements, prolonged scenes of disastor, nudity/sexuality and some language. | This is a very touching, spiritual movie! When I first saw this film, [...]. I was deeply moved by this motion picture, and the DVD brings the story to your own home. The bonus materials could be better, but the main part of the DVD is the actual movie. Great, great, great film... [...] | | Pacino is over-the-top but to good effect as he's clearly having loads of fun. Beatty is great [...] The lighting, velvet overtones and smog/smoke combine to create a great effect.There are some really funny cameos [...] Highly recommended. 4.5/5 stars. [...] | Makes a great gift! We bought this book for my dad for Father's Day this year, and thought he would have fun reading it since he has four granddaughters. He loved it and has even selected stories to read to the girls during over-nights with Grandpa and Grandma. I highly recommend it as a great gift. | | The late [...] scripted this tale of terror and it was absolutely one of the scariest movies I ever saw as a kid. (I had to walk MILES just to see a movie, and it was usually dark when I emerged from the theater; seeing a horror movie was always unnerving [...] | Movia ... please This movie is a masterpiece of terror & suspence & Beautifully filmed & acted.Comparisons to reality are not allowed when reviewing films of this caliber. Your reaction (though it MAY be sarcastic) is EXACT proof of it's genius! Watch it again...and this time bask in all it's glory! | | Fabulous actors, beautiful scenery, stark reality [...] I tried to buy the video for several years, finally bought it used from a video store that went out of business. But Yippee! The DVD is now for sale, I purchased it on amazon.com. Not cheap, but well worth it to me. [...] | Well worth the import price. My first impression of this album was a good one, but as time went on it came to grow on me more and more. This is certainly one of the better Costes albums. The mixing is nothing revolutionary, but it is well done and all tracks flow into each other very well. [...]. | Table 8: Results on the effect of low unlabelled sizes on ST and TAPT. Failure means performing worsen than SUPERVISED. | Task | #Unl. | #Lab. | Prob. of ST Failure | Prob. of TAPT Failure | |---------------|--------------|-----------------------|-----------------------|-------------------------| | IMDB | 100, 500, 2k | 10, 20, 200, 1k | 6/18 (33%) | 0/9 (0%) | | SST-2 | 100, 500, 2k | 40, 200, 1k, 5k | 4/6 (67%) | 0/3 (0%) | | AMAZON REVIEW | 1k, 5k, 20k | 100, 500, 2k, 10k | 10/18 (56%) | 0/9 (0%) | | YAHOO! ANSWER | 1k, 5k, 20k | 20, 100, 500, 2k, 10k | 13/24 (54%) | 0/12 (0%) | Dataset #Unl. #Lab. 
VAT FIXMATCH DASH FLEXMATCH ADAMATCH TAPT S**UPERVISED** 100 4 33.50.2 33.40.1 33.40.1 35.74.2 34.10.7 61.86.7 59.44.8 100 10 61.620.1 45.421.6 34.72.2 49.019.9 52.421.0 75.56.9 71.88.5 100 20 87.12.2 64.616.5 67.816.6 85.52.9 79.17.6 85.51.0 84.11.9 500 4 33.40.0 33.40.1 33.40.1 33.40.1 33.60.3 63.47.2 58.27.1 2k 4 33.30.0 33.30.0 33.30.0 33.30.0 33.30.0 63.16.2 60.95.6 10k 4 33.30.0 33.50.3 33.30.0 34.01.2 33.60.4 64.18.9 62.47.9 23k 4 33.30.0 33.30.0 57.429.4 45.323.9 33.30.0 68.85.6 65.610.4 | IMDB SST-2 AMAZON REVIEW YAHOO! ANSWER | |------------------------------------------| 100 40 63.310.6 46.99.7 47.97.0 57.24.5 51.014.0 78.72.5 76.43.7 500 40 55.716.8 53.88.9 51.210.0 67.710.7 59.111.4 83.34.8 72.97.9 500 200 83.01.6 84.52.8 82.63.5 83.83.0 87.41.9 88.80.9 88.30.9 2k 40 55.924.2 36.43.0 35.32.0 56.66.7 49.313.8 79.35.9 71.78.2 10k 40 73.520.5 38.911.4 35.62.6 56.912.5 36.22.9 85.91.0 78.57.5 60k 40 79.613.4 32.61.7 33.40.6 40.67.7 42.613.3 82.64.0 75.37.2 1k 20 13.55.2 14.95.6 20.33.0 25.83.2 20.71.1 32.01.8 32.52.2 1k 100 46.12.2 36.33.1 35.36.2 43.41.7 40.32.2 48.50.9 48.22.2 1k 500 52.60.2 50.81.5 49.51.0 54.11.0 52.81.1 55.90.3 55.30.5 5k 20 15.57.8 13.53.3 22.25.2 23.27.3 16.96.9 32.83.4 32.32.5 20k 20 19.37.5 15.23.9 20.56.4 19.110.0 19.36.3 32.03.2 31.63.6 100k 20 14.17.3 11.92.9 20.75.2 15.32.6 12.53.7 30.73.6 30.83.9 250k 20 10.35.0 10.93.6 22.05.7 22.74.9 14.45.6 30.22.4 32.13.1 1k 10 1.90.1 2.00.1 4.62.9 15.72.6 18.87.9 29.65.8 23.54.5 1k 20 6.72.8 10.14.2 9.63.2 32.79.1 28.85.8 38.94.1 34.13.6 1k 100 55.21.7 46.94.4 45.33.7 54.21.4 53.91.3 59.70.8 57.41.6 1k 500 59.20.4 61.60.6 60.71.3 61.91.1 61.50.9 65.80.3 65.50.2 5k 10 1.80.0 3.22.6 3.72.7 16.410.8 17.811.7 31.45.1 25.73.9 20k 10 2.40.9 2.00.3 4.93.1 7.34.7 25.212.2 32.45.6 27.24.4 100k 10 2.30.6 3.82.5 3.42.9 2.91.1 17.711.4 30.83.8 28.05.0 500k 10 2.00.4 1.80.0 2.61.2 2.50.9 14.36.0 27.34.6 24.74.8 Table 10: Results of ADAMATCH +TAPT and FLEXMATCH +TAPT on YAHOO! ANSWER with two different labelled sizes. Table 11: We further verify our conclusion on FLEXMATCH +TAPT. We report the average Macro-F1 score on the test set across five seeds, with standard deviations as subscripts. Blue represents the best results for each row. | YAHOO! ANSWER 500 2000 | | | |--------------------------|---------|---------| | ADAMATCH | 68.00.7 | 69.50.3 | | + TAPT | 68.21.0 | 69.80.3 | | FLEXMATCH | 66.60.7 | 68.70.4 | | + TAPT | 66.71.2 | 69.00.5 | | SUPERVISED | 65.40.3 | 68.50.3 | | + TAPT | 68.80.7 | 71.50.3 | | FULLY-SUPERVISED. | 75.30.2 | | | + TAPT | 75.40.1 | | | Dataset | #Unl. | #Lab. | FLEXMATCH + TAPT | FLEXMATCH | TAPT | SUPERVISED | |---------------|---------|---------|--------------------|-------------|---------|--------------| | 1k | 10 | 17.04.9 | 15.72.6 | 29.65.8 | 23.54.5 | | | 1k | 20 | 39.42.0 | 32.79.1 | 38.94.1 | 34.13.6 | | | 1k | 100 | 55.21.8 | 54.21.4 | 59.70.8 | 57.41.6 | | | 1k | 500 | 62.00.7 | 61.91.1 | 65.80.3 | 65.50.2 | | | 20k | 10 | 4.01.4 | 7.34.7 | 32.45.6 | 27.24.4 | | | 100k | 10 | 5.16.1 | 2.91.1 | 30.83.8 | 28.05.0 | | | 500k | 10 | 2.51.1 | 2.50.9 | 27.34.6 | 24.74.8 | | | YAHOO! 
ANSWER | | | | | | | | Hyperparameter | Assignment | |-------------------------|---------------| | number of steps | 100 epochs | | batch size | 256 | | maximum learning rate | 1e-06, 1e-4 | | learning rate optimizer | AdamW | | Adam epsilon | 1e-6 | | Adam beta weights | 0.9, 0.98 | | learning rate scheduler | Warmup linear | | Weight decay | 0.01 | | Warmup proportion | 0.06 | | learning rate decay | linear | Table 12: Hyperparameters for task-adaptive pretraining. The learning rate and unlabelled size are tightly connected and need to be adjusted together. We generally recommend increasing the learning rate as you increase the unlabelled size. Different from its predecessor, BERT (Devlin et al., 2019), where the next sentence prediction objective is used, ROBERTA (Liu et al., 2019) is only trained with the MLM objective (i.e., cross-entropy loss on predicting randomly masked tokens), dynamically changing the masking pattern applied to the training examples and typically using the masking probability of 0.15. | Hyperparameter | Assignment | |-------------------------|-----------------| | number of steps | 10 or 50 epochs | | batch size | 16 or 32 | | maximum learning rate | 2e-05 | | learning rate optimizer | AdamW | | maximum sequence length | 256 | | learning rate scheduler | Warmup linear | | Warmup proportion | 0.06 | | learning rate decay | linear | | Hyperparameter | Assignment | |-------------------------|------------------------| | number of steps | 25 600 or 51 200 steps | | batch size | 16 | | maximum learning rate | 2e-05 | | learning rate optimizer | AdamW | | maximum sequence length | 256 | | learning rate scheduler | Warmup linear | | Warmup proportion | 0.05 | | learning rate decay | linear | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Right before the reference section. A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? 
Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✓ **Did You Run Computational Experiments?** Section 4, 5, 6 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix F The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix F ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4, 5, 6 and Appendix C, D, E ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix F D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
glass-etal-2023-retrieval
Retrieval-Based Transformer for Table Augmentation
https://aclanthology.org/2023.findings-acl.348
Data preparation, also called data wrangling, is considered one of the most expensive and time-consuming steps when performing analytics or building machine learning models. Preparing data typically involves collecting and merging data from complex heterogeneous, and often large-scale data sources, such as data lakes. In this paper, we introduce a novel approach toward automatic data wrangling in an attempt to alleviate the effort of end-users, e.g. data analysts, in structuring dynamic views from data lakes in the form of tabular data. Given a corpus of tables, we propose a retrieval augmented transformer model that is self-trained for the table augmentation tasks of row/column population and data imputation. Our self-learning strategy consists in randomly ablating tables from the corpus and training the retrieval-based model with the objective of reconstructing the partial tables given as input with the original values or headers. We adopt this strategy to first train the dense neural retrieval model encoding portions of tables to vectors, and then the end-to-end model trained to perform table augmentation tasks. We test on EntiTables, the standard benchmark for table augmentation, as well as introduce a new benchmark to advance further research: WebTables. Our model consistently and substantially outperforms both supervised statistical methods and the current state-of-the-art transformer-based models.
# Retrieval-Based Transformer For Table Augmentation Michael Glass1, Xueqing Wu2, Ankita Rajaram Naik1, Gaetano Rossiello1**, Alfio Gliozzo**1 1IBM Research AI, Yorktown Heights, NY, USA 2University of Illinois Urbana-Champaign ## Abstract Data preparation, also called data wrangling, is considered one of the most expensive and timeconsuming steps when performing analytics or building machine learning models. Preparing data typically involves collecting and merging data from complex heterogeneous, and often large-scale data sources, such as data lakes. In this paper, we introduce a novel approach toward automatic data wrangling in an attempt to alleviate the effort of end-users, e.g. data analysts, in structuring dynamic views from data lakes in the form of tabular data. We aim to address *table augmentation* tasks, including row/column population and data imputation. Given a corpus of tables, we propose a retrieval augmented self-trained transformer model. Our self-learning strategy consists in randomly ablating tables from the corpus and training the retrieval-based model to reconstruct the original values or headers given the partial tables as input. We adopt this strategy to first train the dense neural retrieval model encoding tableparts to vectors, and then the end-to-end model trained to perform table augmentation tasks. We test on EntiTables, the standard benchmark for table augmentation, as well as introduce a new benchmark to advance further research: WebTables. Our model consistently and substantially outperforms both supervised statistical methods and the current state-of-the-art transformer-based models. ## 1 Introduction The way organizations store and manage data is rapidly evolving from using strict transactional databases to data lakes that typically consist of large collections of heterogeneous data formats, such as tabular data, spreadsheets, and NoSQL databases. The absence of a unified schema in data lakes does not allow the usage of declarative query languages, e.g. SQL, making the process of data preparation1 dramatically expensive (Terriz1Also referred as data wrangling or data munging. ## Zano Et Al., 2015). Data preparation involves several phases, such as data discovery, structuring, cleansing, enrichment and validation, with the purpose of producing views commonly organized in a tabular format used to create reports (Koehler et al., 2021) or to gather feature sets to build machine learning models (He et al., 2021). The schemaless nature of data lakes makes data discovery and structuring even more challenging since the tasks of joinability and unionability among tables become non-deterministic (Fernandez et al., 2018; Zhu et al., 2019; Bogatu et al., 2020). In this work, we propose a novel end-to-end solution based on a retrieval augmented transformer architecture with the aim to support end-users, such as data analysts, in the process of constructing dynamic views from data lakes. To this end, we address three table augmentation tasks (Zhang and Balog, 2017, 2019): automatic row and column population and cell filling (or data imputation). Figure 1 illustrates the three core tasks in table augmentation. All tasks proceed from a query or seed table. In the case of self-supervised training, this seed table is formed by ablating rows, columns or cell values from an existing table in the data lake. The task of column header population, also simply called column population, is to extend the table with additional possible column names or headers. 
This is a way of suggesting additional data that could be joined into this table. In the task of cell filling there is a specific unknown cell, for which the model predicts a specific value. The task of row population is only populating the key column for a row. This is the column that contains the primary entity that the remainder of the row contains data for, sometimes referred to as a row header. Typically this is the first column in a table. Approaches to table augmentation can be purely parametric (Iida et al., 2021; Deng et al., 2022), in which case the data lake is used to train the param5635 ![1_image_0.png](1_image_0.png) eters of the model, but not used during inference. In this setting, the table augmentation model must draw the possible augmentations for rows, columns and cells from its trained parameters. Alternatively, with retrieval-based models (Lewis et al., 2020b; Glass et al., 2021b, 2022), the data lake can also be used at inference to provide evidence for proposed augmentations. This has two key advantages: 1) the model need not memorize the data lake - or even a significant fraction of it, and 2) the model can provide justification for its predicted augmentations in the form of a provenance table or tables. The key contributions of this paper are: (1) We introduce the first end-to-end, retrieval-based model for table augmentation. Our Retrieval Augmented Table Augmentation (RATA) model uses a biencoder retrieval model for neural indexing and searching tables from data lake, and a reader transformer to identify augmentations from retrieved tables. (2) Our model establishes a new state-ofthe-art across all three tasks in table augmentation, while also providing additional value with its provenance information. (3) We create and release a new dataset for table augmentation, expanding the scope of evaluation beyond Wikipedia. This dataset, based on Cafarella et al. (2008), is also larger and more diverse than the standard Wikipedia-based dataset (Zhang and Balog, 2017). ## 2 Related Work Table augmentation can be divided into three sub-tasks: row population, column population, and cell filling. For row and column population, Zhang and Balog (2017) identifies and ranks candidate values from both the table corpus and knowledge base. Table2Vec (Zhang et al., 2019a) trains header and entity embeddings from a table corpus in a skipgram manner and uses the embeddings for the task. Although TaBERT (Yin et al., 2020) was developed as a foundational model primarily for question answering, its embeddings have also been applied for row and column population. Recent work formulates the task as multi-label classification and fine-tunes large-scale pre-trained models such as TABBIE (Iida et al., 2021) and TURL (Deng et al., 2022). TABBIE consists of three transformers for converting cells, columns and rows to vector representations. A corrupt cell detection task is the pretraining task used to learn these embeddings on the table corpus. To fine-tune a trained TABBIE model for the column header population task, a concatenated [CLSCOL] embedding of the columns is passed through a single linear and softmax layer and trained with a multi-label classification objective. Similarly, for the row population task a multi-class classification is carried out on the first column's [CLSCOL] representation. For cell filling, InfoGather (Yakout et al., 2012) retrieves tables from the table corpus and selects values from retrieved tables. 
Zhang and Balog (2019) extends the system to retrieve from both the table corpus and knowledge base. Their system that uses only the table corpus as the source is called TMatch, which we compare to in Section 6. Ahmadov et al. (2015) combines predictions both from table retrieval and from a machine learning-based value imputation system. Deng et al. (2022) directly applies the pre-trained TURL model to the task, since cell filling is similar to its pre-training objective. Cell filling is also related to the task of value imputation, i.e., to provide an assumed value when the actual value is unknown, usually using machine learning methods (Bießmann et al., 2019). In addition to augmenting individual entities, column headers or cells, some other work aims to join tables over entire rows or columns with retrieved tables (Sarma et al., 2012; Bhagavatula et al., 2013; Lehmberg et al., 2015).

Retrieval-augmented models have been successfully applied to many tasks. For open-domain question answering (ODQA), DPR learns dense representations to retrieve evidence and trains a separate reader to select answers from the retrieved evidence (Karpukhin et al., 2020). RAG uses a generator to produce outputs conditioned on retrieved evidence and jointly trains DPR with the generator on the downstream task (Lewis et al., 2020b). RAG is shown to achieve good performance on knowledge-intensive NLP tasks such as ODQA, fact verification, and slot filling (Lewis et al., 2020b; Petroni et al., 2021). Re2G further introduces a reranker to boost performance (Glass et al., 2022). Retrieval-augmented models are also shown to be effective on zero-shot slot filling (Glass et al., 2021b) and multilingual keyphrase generation (Gao et al., 2022). Similar models have also been applied to table-related tasks such as open-domain table question answering (Herzig et al., 2021). In our work, we apply this architecture to table augmentation.

## 3 Approach

While the row, column, and cell predictions of purely parametric table augmentation methods may be useful on their own, they can be much more effective for a human-in-the-loop use case if they are supported by provenance. A user of a data preparation application may be unwilling to simply accept the prediction of a model, but when paired with evidence from the data lake, that prediction can be better assessed. Furthermore, the retrieval model itself may be useful for exploration and general search in a data lake. In this view, table augmentation can be seen as self-supervised pretraining for table retrieval.

Fortunately, there is now considerable work on retrieval augmented transformer models (Glass et al., 2022; Lewis et al., 2020b). These models augment the parametric knowledge of the transformer with non-parametric knowledge in the form of an indexed corpus. To do so, they use a neural retrieval model based on DPR (Dense Passage Retrieval) (Karpukhin et al., 2020) that is trained end-to-end to assist in generation. We build on this line of research to introduce a general model for all table augmentation tasks: row population, column header population and cell filling. Our model, Retrieval Augmented Table Augmentation (RATA), comprises an index of tables, a retrieval component, and a reader or selection component. The table index is built from the tables in the training set, which are first decomposed into table-parts, then transformed into sequences for use with standard retrieval approaches.
The retrieval component is a biencoder architecture similar to DPR (Karpukhin et al., 2020), but trained without ground truth on correct provenance. We call this *Dense Table Retrieval* or DTR. The reader component is an extractive approach. An extractive rather than generative approach ensures that the model's predictions are always grounded in actual data, rather than speculative guesses. The extractive approach is also a more natural fit for the row and column population tasks, where there is no required order to the answers. Finally, the extractive approach permits an initial training phase for the retrieval component where the *answer-bearing* tables are considered as a bag of positives.

Figure 1 illustrates the tasks of table augmentation by example. Formally, the input I is a table with r rows and c columns comprising a caption C, headers H, and a matrix of cell values, V. One of the columns, usually the first, is indicated as the key column, key.

$$
\begin{aligned}
I &= \langle \mathcal{C}, \mathbf{H}, \mathbf{V}, key \rangle, \quad 1 \leq key \leq c \\
\mathbf{H} &= [h_1, h_2, ..., h_c] \\
\mathbf{V} &= \begin{bmatrix} v_{1,1}, v_{1,2}, ..., v_{1,c} \\ ... \\ v_{r,1}, v_{r,2}, ..., v_{r,c} \end{bmatrix}
\end{aligned}
$$

The input table is ablated in a task-specific way to produce a query table and gold answers, $\langle Q, \mathbf{G} \rangle$, described as follows:

$$
\begin{aligned}
Q_{rp} &= \langle \mathcal{C}, \mathbf{H}, \mathbf{V}_{1..n_{seed}}, key \rangle \\
\mathbf{G}_{rp} &= \{ \mathbf{V}_{i,key} : i > n_{seed} \} \\
Q_{cp} &= \langle \mathcal{C}, \mathbf{H}_{1..n_{seed}}, \mathbf{V}_{\cdot,1..n_{seed}}, key \rangle \\
\mathbf{G}_{cp} &= \{ \mathbf{H}_{i} : i > n_{seed} \} \\
Q_{cf} &= \langle \mathcal{C}, \mathbf{H}, \mathbf{V} \setminus v_{i,j}, key \rangle \\
\mathbf{G}_{cf} &= \{ v_{i,j} \}
\end{aligned}
$$

where rp, cp and cf refer to the row population, column header population and cell filling tasks, respectively.

## 3.1 End-To-End Model

Figure 2a shows how tables in a data lake are first indexed to provide a non-parametric knowledge store. Each table is first split into chunks of up to three rows plus the header, which we refer to as *table-parts*. We form sequence representations of these table-parts following work in other transformer-based approaches to tables (Glass et al., 2021a). The table-part sequence representations ($S^t$) are formed from the row sequence representations ($S^r_i$) and the table caption:

$$
S^r_i = \bigoplus_{j=1}^{c} h_j \oplus \text{``:''} \oplus v_{i,j} \oplus \text{``*''}
$$
$$
S^t = \mathcal{C} \oplus [\text{SEP}] \oplus \bigoplus_{i=\text{start}}^{\text{end}} S^r_i \oplus \text{``|''}
$$

Here ⊕ indicates concatenation and the strings ':', '*', and '|' delimit the header, cell value contents, and each row respectively. Any distinctive tokens can work as delimiters since the transformer will learn an appropriate embedding representation. These sequences are then projected to vectors using the context encoder by taking the [CLS]. We index the dense representations for all table-parts in the data lake using FAISS (Johnson et al., 2017) with Hierarchical Navigable Small World (Malkov and Yashunin, 2018).
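To make the serialization above concrete, the following is a minimal sketch, written by us rather than taken from the released RATA code, of how a table-part could be linearized with the ':', '*', and '|' delimiters before being encoded; the `Table` dataclass, the function name, and the toy data are our own illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Table:
    caption: str
    headers: List[str]            # h_1 .. h_c
    values: List[List[str]]       # r x c matrix of cell values
    key: int = 0                  # index of the key column

def linearize_table_part(table: Table, start: int, end: int) -> str:
    """Serialize rows [start, end) plus the header into the S^t string:
    caption [SEP] h_j : v_ij * ... per cell, with '|' closing each row."""
    parts = [table.caption, " [SEP] "]
    for i in range(start, end):
        for h, v in zip(table.headers, table.values[i]):
            parts.append(f"{h} : {v} * ")
        parts.append("| ")
    return "".join(parts)

# Toy example: chunk a table into table-parts of up to three rows each.
toy = Table(
    caption="Largest cities by population",
    headers=["City", "Country", "Population"],
    values=[["Tokyo", "Japan", "37M"], ["Delhi", "India", "32M"],
            ["Shanghai", "China", "28M"], ["Dhaka", "Bangladesh", "23M"]],
)
table_parts = [linearize_table_part(toy, s, min(s + 3, len(toy.values)))
               for s in range(0, len(toy.values), 3)]
print(table_parts[0])
```

In practice each resulting string would be passed through the context encoder and its [CLS] vector indexed with FAISS, as described above.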
Figure 2b shows the architecture of our approach, Retrieval Augmented Table Augmentation (RATA). The input query is encoded to a vector for retrieving related table-parts from the indexed data lake. Similar to the table-part representation, we form a sequence representation for the query, use a query encoder to encode it, and take the [CLS] vector as the query representation. Both the context encoder and the query encoder use the BERT-BASE architecture. We use the unnormalized dot product to score a pair of query q and table-part d. The top-k table-parts with the highest scores are retrieved.

$$
score(q, d) = \text{BERT}_{qe}(q)[\text{CLS}] \cdot \text{BERT}_{ce}(d)[\text{CLS}]
$$

After the top-k most relevant table-parts are retrieved, the reader component selects the most likely augmentations for the query table. In the case of column population, the candidate augmentations are all headers from retrieved table-parts; for cell filling it is all cells; and for row population it is only those cell values that are entities. The sequence representation of the query table is paired with each table-part representation, using the standard [CLS] and [SEP] tokens to demarcate the bounds of each sequence. In the table-part representation, the candidates are marked by special begin and end tokens, written here as '(' and ')'. This combined sequence is then the input to a transformer encoder (initialized from BERT-LARGE (Devlin et al., 2019)). For each pair of candidate answer marks, the final token embeddings are concatenated to produce a single vector. Then a linear layer is applied to predict the likelihood that the candidate is a correct answer to the query.

Formally, the input is a sequence of tokens T = [t_0, t_1, ...]. The transformer encoder produces a sequence of embeddings BERT_reader(T) = E = [e_0, e_1, ...].

$$
\begin{aligned}
\alpha &= [i : t_i = \text{``(''}] \\
\omega &= [i : t_i = \text{``)''}] \\
ans_n &= t_{\alpha_n+1}, t_{\alpha_n+2}, ..., t_{\omega_n-1} \\
C &= \begin{bmatrix} E_{\alpha_0} \oplus E_{\omega_0} \\ E_{\alpha_1} \oplus E_{\omega_1} \\ E_{\alpha_2} \oplus E_{\omega_2} \\ ... \end{bmatrix} \\
\rho &= softmax(C \cdot \mathbf{w_{candidate}})
\end{aligned}
$$

The candidate representation vectors, C, are then multiplied by the learned parameter vector w_candidate and a softmax is applied to produce the reader scores, ρ, for the retrieved table-part. Note that the likelihood for a given answer occurrence ans_n is ρ_n. The candidate likelihood vectors for each of the top-k retrieved table-parts, ρ_1, ρ_2, ..., ρ_k, are then combined with the softmax-normalized retrieval scores, r = [r_1, r_2, ..., r_k], to provide a probability distribution over all candidates in all retrieved table-parts. Since these scores are for each occurrence of a candidate string, we aggregate over each distinct normalized candidate string by summing the likelihoods for all occurrences. This produces the final score, s(a), for each answer string a. The loss is the negative log-likelihood of all gold answer strings, G. Because of this formulation, during training any instance with no correct candidates in any retrieved table-part is skipped.

$$
\mathbf{p}^j = softmax(\mathbf{r})_j \cdot \rho^j
$$
$$
s(a) = \sum_{j=1}^{k} \ \sum_{n : ans^j_n = a} \mathbf{p}^j_n
$$
$$
loss = -\sum_{a \in \mathbf{G}} \log\left(s(a)\right)
$$

We use answer normalization to determine if a candidate matches a gold answer, as described in Appendix B. For row population and cell filling in EntiTables, the cell values are already linked to entities, so normalization is not necessary. For RATA training, we iterate through the tables in the training set. To construct an input query from a table, we ablate either all rows after the first n_seed (row population), or all columns after the first n_seed (column population), or a particular cell (cell filling). We ensure that table-parts from the query table are not retrieved by filtering the retrieved results.
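As an illustration of the ablation just described, here is a small sketch, under our own naming and with a table represented as a plain dict, of how the self-supervised queries and gold answer sets could be constructed for the three tasks; it follows the Q/G definitions above but is not the authors' implementation.

```python
import copy

def make_row_population_query(table, n_seed=2):
    """table: dict with 'caption', 'headers', 'values' (r x c), 'key' (column index).
    Keep the first n_seed rows; gold answers are the key-column entities of the rest."""
    query = copy.deepcopy(table)
    gold = {row[table["key"]] for row in table["values"][n_seed:]}
    query["values"] = query["values"][:n_seed]
    return query, gold

def make_column_population_query(table, n_seed=2):
    """Keep the first n_seed columns; gold answers are the ablated headers."""
    query = copy.deepcopy(table)
    gold = set(table["headers"][n_seed:])
    query["headers"] = query["headers"][:n_seed]
    query["values"] = [row[:n_seed] for row in query["values"]]
    return query, gold

def make_cell_filling_query(table, i, j):
    """Blank out cell (i, j); the gold answer is its original value."""
    query = copy.deepcopy(table)
    gold = {table["values"][i][j]}
    query["values"][i][j] = ""
    return query, gold
```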
Like most previous approaches to end-to-end training of neural retrieval, we train only the query encoder in the end-to-end training phase. This avoids expensive re-indexing of the entire data lake either each time the context encoder is updated, or periodically as in ANCE (Xiong et al., 2020).

## 3.2 Retrieval Training

While it is possible in theory to train neural retrieval entirely through its impact on the end-to-end table augmentation tasks, a good initialization is important for learning. Without an initial effective retrieval model, there is no answer-bearing evidence to train the reader model, and therefore a high fraction of training examples will be skipped (Lee et al., 2019). One possible approach is to use a pretraining task for retrieval, such as the Inverse Cloze Task (Lee et al., 2019) or a retrieval-based masked language model (Guu et al., 2020). In the table augmentation task, there is the option of training with answer-bearing evidence as positives. Since the reader is purely extractive, any evidence that does not contain a correct augmentation string is necessarily a negative. However, not every table-part that contains an answer is a positive. We use a multiple instance learning setup for the positives: we train under the assumption that at least one of the table-parts containing a correct answer is a positive.

To gather the training data for retrieval we build an initial keyword index using Anserini². We use BM25 (Robertson and Zaragoza, 2009) to retrieve potentially relevant table-parts for each table query. From each training table we construct a query for row population, column population or cell filling. Since these queries are constructed from ablated tables, we know a (potentially incomplete) set of correct augmentations or answers. Note that there may be other equally correct augmentations. But since this is a self-supervised task, we consider only the headers or cell values that actually occurred in the table to be correct.

²https://github.com/castorini/anserini

Formally, the query constructed from a training table is a pair of the ablated table, Q, and the set of gold answers, G. The set of table-parts retrieved by the initial retrieval method, for example BM25, is given as R. A retrieved table-part is in the positive set, R+, if it contains any gold answer; otherwise it is a hard negative, R−.

$$
\begin{aligned}
\mathbf{R}^{+} &= \{ d : d \in \mathbf{R} \wedge \exists a \in \mathbf{G}, a \in d \} \\
\mathbf{R}^{-} &= \mathbf{R} - \mathbf{R}^{+}
\end{aligned}
$$

Following Karpukhin et al. (2020), we use batch negatives along with the retrieved "hard negatives". The batch $B = [\langle q_1, \mathbf{R}_1 \rangle, \langle q_2, \mathbf{R}_2 \rangle, ..., \langle q_{bz}, \mathbf{R}_{bz} \rangle]$ is processed to produce vectors for all queries and retrieved table-parts. All query vectors are multiplied with all table-part vectors to produce scores between all pairs. A softmax is applied per-query to give the normalized scores. Finally, the loss is the negative log-likelihood for the positive scores.

$$
\begin{aligned}
\mathcal{R} &= \bigcup_{i=1}^{bz} \mathbf{R}_i \\
\rho_i &= softmax([score(q_i, d) : d \in \mathcal{R}]) \\
loss &= -\sum_{i=1}^{bz} \log\left( \sum_{d \in \mathbf{R}^{+}_i} \rho_{i,d} \right)
\end{aligned}
$$

Note that since we are summing over the probability of all table-parts in the positive set, R+, it is not necessary for all answer-bearing retrieved table-parts to be high scoring. Instead, it follows the multiple instance learning framework. All instances marked negative are negative, while at least one instance in the positive set is positive.
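A condensed PyTorch-style sketch of this training objective, as we read the equations above (answer-bearing table-parts treated as a bag of positives, scored against all table-parts in the batch); tensor shapes and function names are our assumptions, not the released code.

```python
import torch
import torch.nn.functional as F

def split_positives(retrieved_parts, gold_answers):
    """R+ = retrieved table-part strings containing any gold answer; R- = the rest."""
    positives = [d for d in retrieved_parts if any(a in d for a in gold_answers)]
    negatives = [d for d in retrieved_parts if d not in positives]
    return positives, negatives

def retrieval_loss(query_vecs, part_vecs, positive_mask):
    """query_vecs: (bz, dim) [CLS] vectors of the queries in the batch.
    part_vecs: (n_parts, dim) [CLS] vectors of all retrieved table-parts in the batch.
    positive_mask: (bz, n_parts) bool, True where a part is in R+_i for query i.
    Loss: negative log of the total softmax mass on each query's positive bag."""
    scores = query_vecs @ part_vecs.T                  # dot-product scores for all pairs
    log_probs = F.log_softmax(scores, dim=-1)          # per-query softmax over the batch
    bag_log_prob = torch.logsumexp(
        log_probs.masked_fill(~positive_mask, float("-inf")), dim=-1
    )                                                  # log sum_{d in R+_i} rho_{i,d}
    # A query with an empty positive bag would give -inf here; such examples
    # are assumed to be filtered out before computing the loss.
    return -bag_log_prob.sum()
```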
## 4 WebTables Dataset

Prior work on table augmentation has focused on tables derived from Wikipedia (Zhang and Balog, 2017; Iida et al., 2021; Deng et al., 2022; Zhang and Balog, 2019; Zhang et al., 2019b). In order to better assess the proposed methods and provide the research community with a new benchmark, we introduce a new dataset for table augmentation: WebTables. We construct this dataset using the tables crawled and extracted by Cafarella et al. (2008). We start from the English relational tables of the WDC Web Table Corpus 2015. We further filter the dataset to remove the most common types of noisy tables: calendars formatted as tables, lists of forum posts and torrent links, tables with fewer than four rows or columns, and tables that format large blocks of text. Because previous work on table augmentation focused so heavily on Wikipedia tables, we exclude from this dataset any tables crawled from any "wikipedia" domain. We also deduplicate the corpus, ensuring that there are no two tables with the same content in their cells (a sketch of this filtering and deduplication is shown after Table 1 below). Following filtering and deduplication we sample 10 thousand tables each for the development and test sets and one million tables for training. However, in our experiments we use only 300 thousand training examples to limit the computational cost. To parallel the setting of EntiTables we use the "key column" identified by Cafarella et al. (2008) as the target column for row population, and we consider entities to be those strings that occur at least three times in the key column for any table in the train set.

## 5 Experiments

We experiment on two datasets of tables across three tasks. Table 1 gives statistics on these datasets. EntiTables (Zhang and Balog, 2017) contains 1.6M tables collected from Wikipedia, where entity mentions are normalized to their names in DBpedia. For row and column population, we use the development and test sets released by Zhang and Balog (2017), each containing 1,000 randomly sampled queries. For cell filling, we use the test set released by Zhang and Balog (2019). The test set contains 1,000 queries uniformly sampled from four main column data types: entity, quantity, string, and datetime. Though Zhang and Balog (2019) use human annotations as gold labels, we notice that the human annotations are of low quality, so we use the original values in the table cells as gold labels. WebTables is based on Cafarella et al. (2008), 154M relational tables extracted from HTML tables in Common Crawl. We process the corpus as described in Section 4. For column population we use the original development and test sets of 10,000 tables each. For row population, we necessarily exclude any tables without any entities in the key column after the first n_seed rows. For cell filling, we use heuristic rules to classify cell values into three types: quantity, string and datetime. Then, we sample 3,000 queries uniformly from the three types as the test set and sample another 3,000 queries as the development set.

| Dataset | Task | Train | Dev | Test |
|------------|--------------|---------|-------|--------|
| EntiTables | row pop. | 187k | 1k | 1k |
| EntiTables | column pop. | 602k | 937 | 950 |
| EntiTables | cell filling | 100k | - | 972 |
| WebTables | row pop. | 563k | 6.6k | 6.8k |
| WebTables | column pop. | 1M | 10k | 10k |
| WebTables | cell filling | 1M | 3k | 3k |

Table 1: Dataset statistics.
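To make the WebTables construction in Section 4 concrete, here is a rough sketch of a filtering and deduplication pass; the specific thresholds, the dict field names, and the hashing scheme are illustrative assumptions on our part, since the paper does not give the exact rules (and calendar/forum/torrent detection would need additional, site-specific heuristics).

```python
import hashlib

def looks_noisy(headers, rows):
    """Rough stand-ins for the filters in Section 4: drop tiny tables and
    tables that mostly hold large blocks of text (thresholds are ours)."""
    if len(rows) < 4 or len(headers) < 4:
        return True
    n_cells = sum(len(r) for r in rows)
    long_cells = sum(len(v) > 200 for r in rows for v in r)
    return n_cells == 0 or long_cells / n_cells > 0.3

def cell_content_hash(rows):
    """Hash of cell contents only, used to drop exact-content duplicates."""
    flat = "\x1f".join(v for r in rows for v in r)
    return hashlib.sha1(flat.encode("utf-8")).hexdigest()

def filter_corpus(tables):
    """tables: iterable of dicts with 'url', 'headers', 'rows'."""
    seen, kept = set(), []
    for t in tables:
        if "wikipedia" in t.get("url", "") or looks_noisy(t["headers"], t["rows"]):
            continue
        h = cell_content_hash(t["rows"])
        if h not in seen:
            seen.add(h)
            kept.append(t)
    return kept
```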
We compare our method with two deep learning-based baselines, TABBIE (Iida et al., 2021) and BART (Lewis et al., 2020a). Both TABBIE and BART have no retrieval component involved. TABBIE, described in Section 2, uses three transformers: one for cell values, one for rows, and one for columns. It produces vector embeddings for each cell and each row and column of a table. We follow Iida et al. (2021) for row and column population and base our experiments on the partially released code and pretrained model³. To apply TABBIE to cell filling, we formulate it as classification on the concatenation of the row and column embedding vectors, similar to row and column population. The classification vocabulary is collected from the training corpus: all cell values that occur at least ten times. We also report the published results for TABBIE on the EntiTables dataset, although we were unable to reproduce these results for row population. BART is a sequence-to-sequence model that takes the linearized table as the source text and generates the row entities, column headers, or cell value as the target text. We use beam search in decoding (beam size = 35) to produce a ranked list of predictions. We use the FAIRSEQ toolkit (Ott et al., 2019) for these experiments. For RAG we use the implementation in Hugging Face transformers (Wolf et al., 2019). For both BART and RAG, the sequence representation of the query tables is the same as in RATA. On the EntiTables dataset, we also compare against probabilistic methods that first retrieve tables from the table corpus and then select values for table augmentation. We compare against the published results of Zhang and Balog (2017) for row and column population, and against TMatch (Zhang and Balog, 2019) for cell filling.

For evaluation, we report Mean Reciprocal Rank (MRR) and Normalized Discounted Cumulative Gain over the top ten outputs (NDCG@10) for the final prediction performance of row population, column population, and cell filling. To evaluate the performance of DTR retrieval, we also report answer-bearing MRR, where a retrieved table-part is considered correct if it contains one of the correct answers. To determine the significance of these results we use a 95% confidence interval on the t-distribution. We also applied a sampling permutation test, but this did not change any conclusions regarding significance.
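For reference, a small implementation of the two ranking metrics used in this evaluation, under the standard binary-relevance definitions; this is our own sketch and may differ in detail from the paper's evaluation scripts.

```python
import math
from typing import List, Set

def mrr(ranked: List[str], gold: Set[str]) -> float:
    """Reciprocal rank of the first correct prediction (0 if none is correct)."""
    for rank, pred in enumerate(ranked, start=1):
        if pred in gold:
            return 1.0 / rank
    return 0.0

def ndcg_at_k(ranked: List[str], gold: Set[str], k: int = 10) -> float:
    """Binary-relevance NDCG@k, normalized by the ideal DCG."""
    dcg = sum(1.0 / math.log2(r + 1)
              for r, pred in enumerate(ranked[:k], start=1) if pred in gold)
    ideal = sum(1.0 / math.log2(r + 1) for r in range(1, min(len(gold), k) + 1))
    return dcg / ideal if ideal > 0 else 0.0

# Per-query scores would then be averaged over all test queries.
```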
## 6 Results

Table 2 contains our results for the row population task. Our model, RATA, is able to greatly outperform all other methods on both datasets. Using the non-parametric knowledge of the table corpus is very advantageous for the large and specific vocabulary of entities in key columns.

| | EntiTables MRR | EntiTables NDCG | WebTables MRR | WebTables NDCG |
|---------|-------|-------|-------|-------|
| TaBERT* | 56.0 | 46.4 | - | - |
| TABBIE* | 57.2 | 47.1 | - | - |
| TABBIE† | 25.18 | 15.2 | 12.44 | 11.93 |
| BART | 45.30 | 32.76 | 29.25 | 19.30 |
| RAG | 56.95 | 43.48 | 33.20 | 22.23 |
| RATA | **77.15** ±2.32 | **60.34** ±2.18 | **45.13** ±1.10 | **26.70** ±0.73 |

Table 2: Test results for row population, n_seed = 2. * As reported in Iida et al. (2021). † Our results.

| | EntiTables MRR | EntiTables NDCG | WebTables MRR | WebTables NDCG |
|---------|-------|-------|-------|-------|
| TaBERT* | 60.1 | 54.7 | - | - |
| TABBIE* | 62.8 | 55.8 | - | - |
| TABBIE† | 63.9 | 55.8 | 84.1 | 78.96 |
| BART | 73.36 | 65.37 | 87.40 | 85.05 |
| RAG | 78.64 | 72.81 | 89.39 | 87.58 |
| RATA | **88.12** ±1.91 | **81.01** ±1.97 | **94.07** ±0.44 | **89.94** ±0.49 |

Table 3: Test results for column population, n_seed = 2. * As reported in Iida et al. (2021). † Our results.

Table 3 contains our results for the column population task. RATA is again substantially better than the other methods, although not by as wide a margin as in the row population task. The BART baseline is the best performing of the alternatives, with an MRR lower by 6% to 15%.

Results on the cell filling task are in Table 4. Our method outperforms all baselines on both datasets. TABBIE performs the worst due to the large classification vocabulary and the out-of-vocabulary issue. On the EntiTables dataset, retrieval-based methods, including TMatch and RATA, significantly outperform non-retrieval methods, including TABBIE and BART. Figure 3 shows an example output from RATA. On WebTables, however, BART outperforms RATA. We notice that BART can achieve high scores by either copying values from other rows (as in Figure 5 and Figure 6a), or producing values similar to those in other rows (as in Figure 6b and Figure 6c). As shown in the examples, this strategy is able to achieve good performance.

| | EntiTables MRR | EntiTables NDCG | WebTables MRR | WebTables NDCG |
|--------|-------|-------|-------|-------|
| TABBIE | 10.62 | 11.56 | 24.79 | 26.17 |
| BART | 21.25 | 22.48 | 37.06 | 39.19 |
| TMatch | 30.54 | 32.23 | - | - |
| RAG | 18.65 | 19.71 | 34.80 | 36.34 |
| RATA | 34.32 ±2.80 | 36.25 ±2.82 | 33.58 ±1.60 | 35.33 ±1.61 |

Table 4: Test results for cell filling.

Figure 3: RATA example output on the EntiTables dataset. The output answer is correct, and the retrieved table provides sufficient evidence for the answer.

**Effect of Retrieval** To analyze the effectiveness of the DTR component, we report answer-bearing MRR in Table 5. We notice that DTR is well trained after the initial retrieval training phase and achieves higher answer-bearing MRR compared to BM25. End-to-end training provides meaningful supervision for retrieval and further improves MRR on most tasks.

| | Row Population EntiTables | Row Population WebTables | Column Population EntiTables | Column Population WebTables | Cell Filling EntiTables | Cell Filling WebTables |
|-----------------|------------|------------|------------|------------|------------|------------|
| BM25 | 54.44±2.72 | 41.16±1.06 | 62.93±2.73 | 84.17±0.65 | 28.98±2.59 | 38.48±1.62 |
| DTR (initial) | 74.34±2.39 | 47.88±1.10 | 90.07±1.79 | 94.91±0.41 | 34.78±2.72 | 40.80±1.64 |
| DTR (post-RATA) | 80.98±2.17 | 49.62±1.11 | 90.97±1.72 | 94.94±0.41 | 37.48±2.81 | 40.26±1.66 |

Table 5: Answer-bearing MRR.

By comparing Tables 2, 3, and 4 with Table 5, we notice that the final task MRR is close to the answer-bearing MRR. When the correct answer is present in the retrieved table, the reader can select the correct answer with high accuracy. This indicates that the bottleneck of our system is retrieval.

**Number of Retrieved Table-Parts** RATA was trained with 5 retrieved table-parts for all tasks. This relatively small number for the retrieval size provides good efficiency during training, since train time scales roughly linearly with the number of query / table-part pairs that must be processed by the reader transformer component. But during inference, we are able to adjust the number of retrieved table-parts more freely. Figure 4 shows that table augmentation performance monotonically increases as more evidence is retrieved for row population and cell filling, but column population performance does not improve past 5.

## 7 Conclusion

Our retrieval-based transformer architecture for table augmentation, RATA, is able to greatly advance the state-of-the-art in three table augmentation tasks: row population, column population, and cell filling.
The non-parametric knowledge in the table corpus is able to substantially enhance the table augmentation capabilities. Furthermore, by training an effective table-to-table retrieval model we are able to provide provenance for the system's proposed augmentations. We also introduce a new benchmark dataset for table augmentation, WebTables, and evaluate our model and two recent transformer baselines. Our code for RATA and the newly introduced dataset are available as open source⁴.

## Limitations

A limitation of RATA is that it always assumes the answer is included in the retrieval corpus, which is not always true. When the corpus does not contain the correct answer, the desired behavior is to inform the user that the answer cannot be obtained, but RATA will provide a poorly supported answer. This also encourages RATA to learn spurious correlations when the retrieved tables coincidentally contain the same value, but do not really support the answer. This problem is especially serious when the answer is very generic (for example, numbers like "0") and coincidental matches are common. This is related to the answerable question issue (Rajpurkar et al., 2018) or evidentiality issue (Lee et al., 2021; Asai et al., 2022) for question answering.

Figure 5: BART and RATA example outputs on WebTables.

For cell-filling on WebTables, BART outperforms RATA often by either copying values from other rows of the query table or producing values similar to those in other rows. However, as shown in Figure 5, RATA's retrieval is often not helpful. Usually, the information required to fill the query table is not repeated in the corpus, so the retrieved table cannot support the query. As a result, RATA is simply retrieving some similar table, and selecting similar values in the tables.

## References

Ahmad Ahmadov, Maik Thiele, Julian Eberius, Wolfgang Lehner, and Robert Wrembel. 2015. Towards a hybrid imputation approach using web tables. In 2nd IEEE/ACM International Symposium on Big Data Computing, BDC 2015, Limassol, Cyprus, December 7-10, 2015, pages 21–30. IEEE Computer Society.

Akari Asai, Matt Gardner, and Hannaneh Hajishirzi. 2022. Evidentiality-guided generation for knowledge-intensive NLP tasks. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 2226–2243, Seattle, United States. Association for Computational Linguistics.

Chandra Sekhar Bhagavatula, Thanapon Noraset, and Doug Downey. 2013. Methods for exploring and mining tables on wikipedia. In Proceedings of the ACM SIGKDD Workshop on Interactive Data Exploration and Analytics, IDEA@KDD 2013, Chicago, Illinois, USA, August 11, 2013, pages 18–26. ACM.

Felix Bießmann, Tammo Rukat, Philipp Schmidt, Prathik Naidu, Sebastian Schelter, Andrey Taptunov, Dustin Lange, and David Salinas. 2019. Datawig: Missing value imputation for tables. *J. Mach. Learn. Res.*, 20:175:1–175:6.

Alex Bogatu, Alvaro A. A. Fernandes, Norman W. Paton, and Nikolaos Konstantinou. 2020. Dataset discovery in data lakes. In 36th IEEE International Conference on Data Engineering, ICDE 2020, Dallas, TX, USA, April 20-24, 2020, pages 709–720. IEEE.

Michael J. Cafarella, Alon Y. Halevy, Daisy Zhe Wang, Eugene Wu, and Yang Zhang. 2008. Webtables: exploring the power of tables on the web. *Proc.
VLDB* Endow., 1(1):538–549. Xiang Deng, Huan Sun, Alyssa Lees, You Wu, and Cong Yu. 2022. Turl: Table understanding through representation learning. *ACM SIGMOD Record*, 51(1):33– 40. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Raul Castro Fernandez, Ziawasch Abedjan, Famien Koko, Gina Yuan, Samuel Madden, and Michael Stonebraker. 2018. Aurum: A data discovery system. In *34th IEEE International Conference on Data* Engineering, ICDE 2018, Paris, France, April 16-19, 2018, pages 1001–1012. IEEE Computer Society. Yifan Gao, Qingyu Yin, Zheng Li, Rui Meng, Tong Zhao, Bing Yin, Irwin King, and Michael Lyu. 2022. Retrieval-augmented multilingual keyphrase generation with retriever-generator iterative training. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1233–1246, Seattle, United States. Association for Computational Linguistics. Michael Glass, Mustafa Canim, Alfio Gliozzo, Saneem Chemmengath, Vishwajeet Kumar, Rishav Chakravarti, Avi Sil, Feifei Pan, Samarth Bharadwaj, and Nicolas Rodolfo Fauceglia. 2021a. Capturing row and column semantics in transformer based question answering over tables. In *Proceedings of* the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1212–1224, Online. Association for Computational Linguistics. Michael Glass, Gaetano Rossiello, Md Faisal Mahbub Chowdhury, and Alfio Gliozzo. 2021b. Robust retrieval augmented generation for zero-shot slot filling. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 1939–1949, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Michael Glass, Gaetano Rossiello, Md Faisal Mahbub Chowdhury, Ankita Naik, Pengshan Cai, and Alfio Gliozzo. 2022. Re2G: Retrieve, rerank, generate. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2701–2715, Seattle, United States. Association for Computational Linguistics. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In *International Conference on Machine Learning*, pages 3929–3938. PMLR. Xin He, Kaiyong Zhao, and Xiaowen Chu. 2021. Automl: A survey of the state-of-the-art. *Knowl. Based* Syst., 212:106622. Jonathan Herzig, Thomas Müller, Syrine Krichene, and Julian Eisenschlos. 2021. Open domain question answering over tables via dense retrieval. In *Proceedings of the 2021 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACLHLT 2021, Online, June 6-11, 2021, pages 512–519. Association for Computational Linguistics. Hiroshi Iida, Dung Thai, Varun Manjunatha, and Mohit Iyyer. 2021. TABBIE: Pretrained representations of tabular data. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3446–3456, Online. Association for Computational Linguistics. 
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2017. Billion-scale similarity search with gpus. *arXiv* preprint arXiv:1702.08734. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics. Martin Koehler, Edward Abel, Alex Bogatu, Cristina Civili, Lacramioara Mazilu, Nikolaos Konstantinou, Alvaro A. A. Fernandes, John A. Keane, Leonid Libkin, and Norman W. Paton. 2021. Incorporating data context to cost-effectively automate end-to-end data wrangling. *IEEE Trans. Big Data*, 7(1):169– 186. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy. Association for Computational Linguistics. Kyungjae Lee, Seung-won Hwang, Sang-eun Han, and Dohyeon Lee. 2021. Robustifying multi-hop QA through pseudo-evidentiality training. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6110–6119, Online. Association for Computational Linguistics. Oliver Lehmberg, Dominique Ritze, Petar Ristoski, Robert Meusel, Heiko Paulheim, and Christian Bizer. 2015. The mannheim search join engine. J. Web Semant., 35:159–166. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,* ACL 2020, Online, July 5-10, 2020, pages 7871–7880. Association for Computational Linguistics. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020b. Retrieval-augmented generation for knowledgeintensive nlp tasks. In *Advances in Neural Information Processing Systems*, volume 33, pages 9459– 9474. Curran Associates, Inc. Yu A Malkov and Dmitry A Yashunin. 2018. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE transactions on pattern analysis and machine intelligence, 42(4):824–836. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In *Proceedings of NAACL-HLT* 2019: Demonstrations. Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick S. H. Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2021. KILT: a benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 2523–2544. Association for Computational Linguistics. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for squad. 
In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics,* ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 784–789. Association for Computational Linguistics. Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: Bm25 and beyond. *Found. Trends Inf. Retr.*, 3(4):333–389. Anish Das Sarma, Lujun Fang, Nitin Gupta, Alon Y. Halevy, Hongrae Lee, Fei Wu, Reynold Xin, and Cong Yu. 2012. Finding related tables. In Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2012, Scottsdale, AZ, USA, May 20-24, 2012, pages 817–828. ACM. Ignacio G Terrizzano, Peter M Schwarz, Mary Roth, and John E Colino. 2015. Data wrangling: The challenging yourney from the wild to the lake. In *CIDR*. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-ofthe-art natural language processing. *arXiv preprint* arXiv:1910.03771. Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. arXiv preprint arXiv:2007.00808. Mohamed Yakout, Kris Ganjam, Kaushik Chakrabarti, and Surajit Chaudhuri. 2012. Infogather: entity augmentation and attribute discovery by holistic matching with web tables. In Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2012, Scottsdale, AZ, USA, May 20-24, 2012, pages 97–108. ACM. Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel. 2020. TaBERT: Pretraining for joint understanding of textual and tabular data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8413–8426, Online. Association for Computational Linguistics. Li Zhang, Shuo Zhang, and Krisztian Balog. 2019a. Table2vec: Neural word and entity embeddings for table population and retrieval. In *Proceedings of* the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019, Paris, France, July 21-25, 2019, pages 1029–1032. ACM. Li Zhang, Shuo Zhang, and Krisztian Balog. 2019b. Table2vec: Neural word and entity embeddings for table population and retrieval. In *Proceedings of* the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1029–1032. Shuo Zhang and Krisztian Balog. 2017. Entitables: Smart assistance for entity-focused tables. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 255–264. Shuo Zhang and Krisztian Balog. 2019. Autocompletion for data cells in relational tables. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 761–770. Erkang Zhu, Dong Deng, Fatemeh Nargesian, and Renée J. Miller. 2019. JOSIE: overlap set similarity search for finding joinable tables in data lakes. In Proceedings of the 2019 International Conference on Management of Data, SIGMOD Conference 2019, Amsterdam, The Netherlands, June 30 - July 5, 2019, pages 847–864. ACM. ## Appendix A Model Hyperparameters Our model is fine-tuned from two BERTBASE models for the retriever and one BERTLARGE model for the reader. This totals 2 · 110M + 340M = 560M parameters. Table 6 shows the hyperparameters used in our experiments. 
| Hyperparameter | DTR | Reader |
|-------------------|--------|------------|
| learn rate | 5e-5 | 3e-5 |
| batch size | 128 | 32 |
| epochs | 3 | 2 |
| warmup instances | 0 | 10% |
| learning schedule | linear | triangular |
| max grad norm | 1 | 1 |
| weight decay | 0 | 0 |
| Adam epsilon | 1e-8 | 1e-8 |

Table 6: RATA hyperparameters.

The only hyperparameter that varied across tasks and datasets was the batch size.

## B Dataset and Task Specifics

We use two types of answer normalization. For EntiTables column population we implement case-insensitive matching by normalizing both predictions and gold answers to lowercase. For all row and column population in WebTables we use a normalization that removes unicode accents and non-ASCII characters, then lowercases. Cell filling does not use normalization.

| Dataset | Task | Batch Size |
|------------|-------------------|--------------|
| EntiTables | All | 32 |
| WebTables | Row Population | 32 |
| WebTables | Column Population | 32 |
| WebTables | Cell Filling | 64 |

Table 7: Batch size per task and dataset.

For reproduction of results from TABBIE on EntiTables we carry out the following steps.

**Column Header Population** Based on the above-mentioned normalization we create a vocabulary of 182,909 column headers for the EntiTables dataset, which is approximately equal to the 127,656 possible header labels mentioned in the paper (Iida et al., 2021). Each of the possible headers occurs at least twice in the training dataset.

**Row Population** Beyond the above-mentioned normalization, we use entities which have occurred at least 7 times in the training dataset, which leads to 308,841 possible entities. This is approximately equal to the 300,000 entities mentioned in Iida et al. (2021).

**Cell Filling** Beyond the above-mentioned normalization, we use cell values which have occurred at least 10 times in the training dataset.

## C Cell Filling BART Examples

Additional BART cell filling output examples on the WebTables dataset are in Figure 6.

Figure 6: Additional BART output examples on WebTables dataset.

## D Compute Infrastructure

All row and column population experiments were done on a single P100 GPU. This gave train times of 24 to 48 hours. All cell filling experiments were done on a single A100 GPU, with train times of 24 hours.

## ACL 2023 Responsible NLP Checklist

## A For Every Submission:

✓ A1. Did you describe the limitations of your work?
Limitations section

✗ A2. Did you discuss any potential risks of your work?
Table augmentation is a data wrangling task useful for many activities involving data. It is not within our expertise to weigh the risks of better data management.
✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1: Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Sections 4 And 5 ✓ B1. Did you cite the creators of artifacts you used? Sections 4 and 5 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 7 "open source" ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 4 and 5 ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The data was transformed from data collected by Michael J. Cafarella, Alon Y. Halevy, Daisy Zhe Wang, Eugene Wu, and Yang Zhang. 2008. Webtables: exploring the power of tables on the web. Proc. VLDB Endow., 1(1):538–549. We did not do any additional protection or anonymizing, since the data is available in its full form anyway. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4 and 5 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 5 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ## C ✓ **Did You Run Computational Experiments?** Section 5 And 6 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix A ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix A ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. 
Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
aggarwal-etal-2023-ecg
ECG-QALM: Entity-Controlled Synthetic Text Generation using Contextual Q&A for NER
https://aclanthology.org/2023.findings-acl.349
State-of-the-art Named Entity Recognition (NER) methods require high-quality labeled datasets. Issues such as scarcity of labeled data, under-representation of entities, and privacy concerns with using sensitive data for training can be significant barriers. Generating synthetic data to train models is a promising solution to mitigate these problems. We propose ECG-QALM, a contextual question and answering approach using pre-trained language models to synthetically generate entity-controlled text. Generated text is then used to augment small labeled datasets for downstream NER tasks. We evaluate our method on two publicly available datasets. We find ECG-QALM is capable of producing full text samples with desired entities appearing in a controllable way, while retaining sentence coherence closest to the real world data. Evaluations on NER tasks show significant improvements (75% - 140%) in low-labeled data regimes.
# ECG-QALM: Entity-Controlled Synthetic Text Generation Using Contextual Q&A for NER

Karan Aggarwal, Amazon, Seattle, WA, [email protected]
Henry Jin∗, Harvard University, Cambridge, MA, [email protected]
Aitzaz Ahmad, Amazon, Seattle, WA, [email protected]

∗This work was done during Henry's internship at Amazon.

## Abstract

State-of-the-art Named Entity Recognition (NER) methods require high-quality labeled datasets. Issues such as scarcity of labeled data, under-representation of entities, and privacy concerns with using sensitive data for training can be significant barriers. Generating synthetic data to train models is a promising solution to mitigate these problems. We propose ECG-QALM, a contextual question and answering approach using pre-trained language models to synthetically generate entity-controlled text. Generated text is then used to augment small labeled datasets for downstream NER tasks. We evaluate our method on two publicly available datasets. We find ECG-QALM is capable of producing full text samples with desired entities appearing in a controllable way, while retaining sentence coherence closest to the real world data. Evaluations on NER tasks show significant improvements (75% - 140%) in low-labeled data regimes.

## 1 Introduction

NLP tasks typically require large amounts of high-quality labeled data to train sufficiently accurate and useful models. However, in many domains, such as finance and healthcare, access to labeled data is often limited. In these domains, annotating data often requires strong domain expertise and therefore, crowdsourcing of labeled data is infeasible. The cost of annotating data by training an expert workforce is often too high for feasibility. A small collection of labeled data also runs the risk of bias creeping into the data and may result in algorithms and models that reflect or even exploit this inherent bias. It also degrades the capability of models to generalize, as small datasets are much more likely to have population groups or patterns under-represented (Zhou and Bansal, 2020). These issues need solutions that can perform well in low-labeled data regimes while combating data bias.

Synthetic data generation presents a promising solution to address the issues outlined above (Bayer et al., 2021). By synthetically generating data, we can augment small labeled datasets to build a training set. Synthetic data generation can also reduce bias in the data by sufficiently representing all population groups. In particular, the field of controlled synthetic text generation has received increased attention in recent years. Controlled text generation provides the ability to control for traits such as tone, sentiment, and topic in the generation of a language model (Wang and Wan, 2018; Zeng et al., 2021). This makes controlled synthetic text generation a useful technique for augmenting small or privacy-sensitive datasets. However, there has been limited work on the topic of entity-controlled synthetic text generation, *i.e.*, the task of generating coherent text while controlling for the named entities that appear in the generation (Dong et al., 2021).

In this paper, we study the problem of entity-controlled synthetic text generation. We propose ECG-QALM, an Entity-Controlled text Generation approach built on a contextual Question-Answering-based pre-trained Language Model, which can produce coherent text that contains specific entity tokens, generated in an order provided by the user.
We are motivated by the need to synthetically augment datasets to improve performance on downstream NER tasks (Zhou et al., 2022). ECG-QALM provides multiple advantages: a) it is more sample efficient than other methods, as the model is trained on each block of each sample, unlike Seq2Seq models like Dong et al. (2021), which only see a sample as a whole; b) ECG-QALM sees a block of text that is relatively smaller than the whole sample, prompted on the entity to be inserted and conditioned on the previous generation, allowing for generation of more coherent text, as demonstrated by generation metrics like perplexity versus SOTA Seq2Seq baselines; and c) unlike prior Seq2Seq methods like the RNN of Dong et al. (2021) or a vanilla GPT, where the length of the generated text is limited to 512/1024 tokens, ECG-QALM can generate as many blocks of (maximum) length 1024 as there are entities to be inserted.

We make the following contributions: 1) we propose a novel approach using pre-trained language models to generate entity-controlled blocks of text, which can be chained to produce full synthetic text samples; 2) our method is capable of generating texts semantically closest to the training data while being distinct; and 3) evaluations on publicly available datasets on the NER task show a significant improvement in data augmentation performance for low-labeled data regimes, even when using purely synthetic data.

## 2 Related Work

**Controlled text generation** These methods control a certain aspect of generated text (Yang and Klein, 2021; Chan et al., 2020; Pascual et al., 2021), such as sentiment (Wang and Wan, 2018) or concepts (Zeng et al., 2021). These methods focus on a macro-level aspect of the generated text, while we want to control fine-grained text generation.

**Data-to-text generation** The idea is to convert a given set of words or structured data from tables into a piece of text. The most popular problem is table summary generation, also called table-to-text (Liu et al., 2018; Parikh et al., 2020; Chen et al., 2021), along with keyword-to-text methods (Pascual et al., 2021; Tan et al., 2021). While similar, the key difference is that they have a fixed set of entities in every generation.

**Entity-controlled generation** Works in the intent detection and slot filling literature for conversational systems have attempted entity-controlled generation (Jolly et al., 2020). Recently, Rosenbaum et al. (2022) attempted to use a pre-trained language model with an instruction prompt that uses examples as input for the model to generate synthetic text. Note that these models have been built in the context of conversational systems and hence have the goal of responding to a specific query while generating the output text, unlike our task of generating text with specified input entities. Dong et al. (2021) proposed a solution to this exact problem of generating text with given entity types and their mentions, using an RNN-based Seq2Seq architecture. Our method uses a pre-trained language model with a block-by-block generation mechanism, producing superior text over theirs. They also do not evaluate on a downstream task like NER, unlike our work.

**Data Augmentation for Named Entity Recognition** These methods rely on substituting entities in a given example with entities of the same type to create new examples. Dai and Adel (2020) proposed a simple random replacement, which was further enhanced using language modeling to exploit context (Zhou et al., 2022; Ding et al., 2020). While these methods need seed text to generate each example, our method only needs entity tags to generate an example.
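For readers unfamiliar with the substitution baselines discussed above, the following is a minimal sketch, assuming BIO-tagged sentences as (token, tag) pairs, of mention-replacement augmentation in the spirit of Dai and Adel (2020); the function names and the replacement probability are our own choices, not taken from any of the cited implementations.

```python
import random
from collections import defaultdict

def build_mention_pool(sentences):
    """Collect entity mentions per type from BIO-tagged sentences,
    where each sentence is a list of (token, tag) pairs."""
    pool = defaultdict(list)
    for sent in sentences:
        current, etype = [], None
        for token, tag in sent + [("", "O")]:       # sentinel flushes the last mention
            if current and not tag.startswith("I-"):
                pool[etype].append(current)
                current, etype = [], None
            if tag.startswith("B-"):
                current, etype = [token], tag[2:]
            elif tag.startswith("I-") and current:
                current.append(token)
    return pool

def substitute_entities(sentence, pool, p=0.5):
    """Replace each mention, with probability p, by a random mention of the same type."""
    out, i = [], 0
    while i < len(sentence):
        token, tag = sentence[i]
        etype = tag[2:] if tag.startswith("B-") else None
        if etype and pool.get(etype) and random.random() < p:
            j = i + 1
            while j < len(sentence) and sentence[j][1] == f"I-{etype}":
                j += 1                               # skip the original mention
            new = random.choice(pool[etype])
            out += [(new[0], f"B-{etype}")] + [(t, f"I-{etype}") for t in new[1:]]
            i = j
        else:
            out.append((token, tag))
            i += 1
    return out
```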
## 3 Methodology

We use a contextual question and answering based training approach to generate blocks of text with desired entity tags. This approach is able to reliably generate augmented text samples while retaining sentence coherence. Our method generates blocks of text delimited by the entities to be inserted, and chains these generated blocks to create full text samples. We use a GPT-2 language model in place of the recurrent network used by Dong et al. (2021) to take advantage of pre-trained large language models. The intuition is that using a pre-trained model helps increase the diversity of the generated text.

Table 1: Dataset Statistics.

| Metric | Dataset | Gold Data | RS | EntInj | DACA | MELM | ECG-LM | ECG-QALM |
|---------------|--------------|--------|---------|--------|--------|--------|--------|--------|
| Perplexity (↓) | JNLPBA (10%) | 400.36 | 605.75 | 796.5 | 556.93 | 519.24 | 518.09 | 488.56 |
| Perplexity (↓) | BC5CDR (10%) | 388.42 | 5856.61 | 1521.3 | 884.53 | 692.06 | 510.98 | 477.66 |

Table 2: Generation Quality Metrics for the two datasets: Perplexity, Distinctness-3 (tri-gram), and Rouge-L. Cells with the best score are highlighted in blue. (↑): higher the better; (↓): lower the better.

## 3.1 Training

We first preprocess real-world training text samples into blocks, whereby each block is composed of non-entity tokens and ends with an entity tag, as shown in Figure 1. Every text sample is then decomposed into these blocks of text. An end-of-text token is added at the end. Therefore, a full text sample generation consists of chaining generated blocks until a block with an <ENDTEXT> token appears. A side benefit of creating blocks is an increased number of (shorter, more manageable) training examples that are easier to learn from, unlike existing methods that input the entire text at once.

After decomposing text samples into such blocks, we arrange the blocks into the question and answering format, which consists of three segments: context, question and answer. The context segment provides preceding text blocks, the question segment prompts the model for the desired token, and the answer block is the desired generation.

The context segment consists of all blocks in the text sample preceding the current block. This was motivated by the need for the model to be aware of the context for successive generation. The generation of each block must be a continuation of preceding blocks to maintain sentence-level coherence. The question segment prompts for the desired entity to appear in the next block. Therefore, through this prompting mechanism we control the desired entity tag to be generated. Following the "Question: " tag is a single token representing the desired entity. The answer segment contains the desired text block to be generated. The final token in this block will therefore be the same token as in the question segment. With this three-segment format, every block from the corpus represents a training sample for the language model.
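To illustrate the training format described in Section 3.1, here is a small sketch of how a BIO-tagged sentence could be decomposed into blocks and rendered as context/question/answer training strings; the exact tag tokens (e.g. `<protein>`), the prompt wording, and the toy sentence are our assumptions based on Figure 1 rather than the authors' released preprocessing.

```python
def split_into_blocks(tokens, tags):
    """Replace each entity mention with its type tag and cut the text so that
    every block ends with one such tag (our reading of Figure 1)."""
    blocks, current, i = [], [], 0
    while i < len(tokens):
        if tags[i].startswith("B-"):
            etype = tags[i][2:]
            i += 1
            while i < len(tokens) and tags[i] == f"I-{etype}":
                i += 1                                # the whole mention becomes one tag
            current.append(f"<{etype}>")
            blocks.append((" ".join(current), f"<{etype}>"))
            current = []
        else:
            current.append(tokens[i])
            i += 1
    blocks.append((" ".join(current + ["<ENDTEXT>"]), "<ENDTEXT>"))
    return blocks

def to_qa_training_samples(tokens, tags):
    """One training string per block: preceding blocks as context, the desired
    entity tag as the question, and the block itself as the answer."""
    samples, context = [], ""
    for text, tag in split_into_blocks(tokens, tags):
        samples.append(f"Context: {context}\nQuestion: {tag}\nAnswer: {text}")
        context += text + " "
    return samples

# Toy JNLPBA-style sentence.
tokens = ["IL-2", "binds", "to", "the", "IL-2", "receptor", "."]
tags = ["B-protein", "O", "O", "O", "B-protein", "I-protein", "O"]
for sample in to_qa_training_samples(tokens, tags):
    print(sample, "\n---")
```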
## 3.2 Generation During Inference

At inference time, ECG-QALM generates text conditioned on the two segments of context and question. To generate the first block, the context segment is blank, while the question segment contains the desired token to be generated in the first block. The model then completes the answer segment with a generated block, which is inserted into the context segment for the next block generation. A full text sample is then produced by concatenating blocks until an <ENDTEXT> token. If the desired entity tag does not appear in the generated block, we re-generate the block text until the tag appears.

## 3.3 Metrics

To evaluate the generated text, we quantitatively measure the quality of generation and performance on the NER task. We use three generation quality metrics used in prior literature (Dong et al., 2021).¹ **Perplexity** measures the 'surprisingness' of the generated text evaluated on a GPT model (Radford et al., 2018). **Distinctness** (Li et al., 2015) measures the uniqueness of tri-grams in the corpus. **Rouge-L** (Lin, 2004): One trivial sanity check is regurgitation, *i.e.,* whether the generation model is simply memorizing the training set. The Rouge-L score measures the similarity of the generated text with the training data by calculating the longest common sub-strings. The Rouge-L score should be low if the model is not just spitting out the training examples. A lower Rouge-L score indicates that the generated data is not trivially similar to the training data, hence ensuring privacy-compliant models by not regurgitating the private training data.

¹Grammaticality (Warstadt et al., 2019), used in prior works, is not a general metric as it is based on English literature.

## 4 Experiments

We evaluate our model on two datasets described in Table 1. We compare with the following baselines:

Gold Data: Refers to the real-world training data.

| Training Data | #Samples | Gold Data | Gen: RS | Gen: EntInj | Gen: DACA | Gen: MELM | Gen: ECG-LM | Gen: ECG-QALM | Aug: RS | Aug: EntInj | Aug: DACA | Aug: MELM | Aug: ECG-LM | Aug: ECG-QALM | ∆ |
|---------------|----------|-----------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|---------|
| JNLPBA (1%) | 186 | 0.311 | 0.172 | 0.087 | 0.184 | 0.229 | 0.273 | 0.347 | 0.392 | 0.313 | 0.425 | 0.459 | 0.482 | 0.546 | 75.5%* |
| JNLPBA (10%) | 1860 | 0.722 | 0.375 | 0.219 | 0.525 | 0.552 | 0.596 | 0.641 | 0.709 | 0.484 | 0.722 | 0.723 | 0.722 | 0.723 | 0.1% |
| BC5CDR (1%) | 45 | 0.192 | 0.209 | 0.193 | 0.220 | 0.239 | 0.262 | 0.283 | 0.330 | 0.264 | 0.377 | 0.378 | 0.403 | 0.463 | 142.1%* |
| BC5CDR (10%) | 456 | 0.749 | 0.587 | 0.424 | 0.689 | 0.711 | 0.734 | 0.741 | 0.711 | 0.738 | 0.744 | 0.758 | 0.757 | 0.760 | 1.4%* |

Table 3: Macro F1 scores on NER. "Gen" columns use only the generated data; "Aug" columns use gold data augmented with the generated data. The method with the highest F1 score among the generation methods is **boldfaced** while the method with the highest F1 score overall is indicated by blue. ∆ is the percentage difference in F1 scores of gold data and ECG-QALM (w/ augmentation). (*) indicates a statistically significant increase with Student's t-test (p<0.01).

RS (Dai and Adel, **2020):** Simple data generation by substitution of named entities with entities of the same type from the training gold samples.

DACA (Ding et al., **2020):** Substitution method using an LSTM-based language model to replace entities in the gold samples, by exploiting context.

MELM (Zhou et al., **2022):** Substitution method using a Masked Entity Language Model with XLM-RoBERTa with linearization, for a richer context.
EntInj (Dong et al., **2021):** Text generation method based on LSTM Seq2Seq model. Closest work to ours, as it performs actual text generation. ECG-LM: This is our own baseline Seq2Seq method, which generates the entire text given a list of entities, without a block-by-block generation. Note: Generated text length in DACA, MELM, EntInj, and ECG-LM is limited by number of tokens model can generate (512/1024) at once; ECGQALM is not, as it chains the generated blocks. ## 4.1 Experimental Settings We use the training, validation, and testing data splits provided publicly in the datasets on Huggingface2. We use the training dataset (and its mentioned subsets) for training both the text generation models as well as training the downstream NER model. We use BERT (Devlin et al., 2018) for downstream NER task. NER results are reported on the complete test set for both the datasets. We use an instance of OpenAI's GPT-2 (Radford et al., 2019) for ECG-QALM. Our model is trained with the Adam optimizer on a learning rate of 1e3, one hundred warm-up steps, and an epsilon of 1e-8. The default CrossEntropy loss function is used, and the model is trained for up to 100 epochs. For the NER task, we train the BERT model for upto 10 epochs with a learning rate of 2e-3. These parameters were set based on hyper-parameter tuning on the validation set. *During generation, we* 2https://huggingface.co/ exactly mimic the entity distribution of the gold data. We can also change the entity distribution to boost under-represented entities as shown in Appendix A.1. ## 5 Results And Discussion 5.1 Generation Quality Generation quality results are shown in Table 2. We clearly observe that our method is lower on all three metrics against the original dataset, which is expected as ours is synthetically generated data. Our method works better than the only other text generation baseline EntInj (Dong et al., 2021) on all three metrics across the two datasets. Particularly, for the BC5CDR dataset, we note EntInj tends to generate repetitive text. The correct benchmark are the substitution based baselines as our method inserts the entities in the same fashion. We observe for the substitution based baselines, distinctness is highest, as expected as we have swapped commonly occurring trigram entities, while the perplexity is worse than ECG-QALM. This shows that swapping affects the lexical meaning of the text, even when done intelligently in DACA/MELM. While we also insert randomly chosen entities in our generated text, these results indicate that our method generates coherent generic text where semantic meaning of the type of the entity is preserved. Our generated data has the lowest Rouge-L scores. Hence, our generated data is not simply memorizing the training data, it is quite different than the gold data. We can see the huge gap with the substitution methods; while the data from substitution methods is practically same as the gold data, ours is distinct. Based on these metrics, we can claim that generated text is semantically closest to the original corpus, while being distinct. ## 5.2 Named Entity Recognition Task We took two subsets of the JNLPBA and BC5CDR datasets: 1% and 10% as we found the performance on datasets was already saturated at their full sizes as number of samples was enough. Hence, we present the results on first 1% and 10% examples of training splits to show the comparisons. We present two settings: (a) w/o augmentation with gold data; and (b) augmentation with gold data. 
Generated text for all methods is same size as gold data. Note, no changes were made to test/val sets. Table 3 shows the results for the two subsets of the two datasets. From the results five things stand out: 1) Augmenting gold data with our synthetically generated data always out-performs a model trained with the gold data; 2) using **only** synthetically generated data is comparable in performance to the gold data in medium labeled setting (10%) ; 3) our synthetically generated data outperforms gold data in low labeled data setting (1%) subsets; 4) our synthetically generated data gives better performance vs all baseline methods; and 5) our novel block-by-block generation approach significantly improves over a vanilla GPT-2 (ECG-LM) model. Our finding that synthetically generated data can get us a comparable performance to gold data has an application in making the models trained for downstream tasks like NER, privacy preserving, as they do not have to be trained on the real data. This finding can be attributed to zero/few-shot capabilities of large language models (Wei et al., 2021). Hence, the capability to produce texts that can generalize better on unseen test set while other models are only able to capture subset of test set distribution reflected in the training gold dataset. Our results show our method of generation can be quite effective as a data augmentation method in a low labeled data regime. ## 5.3 Generating More Text In Low Resource Previously, we only showed the results by generating synthetic data of same size as the gold data. We perform an experiment to see if there is further ![4_image_0.png](4_image_0.png) improvement in the performance as we add more generated data with the JNLPBA (1%) dataset. We observe that F1 score keeps improving going up to 0.70 vs gold data at 0.31 in Figure 2. Note, we only use the entity mentions found in the JNLPBA (1%) dataset to fill in the entity tags in the generated text. This is remarkable considering that 10x real data for JNLPBA (10%) has a F1 score of 0.72. This is a further evidence that our model is able to generate text that is similar to real data. ## 6 Conclusion Synthetic data generation is a promising approach to train large language models in order to deal with scarcity of labeled data. In this work, we study the problem of conditional text generation where the conditions are provided as a list of entities that must appear in the text in a manner desired by the user. We propose ECG-QALM that can generate blocks of text conditioned on the desired entities. We test our generation system on generation quality metrics and NER task. Evaluations show that our method outperforms baselines in terms of both generation quality and NER performance. Our blockby-block generation provides significant gains over using a fine-tuned vanilla LLM for generation. ## 7 Limitations The major limitations of this work are: - We show results on two public datasets, from bio-medical and bio-chemical domains. These results may not generalize to other domains. - Our results indicate benefit in low resource settings, while no appreciable benefit is seen for medium or high resource settings. - Our method relies on GPT-2, a large language model that needs humongous compute resources and a long training time. It takes about 2 hours to generate 50 samples, versus the baselines like vanilla GPT-2 (ECG-LM) taking 30 mins or EntInj taking about 10 mins to generate same number of examples with much less memory requirements. 
- We use quantitative measures to evaluate the quality of text generation, which might not be enough to capture the quality of generated text. Gold standard of measuring the quality is human evaluation, which is expensive. ## References Markus Bayer, Marc-André Kaufhold, and Christian Reuter. 2021. A survey on data augmentation for text classification. *ACM Computing Surveys*. Alvin Chan, Yew-Soon Ong, Bill Pung, Aston Zhang, and Jie Fu. 2020. Cocon: A self-supervised approach for controlled text generation. arXiv preprint arXiv:2006.03535. Wenqing Chen, Jidong Tian, Yitian Li, Hao He, and Yaohui Jin. 2021. De-confounded variational encoderdecoder for logical table-to-text generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5532– 5542. Xiang Dai and Heike Adel. 2020. An analysis of simple data augmentation for named entity recognition. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3861–3867. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Bosheng Ding, Linlin Liu, Lidong Bing, Canasai Kruengkrai, Thien Hai Nguyen, Shafiq Joty, Luo Si, and Chunyan Miao. 2020. DAGA: Data augmentation with a generation approach for low-resource tagging tasks. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP), pages 6045–6057, Online. Association for Computational Linguistics. Xiangyu Dong, Wenhao Yu, Chenguang Zhu, and Meng Jiang. 2021. Injecting entity types into entity-guided text generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 734–741, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Shailza Jolly, Tobias Falke, Caglar Tirkaz, and Daniil Sorokin. 2020. Data-efficient paraphrase generation to bootstrap intent classification and slot labeling for new features in task-oriented dialog systems. In Proceedings of the 28th International Conference on Computational Linguistics: Industry Track, pages 10–20. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objective function for neural conversation models. *arXiv* preprint arXiv:1510.03055. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In *Text summarization* branches out, pages 74–81. Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-text generation by structure-aware seq2seq learning. In Thirty-Second AAAI Conference on Artificial Intelligence. Ankur P Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan Das. 2020. Totto: A controlled table-to-text generation dataset. *arXiv preprint arXiv:2004.14373*. Damian Pascual, Beni Egressy, Clara Meister, Ryan Cotterell, and Roger Wattenhofer. 2021. A plug-andplay method for controlled text generation. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 3973–3997, Punta Cana, Dominican Republic. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. 
Language models are unsupervised multitask learners. Andy Rosenbaum, Saleh Soltan, Wael Hamza, Yannick Versley, and Markus Boese. 2022. Linguist: Language model instruction tuning to generate annotated utterances for intent classification and slot tagging. arXiv preprint arXiv:2209.09900. Bowen Tan, Zichao Yang, Maruan Al-Shedivat, Eric Xing, and Zhiting Hu. 2021. Progressive generation of long text with pretrained language models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4313–4324, Online. Association for Computational Linguistics. Ke Wang and Xiaojun Wan. 2018. Sentigan: Generating sentimental texts via mixture adversarial networks. In *IJCAI*, pages 4446–4452. Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. In *International Conference on Learning Representations*. Kevin Yang and Dan Klein. 2021. Fudge: Controlled text generation with future discriminators. *arXiv* preprint arXiv:2104.05218. Qingkai Zeng, Jinfeng Lin, Wenhao Yu, Jane ClelandHuang, and Meng Jiang. 2021. Enhancing taxonomy completion with concept generation via fusing relational representations. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 2104–2113. Ran Zhou, Xin Li, Ruidan He, Lidong Bing, Erik Cambria, Luo Si, and Chunyan Miao. 2022. Melm: Data augmentation with masked entity language modeling for low-resource ner. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2251– 2262. Xiang Zhou and Mohit Bansal. 2020. Towards robustifying nli models against lexical dataset biases. arXiv preprint arXiv:2005.04732. ## A Appendix A.1 Ablation: Generating Under-Represented Entities We perform a simple experiment to see how ECGQALM can potentially also be beneficial to generate data that could be augmented to boost the performance of under-represented entities in the original training data. To refresh, we kept the entity distribution exactly same as training data while generating data through our method. To boost the relative frequency of the under-represented entities, we generate examples proportional to the inverse frequency of the entities present. Let the training data have n samples. Each sample has a set of named entities in it, *e.g.*, a sample containing the set of entities, {<B-Protein>, <B-DNA>, <B-DNA>}, has two distinct entities in it. We calculate the frequency of each named entities over the entire training corpus. Next, we calculate the score of each sample by adding the inverse frequency of each named entity in that sample. For example, if the <B-Protein> has a inverse frequency of 10 and <B-DNA> has a inverse frequency of 100, this sample would get a score of 210. Next, we normalize these scores by the sum of scores of every sample in the corpus. This gives us a probability score for using the entity set of a sample to be picked while generating. Entity set of a sample with a probability score of 1% would be picked 10 times while generating 1000 synthetic examples, for instance. Hence, this ensures that under-represented entities are boosted in the new generated data. 
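As a concrete reference, a minimal sketch of this inverse-frequency weighting is given below; the toy tag multisets, the variable names, and the draw size are illustrative and are not taken from the released implementation.

```python
from collections import Counter
import random

# Each training sample is represented by the multiset of entity tags it contains.
samples = [
    ["B-Protein", "B-DNA", "B-DNA"],
    ["B-Protein"],
    ["B-RNA", "B-Protein"],
]

# 1) Tag frequencies over the whole training corpus.
tag_freq = Counter(tag for sample in samples for tag in sample)

# 2) Score each sample as the sum of inverse frequencies of its tags:
#    rare tags make a sample's entity set more likely to be reused.
scores = [sum(1.0 / tag_freq[tag] for tag in sample) for sample in samples]

# 3) Normalize the scores into a probability distribution over samples.
total = sum(scores)
probs = [s / total for s in scores]

# 4) Draw entity sets for, e.g., 1000 synthetic examples to be generated.
random.seed(0)
picked_entity_sets = random.choices(samples, weights=probs, k=1000)
```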
This could be used for augmenting the original data to improve performance on under-represented entities. Note, we can also generate random entity sets just with under-represented examples. However, we prefer not to do it as it could alter the co-occurrence of entities in the generated text, shifting the training set distribution so significantly that it no longer represents the original training set. We take JNLPBA (10%) dataset for this measure, as it has a large number of entities. Results after generating a synthetic data of same size as the original training set are shown in Table 4. While there is a 1% increase in the macro average, we observe the performance over different entities are mixed. While there is generally an increase in the performance for the under-represented entities, there is a drop for selected entities like <B-RNA>, despite almost doubling of number of samples for the entity. For the abundant entities like <B-Protein> performance is similar. In future, it would be worthwhile to experiment with different distributions of co-occurrence of entities instead of deriving it from the gold (training) data. ## A.2 Examples Of Generated Text In the section below we shows few examples of generated text by ECG-QALM and EntInj (Dong et al., 2021), the only text generation method in baselines. Our method generates semantically meaningful examples, while EntInj generates quite repetitive examples. Text highlighted in Red marks the entities. ## A.2.1 Ecg-Qalm The examples below seem grammatically correct, as was the observation over the entire generated corpus. However, as we randomly insert entity mentions after we generated the entity tags, most of the generated examples are not factual. E.g., DTG is not associated with treatment of blood clotting as generated in the first example. Our goal was not factual correctness but ensuring that the generated data preserves the distribution of the training data, which seems to be the case based on generation metrics and results on NER task. The efficacy of DTG in the treatment of impaired blood clotting likewise did not appear to be affected by the rate of administration, although no formal statistical comparisons were made . The prevalence rate for death was the most important reason for preference, cited by 67 . 3 % of patients preferring Picloxydine and 54 . 2 % of patients who preferred a p < or = 0 . 001 ) . The reduction of acetaminophen at 1 and 4 days after gestation not glomeruli with ataxic movements than control rats . Table 4: Macro F1 scores on the NER task for gold data, our generated data, our generated data with Underrepresented generation (+URG) for JNLPBA (10%) dataset. Cells with highest (almost highest) F1 score for an entity (row) are highlighted in blue. Second highest value is underlined. 
| W/o Augmentation | W/ Augmentation | | | | | | | |-------------------------------------------------------------|--------------------------------------------|-----------------------------------|-------------------------|------|------|------|------| | Training Data (→) | % original frequency % generated frequency | Gold Data ECG-QALM ECG-QALM(+URG) | ECG-QALM ECG-QALM(+URG) | | | | | | Entity(↓) JNLPBA (10%) [#training samples=1860] B-DNA 8.53% | 8.45% | 0.70 | 0.65 | 0.65 | 0.71 | 0.71 | | | B-RNA | 0.85% | 1.92% | 0.69 | 0.64 | 0.68 | 0.70 | 0.68 | | B-cell-line | 3.45% | 5.57% | 0.51 | 0.46 | 0.45 | 0.51 | 0.53 | | B-cell-type | 6.23% | 6.69% | 0.69 | 0.59 | 0.60 | 0.70 | 0.70 | | B-protein | 27.98% | 20.38% | 0.76 | 0.76 | 0.64 | 0.75 | 0.76 | | I-DNA | 14.15% | 14.04% | 0.78 | 0.68 | 0.69 | 0.79 | 0.79 | | I-RNA | 1.35% | 2.43% | 0.82 | 0.70 | 0.81 | 0.81 | 0.83 | | I-cell-line | 6.66% | 8.92% | 0.55 | 0.46 | 0.45 | 0.53 | 0.54 | | I-cell-type | 8.18% | 8.20% | 0.69 | 0.63 | 0.63 | 0.71 | 0.70 | | I-protein | 22.58% | 20.15% | 0.77 | 0.69 | 0.69 | 0.77 | 0.77 | | Macro Avg. | 0.72 | 0.64 | 0.66 | 0.72 | 0.73 | | | The aims of this study were to confirm our previous findings in a separate cohort of patients and to determine the time course of the cardiovascular consequences of stopping sertraline in the expectation that this might shed light on the mechanisms by which the mechanisms by Tamoxifen is being a significant reduction of the activity on the drug causes the sodium associated with cephalothin sodium associated with povidone - iodine is associated with cocaine and inhibition with the use of tuberculosis and area in this effect. MR imaging with quantitative diffusion mapping of E4031 ( 0 . g ), p - choloroaniline ) and outcome in organ transplant controls, and / L and the development of blood coagulation by a potential is also more than the development of systolic dysfunction and possibly . ## A.2.2 Entinj (Dong Et Al., **2021)** We observed a lot of repetition in the generated text by EntInj method. This looping behavior is shown in Example 2 and 3 below. Note, unlike our method, EntInj has access to the exact same entity mentions as they appear in the training data, having an inherent advantage with this additional information. | a | result | of | monoamine | oxidase | or | in | |-----|----------|------|-------------|-----------|------|---| | hibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition of monoamine oxidase or inhibition | | | | | | | <unk> : The cardiovascular responses to standing and standing . 4 patients were studied in the drug . 4 days . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 
4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . 4 . telithromycin - induced bromo tetrahydropyranyladriamycin pituitary carsinom agitation one : a longitudinal study of In the study was undertaken to the combination of painful , headache , bleed , which was only induced by epilepticus drug , and bronchitis Investigation of anti - inflammatory agents are warranted in the caudate nucleus . injection of Allopurinol injection of bacterial collagenase - induced ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Sec 7 ✗ A2. Did you discuss any potential risks of your work? Our method does not pose a risk that we can think of. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Sec 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Sec 5 ✓ B1. Did you cite the creators of artifacts you used? Sec 4.1 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Sec 4 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Sec 4 ## C ✓ **Did You Run Computational Experiments?** Sec 4.1 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? We used opensource GPT-2 model The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Sec 4.1 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Sec 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 
Sec 4.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
limisiewicz-etal-2023-tokenization
Tokenization Impacts Multilingual Language Modeling: Assessing Vocabulary Allocation and Overlap Across Languages
https://aclanthology.org/2023.findings-acl.350
Multilingual language models have recently gained attention as a promising solution for representing multiple languages in a single model. In this paper, we propose new criteria to evaluate the quality of lexical representation and vocabulary overlap observed in sub-word tokenizers.Our findings show that the overlap of vocabulary across languages can be actually detrimental to certain downstream tasks (POS, dependency tree labeling). In contrast, NER and sentence-level tasks (cross-lingual retrieval, NLI) benefit from sharing vocabulary. We also observe that the coverage of the language-specific tokens in the multilingual vocabulary significantly impacts the word-level tasks. Our study offers a deeper understanding of the role of tokenizers in multilingual language models and guidelines for future model developers to choose the most suitable tokenizer for their specific application before undertaking costly model pre-training.
# Tokenization Impacts Multilingual Language Modeling: Assessing Vocabulary Allocation And Overlap Across Languages Tomasz Limisiewicz and **Jirí Balhar** ˇ and **David Marecek** ˇ Institute of Formal and Applied Linguistics, Faculty of Mathematics and Physics Charles University, Prague, Czech Republic {limisiewicz, marecek}@ufal.mff.cuni.cz ## Abstract Multilingual language models have recently gained attention as a promising solution for representing multiple languages in a single model. In this paper, we propose new criteria to evaluate the quality of lexical representation and vocabulary overlap observed in sub-word tokenizers. Our findings show that the overlap of vocabulary across languages can be actually detrimental to certain downstream tasks (POS, dependency tree labeling). In contrast, NER and sentence-level tasks (cross-lingual retrieval, NLI) benefit from sharing vocabulary. We also observe that the coverage of the language-specific tokens in the multilingual vocabulary significantly impacts the wordlevel tasks. Our study offers a deeper understanding of the role of tokenizers in multilingual language models and guidelines for future model developers to choose the most suitable tokenizer for their specific application before undertaking costly model pre-training.1 ## 1 Introduction Multilingual language models perform surprisingly well in a variety of NLP tasks for diverse languages (Devlin et al., 2019; Conneau and Lample, 2019; Conneau et al., 2019). It has been observed that the representation of the input sequence has a significant effect on their effectiveness (Mielke et al., 2021). In the widely used Transformer (Vaswani et al., 2017) models achieving state-of-the-art results through diverse tasks, a large fraction of parameters are allocated in the input encoding layer.2 The popular language-independent approach to represent the input texts is to learn a vocabulary of frequently appearing strings that may consist of words or parts of words (Sennrich et al., 2016; Song et al., 2021; Kudo and Richardson, 2018). 1The code is available at: github.com/tomlimi/ entangled_in_scripts. 2For instance, in XLM-RobertaBase, 192M out of 270M parameters are in the input embedding layer (approximately 70%). ![0_image_0.png](0_image_0.png) In this work, we focus on the characteristics of subword tokenization methods in a multilingual setting. Our main contribution is the introduction of the methods for measuring whether tokenizers effectively represent meaningful language-specific tokens in the vocabulary (*vocabulary allocation*) and whether the units they learn are shared across languages (*vocabulary overlap*). We posit the following questions: (Q1) How do sub-word tokenizers differ in overlap and *allocation* **of learned vocabularies?** To answer this question, we apply the metrics to tokenizers obtained with two widely used algorithms: SentencePiece Unigram LM (Kudo and Richardson, 2018), and BPE (Sennrich et al., 2016). Furthermore, we propose two methods of learning tokenizers on monolingual corpora and then combining them to allow the tokenization of multilingual texts. ## (Q2) Which Properties Of Multilingual Tokenizers Affect The Lm'S Representation Quality? We address this question by training small language models utilizing different tokenization methods. We evaluate the models on masked word prediction and a diverse set of downstream tasks: POS, NER tagging, dependency tree labeling, NLI, and cross-lingual sentence retrieval. 
The proposed evaluation scheme offers a good prediction of language models' performance. Notably, we show that the system results significantly improve when tokenizers allocate more vocabulary units for specific languages. Our investigation shows that this aspect has a bigger influence than the *vocabulary overlap* for word-level tasks (see Figure 1). To the best of our knowledge, the interactions between multilingual *vocabulary allocation* and *vocabulary overlap* have not been investigated in past research. ## 2 Multilingual Subword Tokenization The majority of the currently deployed models use subword tokenization as a way to pre-process the input texts. The input is represented as a sequence of units from a finite vocabulary, which can be translated into numeric representation by an input embedding layer. The benefits of subword tokenization are the ability to obtain numeric representation for meaningful words frequently used in the resources and handling less frequent words by splitting them into subwords. The latter property mitigates the problem of out-of-vocabulary (OOV) words by breaking them down into smaller parts (sub-words) already present in the vocabulary. It is crucial in handling multilingual texts, especially in languages with large vocabularies and complex morphology. In the following section, we describe two widely used algorithms of subword tokenization: ## 2.1 Background: Subword Tokenization Byte-pair encoding BPE: (Sennrich et al., 2016) is a subword tokenization method that iteratively replaces the most frequent pair of vocabulary units in the input text with a single unit. The process starts with taking unique characters of the training text as the initial vocabulary. Subsequently, we take the most frequent pair of vocabulary units, merge the pair, and add it as a new unit to the vocabulary. This process is repeated until a pre-set vocabulary size N is reached. Unigram LM: (Kudo, 2018) is the method of obtaining subword vocabulary that was first introduced as the underlying tokenizer of SentencePiece algorithm (Kudo and Richardson, 2018). The prerequisite is obtaining an extensive vocabulary, e.g., consisting of all strings present in data with at most, a predefined number of characters. The expectation-maximization algorithm is used to estimate the probability of vocabulary units. After EM convergence, the portion of units with the lowest contribution to the likelihood of the training corpus is removed from the vocabulary. The procedure is repeated until the pre-set vocabulary size is obtained. ## 2.2 Combining Monolingual Tokenizers Rust et al. (2021) observed that subword tokenizers trained on monolingual data outperform multilingual ones. The latter can overrepresent the subwords specific to languages constituting a large portion of the training corpora (e.g., English). Moreover, their vocabulary is less likely to contain morphemes important in modeling low-resource languages and instead prioritizes less meaningful character sequences appearing across languages. To alleviate this issue, we suggest utilizing monolingual tokenizers for multilingual tokenization. First, the Unigram LM tokenizers are trained on separate monolingual corpora. The tokenizers are then combined to create a tokenizer suitable for multilingual data. We propose two methods for combining monolingual tokenizers: Language-specific Tokenization NOOVER-LAP: We train Unigram tokenizers for each of L considered languages with the same vocabulary size for each of the languages N L . 
In multilingual tokenization, we apply the tokenizer for a specific language separately and produce a token with language identification.3 The vocabulary consists of L segments of total size N. Naturally, tokenized texts in different languages will consist of tokens from distinct vocabulary segments. Noticeably, the same character sequence in different languages can be assigned different token ids.

3 Only the special tokens are shared across languages, e.g., "<s>", the beginning-of-sentence token.

Language-Mixed Tokenization TOKMIX: We train Unigram LM tokenizers for each of the L languages. Subsequently, we average vocabulary unit probabilities across tokenizers, sort them, and trim the vocabulary to the pre-set vocabulary size N, keeping the units with the highest probability:4

$${\hat{\theta}}=\sum_{i=1}^{L}w_{i}\theta_{i}\qquad\qquad\qquad(1)$$

where θ_i are the unit probabilities of the i-th monolingual tokenizer and w_i are the weights assigned to each language. By default, we set the weights to be uniform and equal to 1/L. Unlike NOOVERLAP, the same vocabulary units coming from distinct monolingual tokenizers are merged into one unit with averaged probability.

4 To account for possible overlaps between language-specific vocabularies, we set their sizes above N/L. This ensures that the joint vocabulary has at least N tokens.

## 2.3 Tokenizer And Model Training Setting

We initially focused on a group of 6 languages varying both in script and language family: Arabic, Chinese, Greek, Turkish, Spanish, and English. In subsequent experiments, we extend the method to 20 languages. We download 10% of the CC corpus available at https://data.statmt.org/cc-100/. Following the methodology of Conneau and Lample (2019), we subsample each language's data to ensure that the training corpus is well-balanced across languages. The sample size c_l for language l is defined by:

$$c_{l,\alpha}=c_{\min}\cdot\left(\frac{|C_{l}|}{c_{\min}}\right)^{\alpha}\qquad\qquad(2)$$

where c_min is the minimal sample size (defined by the smallest language), C_l is all data available for a language, and α is the so-called "balancing parameter". In our experiments, we set c_min to 10 M characters; C_l is, e.g., 8.8 B characters for English. We set α to 0.25, which corresponds to the balancing factor picked for XLM-Roberta (Conneau et al., 2019). The training data for the tokenizer and the model are the same. The vocabulary size N was set to 120,000. Appendix A contains technical details about our approach.

## 3 Measuring Tokenizer Properties

This section presents our in-depth analytical approach to evaluating different aspects of multilingual tokenization. We introduce non-parametric measures that describe the key properties of multilingual tokenizers: the quality of vocabulary representation for particular languages and the lexical overlap across languages. We base our analysis on the empirical probability distribution of vocabulary units v ∈ V computed on the training corpus for each language l:

$$d_{l,\mathcal{V}}(v)={\frac{f(v,C_{l})}{\sum_{v\in\mathcal{V}}f(v,C_{l})}}\qquad\qquad(3)$$

The function f(v, C_l) is the number of occurrences of a vocabulary unit v in the monolingual training corpus C_l.

## 3.1 Vocabulary Allocation

We aim to quantify how well a multilingual vocabulary represents meaningful lexical units of particular languages. Our intuition is that a good lexical representation is obtained when:

1. It uses a vast portion of the multilingual vocabulary, and thus a larger part of the embedding layer is devoted to the language;
2. The text in the language is split into longer and potentially more meaningful tokens.
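For concreteness, the data balancing of Eq. 2 and the TOKMIX merging of Eq. 1 can be sketched as below. This is our illustrative reconstruction rather than the released entangled_in_scripts code; the function names and toy unit probabilities are assumptions.

```python
from collections import Counter


def subsample_size(corpus_chars, c_min=10_000_000, alpha=0.25):
    """Balanced sample size of Eq. 2: c_l = c_min * (|C_l| / c_min) ** alpha."""
    return int(c_min * (corpus_chars / c_min) ** alpha)


def tokmix_merge(monolingual_probs, vocab_size, weights=None):
    """TOKMIX merging of Eq. 1: weighted average of unit probabilities across
    monolingual Unigram tokenizers, trimmed to the vocab_size most probable units."""
    L = len(monolingual_probs)
    weights = weights or [1.0 / L] * L
    merged = Counter()
    for w, probs in zip(weights, monolingual_probs):
        for unit, p in probs.items():
            merged[unit] += w * p  # identical units from different languages are merged
    return dict(merged.most_common(vocab_size))


# Toy example: two languages with a shared unit "ing".
en = {"▁the": 0.4, "▁and": 0.3, "ing": 0.3}
es = {"▁de": 0.5, "▁la": 0.3, "ing": 0.2}
print(subsample_size(8_800_000_000))      # sample size in characters for English
print(tokmix_merge([en, es], vocab_size=4))
```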
Vocabulary Allocation: Average Rank. To measure the number of vocabulary units available for modeling specific languages, we propose an estimation of the average rank of vocabulary units in the distribution over a monolingual corpus.5 This measure denotes how many tokens are typically considered by a language model that has access to language identity information but no context (a probabilistic unigram LM).

$$\mathrm{AR}_{l,\mathcal{V}}=\sum_{v\in\mathcal{V}}\mathrm{rank}(v,d_{l,\mathcal{V}})\,d_{l,\mathcal{V}}(v)\qquad\qquad(4)$$

Our intuition is that the model will have better information about the language's lexicon when the probability mass is distributed over a larger number of tokens, as more parameters of the input embedding layer are allocated to represent language-specific features. Moreover, larger vocabularies tend to cover longer and more meaningful units.

5 In this context, rank is the position of unit v in the vocabulary V sorted in descending order by the probability distribution d_{l,V}.

Vocabulary Allocation: Characters per Token. In line with the previous intuition, longer tokens have a more meaningful representation. Therefore, we measure text fragmentation by computing the average number of characters per vocabulary unit in the monolingual corpus C_l:

$$\mathrm{CPT}_{l,\mathcal{V}}={\frac{|C_{l}|}{|T_{\mathcal{V}}(C_{l})|}}\qquad\qquad(5)$$

T_V(C_l) is the tokenization of the corpus with vocabulary V; |C_l| is the size of the corpus measured as the number of characters. We choose the number of characters as the unit to relate to because it is not susceptible to cross-lingual differences regarding word boundaries and the average length of words. Still, the amount of information conveyed by a single character varies largely with the writing system, e.g., texts written in logographic scripts (e.g., Chinese, Japanese) tend to be shorter in the number of characters than similarly informative ones in a phonetic script (e.g., Latin) (Perfetti and Liu, 2005).

## 3.2 Vocabulary Overlap

Another important property of a multilingual vocabulary is the sharing of lexical units across languages. Previous works claimed that vocabulary overlap improves cross-lingual transfer for learning downstream tasks (Pires et al., 2019; Wu and Dredze, 2019). We measure overlap as the divergence between the corpora distributions d_l (defined in Equation 3). We use the Jensen-Shannon divergence.6 We apply JSD because it is symmetric and applicable to distributions with different supports. The latter is often the case when distributions are estimated for languages with distinct writing systems.

$$\mathrm{JSD}(d_{l1,\mathcal{V}}\,\|\,d_{l2,\mathcal{V}})=\frac{1}{2}\sum_{v\in\mathcal{V}}d_{l1,\mathcal{V}}(v)\log_{2}\frac{d_{l1,\mathcal{V}}(v)}{m_{l1,l2,\mathcal{V}}(v)}+\frac{1}{2}\sum_{v\in\mathcal{V}}d_{l2,\mathcal{V}}(v)\log_{2}\frac{d_{l2,\mathcal{V}}(v)}{m_{l1,l2,\mathcal{V}}(v)}\qquad(6)$$

where:

$$m_{l1,l2,\mathcal{V}}=\frac{1}{2}d_{l1,\mathcal{V}}+\frac{1}{2}d_{l2,\mathcal{V}}\qquad\qquad(7)$$

6 In the NLP literature, JSD is also known as "information radius" (Manning and Schütze, 2001).

JSD is bounded in the range 0 to 1. The lower the value, the larger the overlap across corpora. Another possibility to quantify overlap is to count the unique vocabulary units appearing in tokenized texts across languages. The advantage of the divergence is that it reflects the frequency of shared tokens across corpora. It is also less affected by the choice of the data size used for estimating the empirical probability distributions (d_l).

## 4 Evaluating Language Modeling And Downstream Tasks

In this section, we present the tasks and measures for the evaluation of multilingual language models trained with different tokenizers.
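Before moving to the model-level evaluation, the tokenizer measures just defined (Eqs. 3 to 7) can be computed directly from token counts, as in the following sketch. It mirrors the definitions above but is not taken from the released code; the function names and toy corpora are ours.

```python
import math
from collections import Counter


def unit_distribution(token_counts: Counter) -> dict:
    """Empirical distribution d_{l,V} of Eq. 3 from token counts of corpus C_l."""
    total = sum(token_counts.values())
    return {v: c / total for v, c in token_counts.items()}


def average_rank(dist: dict) -> float:
    """AR of Eq. 4: expected rank of a unit under its own distribution
    (rank 1 = most frequent unit)."""
    ranked = sorted(dist.values(), reverse=True)
    return sum(rank * p for rank, p in enumerate(ranked, start=1))


def chars_per_token(corpus_texts: list, tokens: list) -> float:
    """CPT of Eq. 5: corpus length in characters divided by the number of tokens."""
    return sum(len(text) for text in corpus_texts) / len(tokens)


def js_divergence(d1: dict, d2: dict) -> float:
    """JSD of Eqs. 6-7 with base-2 logs, bounded in [0, 1]; lower = larger overlap."""
    jsd = 0.0
    for v in set(d1) | set(d2):
        p, q = d1.get(v, 0.0), d2.get(v, 0.0)
        m = 0.5 * p + 0.5 * q
        if p > 0:
            jsd += 0.5 * p * math.log2(p / m)
        if q > 0:
            jsd += 0.5 * q * math.log2(q / m)
    return jsd


# Toy usage with already-tokenized corpora.
en_tokens = ["▁the", "▁cat", "▁sat", "▁the", "▁mat"]
es_tokens = ["▁el", "▁gato", "▁el", "▁mat"]
d_en = unit_distribution(Counter(en_tokens))
d_es = unit_distribution(Counter(es_tokens))
print(average_rank(d_en), js_divergence(d_en, d_es))
```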
## 4.1 Language Modeling We evaluate the masked language modeling performance with mean reciprocal rank: $$\mathrm{MRR}=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{\mathrm{rank}(x_{i},\hat{P}(\cdot|X\setminus x_{i}))}\quad\mathrm{(8)}$$ where Pˆ(·|X \ xi) is the probability over vocabulary of predicting token xi by the model given its context: X \ xi. ## 4.2 Downstream Evaluation The downstream tasks are taken from the XTREME (Hu et al., 2020), which is the collection of diverse datasets with predefined splits used to evaluate multilingual models' representation. We probe the models' output representation to evaluate how useful the learned representation is for the downstream tasks. Only an additional linear layer is trained for the task, while the base model representation is frozen. The approach is suitable for evaluating how well the pre-trained model encodes linguistic phenomena as it does not change parameters learned in pre-training in contrast to regular fine-tuning (Conneau et al., 2018a; Belinkov, 2022). Word-level Tasks The first set of tasks covers classification on a single word or word pair level. The probe is a linear layer taking word representations on input and outputting one of the classes. For word representations, we take the model's output embedding of the first subwords. We evaluate the results with an F1 score averaged across classes (macro-average). | ar | tr | zh | el | es | en | | | |---------|-----------|------|------|------|------|------|------| | Unigram | 2129 | 2719 | 5919 | 2070 | 1439 | 1513 | | | BPE | 2972 | 3226 | 4294 | 2907 | 2220 | 2143 | | | AR | NoOverlap | 2537 | 2653 | 2090 | 2065 | 1661 | 1597 | | TokMix | 3485 | 4167 | 3961 | 2639 | 1999 | 1898 | | | Unigram | 3.16 | 4.01 | 1.84 | 3.5 | 3.88 | 3.91 | | | BPE | 3.7 | 4.19 | 2.03 | 3.97 | 4.34 | 4.22 | | | CPT | NoOverlap | 3.53 | 4.19 | 1.56 | 3.81 | 4.15 | 4.15 | | TokMix | 3.7 | 4.45 | 1.73 | 3.9 | 4.24 | 4.18 | | We test syntactic tasks: **Part of Speech** and Dependency labeling on Universal Dependencies (de Marneffe et al., 2021) and **Named Entity** Recognition on Wikiann dataset (Pan et al., 2017). In dependency labeling, we use edge probe (Tenney et al., 2019) on top of the representation of two words connected by the dependency arc. Sentence-level Tasks In this set of tasks, we examine whether the model learns sentence-level representations that capture its semantics and can be transferred across languages. To obtain this sentence embedding, we average the model's output representation across all the tokens in the sentence. We evaluate **Natural Language Inference** on XNLI dataset (Conneau et al., 2018b) and **Sentence** Retrieval on Tatoeba bitext corpus (Artetxe and Schwenk, 2019). For NLI, we use edge probing. Sentence retrieval is solved by an unsupervised algorithm matching sentences based on their cosine similarity. In Appendix A.3, we provide details of the datasets and probe training. ## 4.2.1 In-Language Vs. Cross-Lingual Transfer For all the downstream tasks, except sentence retrieval, we compute in-language performance by training the probe and evaluating it on held-out test data in the same language. We quantify crosslingual transfer by training a probe on one language (source) and evaluating it on the test set for another language (target). ## 5 Experiments And Results We train four tokenizers for the smaller set of diverse 6 languages (en, es, tr, el, zh, ar) using existing methods: Unigram, BPE, and our methods for monolingual tokenizer merging: NOOVER-LAP, TOKMIX. 
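As a brief aside on the protocol of Section 4.1, the MRR of Eq. 8 can be computed from per-position prediction scores as sketched below; the array names are illustrative and the snippet assumes the masked-LM scores over the vocabulary are already available.

```python
import numpy as np


def mean_reciprocal_rank(scores: np.ndarray, gold_ids: np.ndarray) -> float:
    """MRR of Eq. 8: average of 1 / rank of the gold token in the predicted
    distribution at each masked position (rank 1 = highest-scoring token)."""
    # Rank of the gold token = 1 + number of tokens scored strictly higher.
    gold_scores = scores[np.arange(len(gold_ids)), gold_ids]
    ranks = 1 + (scores > gold_scores[:, None]).sum(axis=1)
    return float(np.mean(1.0 / ranks))


# Toy example: 2 masked positions, vocabulary of size 5.
scores = np.array([[0.1, 2.0, 0.3, -1.0, 0.0],
                   [1.5, 0.2, 0.1,  0.4, 3.0]])
gold_ids = np.array([1, 4])                    # gold tokens are ranked first
print(mean_reciprocal_rank(scores, gold_ids))  # -> 1.0
```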
Using these tokenizers, we then train four models7following the settings of XLM-7Details about the pretraining and probing procedures are described in Appendix A.2 ![4_image_0.png](4_image_0.png) Roberta (Conneau et al., 2019) which we then use for the probing experiments. In Section 5.1, we analyze the distribution of learned vocabulary units and compute vocabulary allocation and *vocabulary overlap* measures described in Section 3. Then in Section 5.2, we evaluate the models' performance measures introduced in Section 4 and compare them with the measures for tokenizers. Subsequently, we repeat the analysis for the broader set of 20 diverse languages (including six mentioned earlier and: he, ka, ur, hi, mr, th, ta, te, bg, ru, sw, vi, fr, de) with three tokenization methods used in three pre-trained models. In this setting, we do not use NOOVERLAP tokenizer, which cannot be trained effectively due to the necessity of constraining vocabulary for each language to N L = 6, 000. ## 5.1 Evaluation Of Tokenizers' Properties Vocabulary allocation **largely varies throughout languages and tokenization methods.** Table 1 shows that the average rank noticeably differs across languages. The highest AR is observed for Chinese, which is caused by the fact that logographic scripts require an extensive vocabulary capacity to encode all characters. Multilingual *vocabulary allocation* is highly dependent on the tokenization method used. Vocabulary learned with Unigram underperforms BPE and | V. Allocation | MLM | NER | POS | Dep. labeling | NLI | | | |-----------------|-------|-------|-------|-----------------|-----------|-----------|-----------| | (AR) | (CPT) | (MRR) | (F1) | (F1) | (F1) | (Acc) | | | Unigram | 2042 | 3.17 | 42.0 | 62.8 ±0.1 | 57.1 ±0.2 | 48.1 ±0.4 | 53.4 ±0.5 | | BPE | 2193 | 4.47 | 35.6 | 70.4 ±0.1 | 68.9 ±0.2 | 58.7 ±0.4 | 53.3 ±0.3 | | NoOverlap | 1829 | 3.16 | 42.7 | 69.4 ±0.1 | 69.2 ±0.2 | 58.8 ±0.3 | 53.0 ±0.4 | | TokMix | 2198 | 3.34 | 38.7 | 70.2 ±0.1 | 67.3 ±0.1 | 57.3 ±0.4 | 53.3 ±0.4 | | (a) 6 languages | | | | | | | | | V. Allocation | MLM | NER | POS | Dep. labeling | NLI | | | | (AR) | (CPT) | (MRR) | (F1) | (F1) | (F1) | (Acc) | | | Unigram | 623 | 2.89 | 52.6 | 58.9 ±0.2 | 54.0 ±0.4 | 43.7 ±0.4 | 53.2 ±0.3 | | BPE | 809 | 3.43 | 40.5 | 66.3 ±0.2 | 67.3 ±0.4 | 54.5 ±0.5 | 53.5 ±0.3 | | TokMix | 689 | 3.23 | 44.8 | 65.4 ±0.3 | 66.5 ±0.4 | 53.9 ±0.5 | 52.3 ±0.3 | V. Allocation MLM (AR) (CPT) (MRR) CPT **0.790** - - MRR **-0.723 -0.913** - NER **0.394 0.657 -0.745** POS 0.320 **0.724 -0.754** Dep l. 0.266 **0.675 -0.695** NLI **0.56** 0.388 **-0.437** TOKMIX in both average rank and character per token. Table 7 presented in the Appendix shows that this trend exists throughout languages except for Chinese. This suggests that our vanilla Unigram is a suboptimal multilingual vocabulary learner. It is important to note that NOOVERLAP scores even lower than Unigram in the *vocabulary allocation* measures due to the limited vocabulary size for each language and disallowing overlap. However, as shown in the next sections, LM trained with this tokenizer can achieve good results on some tasks. The choice of tokenization method affects *vocabulary overlap*. Figure 2 shows Jensen-Shanon divergencies between the vocabularies of six languages. We observe that the highest cross-lingual overlaps appear in the vocabulary obtained by Unigram, followed by TOKMIX, and BPE. Expectedly, we do not observe overlaps for NOOVERLAP's setting (JSD = 1). 
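A divergence matrix of the kind visualized in Figure 2 can be assembled by applying Eq. 6 to every language pair. The sketch below is illustrative: it uses SciPy's jensenshannon, which returns the square root of the divergence (hence the squaring), and toy tokenized corpora in place of the CC-100 samples.

```python
import numpy as np
from collections import Counter
from scipy.spatial.distance import jensenshannon


def jsd_matrix(corpora_tokens: dict):
    """Pairwise Jensen-Shannon divergences between per-language empirical unit
    distributions; 0 = identical usage, 1 = fully disjoint vocabularies."""
    langs = sorted(corpora_tokens)
    vocab = sorted({t for toks in corpora_tokens.values() for t in toks})
    index = {t: i for i, t in enumerate(vocab)}

    # One probability vector per language over the shared vocabulary.
    dists = np.zeros((len(langs), len(vocab)))
    for row, lang in enumerate(langs):
        for tok, cnt in Counter(corpora_tokens[lang]).items():
            dists[row, index[tok]] = cnt
        dists[row] /= dists[row].sum()

    mat = np.zeros((len(langs), len(langs)))
    for i in range(len(langs)):
        for j in range(len(langs)):
            # SciPy returns the JS distance (sqrt of the divergence), so square it.
            mat[i, j] = jensenshannon(dists[i], dists[j], base=2) ** 2
    return langs, mat


langs, mat = jsd_matrix({
    "en": ["▁the", "▁cat", "▁sat"],
    "es": ["▁el", "▁gato", "▁sat"],
    "zh": ["猫", "坐"],
})
print(langs)
print(mat.round(2))
```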
Jensen-Shanon divergence is a good predictor of whether the languages share the script. For all tokenization methods, the divergence is significantly smaller in the bottom-right square grouping of the languages using Latin script. This effect is even more visible in the visualization of JSD computed for twenty languages (Figure 8 in Appendix C). ## 5.2 Tokenizer Properties Impact Language Model'S Performance High *vocabulary allocation* **improves downstream results for word-level tasks.** In Table 2a, we observe that the choice of the tokenization method significantly impacts the results for POS, dependency labeling, and NER. We presume it results from learning good lexical representations throughout languages, e.g., by BPE and TOKMIX. The higher *vocabulary allocation* is especially beneficial for word-level tasks. Whereas the influence on the sentence-level task (NLI) is minimal. Notably, the model instance with NOOVERLAP tokenizer achieves the best F1 in POS and dependency labeling despite underperforming in *vocabulary allocation*. It is the result of learning languagespecific representation for tokens that is especially useful for syntactic tasks. Better MLM performance doesn't bring improvement to downstream tasks. In Table 2a, we observe that the models performing better on masked token prediction (MRR) tend to be worse on downstream tasks (POS and NER). It is the result of different average ranks. The higher it is, the more vocabulary units a language model needs to consider for masked token filling, making | Different | Same | All | | | | | | |--------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------| | Metric | Tokenizer | script | script | transfers | Different | Same | All | | Tokenizer | script | script | transf | | | | | | Unigram | 0.75 | 0.58 | 0.73 | | | | | | BPE | 0.83 | 0.67 | 0.81 | | | | | | Unigram | 0.77 | 0.62 | 0.74 | | | | | | BPE | 0.83 | 0.68 | 0.8 | | | | | | NoOverlap | 1.0 | 1.0 | 1.0 | | | | | | TokMix | 0.8 | 0.65 | 0.77 | | | | | | Overlap (JSD) | TokMix | 0.8 | 0.64 | 0.78 | | | | | Unigram | 31.3 ±0.4 | 55.4 ±0.2 | 36.1 ±0.4 | | | | | | BPE | 33.5 ±0.5 | 59.9±0.2 | 38.7 ±0.4 | | | | | | NoOverlap | 32.0 ±0.5 | 48.6 ±0.4 | 35.3 ±0.5 | | | | | | TokMix | 31.8 ±0.4 | 58.0 ±0.3 | 37.0 ±0.4 | Unigram | 33.2 ±0.5 | 50.7 ±0.6 | 35.4 ±0.5 | | BPE | 36.6 ±0.6 | 54.3 ±0.3 | 38.8 ±0.5 | | | | | | NER (F1) | TokMix | 36.5 ±0.6 | 53.7 ±0.5 | 38.7 ±0.6 | | | | | Unigram | 23.4 ±0.5 | 32.9 ±0.3 | 24.6 ±0.5 | | | | | | BPE | 30.5 ±0.6 | 40.7 ±0.4 | 31.8 ±0.6 | | | | | | Unigram | 18.1 ±0.4 | 38.3 ±0.4 | 22.2 ±0.4 | | | | | | BPE | 25.8 ±0.5 | 40.8 ±0.4 | 28.8 ±0.5 | | | | | | NoOverlap | 20.1 ±0.5 | 41.9 ±0.5 | 24.5 ±0.5 | | | | | | TokMix | 21.9 ±0.4 | 40.4 ±0.3 | 25.6 ±0.4 | | | | | | POS (F1) | TokMix | 29.2 ±0.5 | 40.4 ±0.3 | 30.7 ±0.5 | | | | | Unigram | 13.0 ±0.6 | 15.6 ±0.5 | 13.4 ±0.6 | | | | | | BPE | 16.5 ±0.6 | 19.2 ±0.5 | 16.9 ±0.5 | | | | | | Unigram | 11.1 ±0.3 | 25.5 ±0.3 | 14.0 ±0.3 | | | | | | BPE | 15.9 ±0.4 | 27.0 ±0.4 | 18.1 ±0.4 | | | | | | NoOverlap | 12.8 ±0.4 | 27.8 ±0.5 | 15.8 ±0.4 | | | | | | TokMix | 12.6 ±0.5 | 26.1 ±0.3 | 15.3 ±0.5 | | | | | | Dep. 
labeling (F1) | TokMix | 16.0 ±0.5 | 19.4 ±0.4 | 16.5 ±0.5 | | | | | Unigram | 37.3 ±0.5 | 37.5 ±0.4 | 37.4 ±0.5 | | | | | | BPE | 36.2 ±0.5 | 38.7 ±0.5 | 36.7 ±0.5 | | | | | | Unigram | 42.2 ±0.7 | 43.7 ±0.7 | 42.5 ±0.7 | | | | | | BPE | 42.4 ±0.7 | 45.2 ±0.8 | 43.0±0.7 | | | | | | NoOverlap | 37.3 ±0.6 | 37.1 ±0.5 | 37.2 ±0.6 | | | | | | TokMix | 41.2 ±0.7 | 42.7 ±0.5 | 41.5 ±0.7 | | | | | | NLI (Acc) | TokMix | 37.8 ±0.5 | 39.2 ±0.5 | 38.1 ±0.5 | | | | | Unigram | 44.1 | 44.4 | 44.2 | | | | | | BPE | 44.1 | 49.1 | 45.1 | | | | | | Unigram | 21.0 | 43.9 | 25.6 | | | | | | BPE | 20.9 | 40.7 | 24.9 | | | | | | NoOverlap | 12.3 | 28.0 | 15.4 | | | | | | TokMix | 23.0 | 43.4 | 27.1 | | | | | | (a) 6 languages | | | | | | | | | Retrieval (Acc) | TokMix | 42.8 | 46.9 | 43.6 | | | | | (b) 20 languages | | | | | | | | masked word prediction harder. At the same time, a high average rank means that the vocabulary is broader and contains lexical units important for downstream tasks. Again, this trend does not hold for the results for NOOVERLAP setting, in which the search space for the masked-word problem is limited to the language-specific tokens leading to the best performance in MLM and syntactic tasks (POS and dependency label prediction). In Table 3, we show that the strong relationship between *vocabulary allocation* (avg. rank and CPT) and LM performance (MRR) is statistically supported. The length of token units has a strong positive influence on POS, dependency labeling, and NER results (r > 0.65) and a negative influence on MRR (r < −0.9), while it does not significantly affect NLI results. The correlation between the average rank and MRR, NER scores is weaker but still significant. Moreover, it is significantly correlated with XNLI accuracy with a medium coefficient r = 0.56, even though the changes in XNLI are low across tokenizers. ## Impact Of Vocabulary Overlap **On Cross-Lingual** transfer varies across tasks. We observed that NOOVERLAP approach obtains competitive results for POS tagging . Surprisingly no vocabulary sharing also improves cross-lingual transfer in the task among languages with Latin script (shown in Table 4a and Figure 3b). We think that the reason behind the strength of NOOVERLAP approach is that some tokens have different meanings across languages, e.g., the word "a" is an indefinite article in English and a preposition in Spanish. Nevertheless, vocabulary overlap is crucial to cross-lingual transfer in some tasks. Especially NER within the same script languages (Figure 3a) and sentence-level tasks. For these tasks, NOOVER-LAP significantly underperforms other tokenization methods. The drop within Latin script languages is in the range: 6.8 - 11.3% for NER and 12.7 - 15.9% for sentence retrieval. In these cases, usage of the same tokens can indicate that texts refer to the same entities across languages, e.g., names are usually the same strings in the languages sharing writing system. ![7_image_0.png](7_image_0.png) (a) NER (F1) ![7_image_1.png](7_image_1.png) (b) POS (F1) V. Overlap V. Allocation SRC V. Allocation TGT (JSD) (AR) (CPT) (AR) (CPT) | NER | -0.111 | 0.249 | 0.33 | 0.209 | 0.28 | |-----------|----------|---------|--------|---------|--------| | POS | 0.395 | 0.365 | 0.547 | 0.489 | 0.653 | | Dep l. | 0.463 | 0.19 | 0.425 | 0.249 | 0.44 | | NLI | -0.516 | 0.421 | 0.203 | 0.297 | 0.103 | | Retrieval | -0.648 | 0.235 | 0.082 | 0.238 | 0.085 | Table 5 presents the correlations for crosslingual transfer scores with JSD measuring *vocabulary overlap*. 
The coefficient supports our previous observation that lower overlap (thus higher JSD) improves transfer for POS tagging and dependency labeling and deteriorates it for other tasks. Although, the correlation for NER is not significant. The *vocabulary allocation*s of source and target languages significantly influence the cross-lingual transfers. Similarly to the in-language correlations, the influence of character per token is more substantial on word-level tasks, while Average Rank affects sentence-level tasks to a larger extent. This observation underlines the importance of allocating a sufficient portion of vocabulary for low-resource for better cross-lingual transfer. 8 Results generalize to the larger set of languages. The key observation for six language sets holds in the model trained for twenty languages. Table 2b shows that BPE and TOKMIX obtain better *vocabulary allocation* than Unigram leading to improved results for word-level downstream tasks (NER, POS, Dependency labeling). Due to the smaller vocab size to the language number ratio, average ranks decrease for all methods. We observe in Table 4b that the cross-language 8We describe the correlation analysis in detail in Appendix C.3. vocabulary overlap is the highest for Unigram and lowest for BPE, similar to the six languages settings. However, the association between *vocabulary overlap* and the cross-lingual transfers is less pronounced. ## 6 Related Work Importance of *vocabulary overlap*. Wu and Dredze (2019); Pires et al. (2019) claimed that multilingual overlap benefits cross-lingual transfer. In contrast to this work, they compare overlaps for different language pairs with only one tokenizer. We think that their observations may be confounded by the typological similarity between languages. In the following works, Conneau et al. (2020) found that sharing parameters in top layers is more important to multilingualism than same token embedding. Similar results were demonstrated by Wang et al. (2021); Dufter and Schütze (2020) who show that in bilingual models, artificially removing *vocabulary overlap* (similarly to ours NOOVERLAP) does not deteriorate cross-lingual transfer. In contrast to many previous approaches, we used probing for evaluation because this method offers better insight into representation learned in pre-training. Similarly, our results, Malkin et al. (2022); Limisiewicz et al. (2022) observed that differences in scripts could, in some cases, improve the crosslingual transfer in masked language modeling and for downstream tasks. Importance of *vocabulary allocation*. The effect of *vocabulary allocation* on model performance was studied to a lower extent. Zheng et al. (2021) observed that limited vocabulary capacity allocated for specific languages impedes the downstream tasks' performance and thus proposed a method to obtain more balanced *vocabulary allocation* throughout languages. For the same purpose, Chung et al. (2020) proposed a novel approach to generating multilingual vocabulary based on clustering the target languages and merging separate vocabularies. Recently, Liang et al. (2023) based on the elements of both approaches and increased vocabulary to train the XLM-V model, achieving better results than its predecessor (XLM-Roberta Conneau et al. (2019)). In a monolingual setting, Bostrom and Durrett (2020) argued that Unigram tokenization produces subword tokens that are more aligned with morphological units that bring improvement for downstream tasks. 
This contrasts with our finding of Unigram's underperformance when applied to a multilingual corpus. ## Improving Multilingual Sub-Word Tokenization. Patil et al. (2022) proposed a modification to BPE algorithm that increases overlap between similar languages and benefits cross-lingual transfer. Rust et al. (2021) observed that models with dedicated monolingual tokenizers outperform multilingual ones. This observation can be utilized by adapting the embedding layer of the model for a target language (Pfeiffer et al., 2020; Artetxe et al., 2020; Minixhofer et al., 2022). However, these approaches require language-specific modification of the model, limiting its multilingual aspect. Alternatives to sub-word tokenization. There are multiple alternative approaches for inputting text into deep models, such as character-based representation (Clark et al., 2022), byte input (Xue et al., 2022), or representing the input text as images (Salesky et al., 2021). Mielke et al. (2021) summarize a wide range of methods and point out that they offer trade-offs and may be better suited for certain tasks or languages. ## 7 Conclusions We introduced a new framework for the evaluation of multilingual subword tokenizers. We show that *vocabulary allocation* is a crucial aspect affecting the results of many downstream tasks. Specifically, we have observed the following trends: 1. Including longer and more diverse vocabulary units (higher *vocabulary allocation*) improves inlanguage results and cross-lingual transfers for word-level tasks; 2. *vocabulary overlap* is beneficial for cross-lingual transfer in sentence-level tasks; 3. Among languages with the same script, vocabulary overlap improves transfer for NER and deteriorates it for POS and dependency labeling. Our conclusions are in line with the observation of Mielke et al. (2021) that there is no "silver bullet solution" tokenizer suiting all purposes. We release the code for measuring tokenizer properties: github.com/tomlimi/entangled_ in_scripts. We believe that it will be a useful evaluation tool for the developers of models who can get a better insight into the tokenization method before computationally expensive model training. ## Limitations To achieve robust, unbiased results, we decided to train first on a smaller number of languages, fix our methodology and then confirm our findings on the full set of languages. This meant that two rounds of pretraining needed to be done and because of that, we scaled our models down for computational efficiency reasons. Another limitation of our methodology is the choice to train linear probes on top of the contextualized word representations instead of the more common finetuning approach. Nevertheless, we think that probing gives better insight into the pretrained model's representation. ## Ethics Statement We do not identify ethical risks connected to this work. ## Acknowledgements We thank Jindˇrich Libovický, Martin Popel, Gabriel Stanovsky, and anonymous ACL reviewers for their valuable comments and suggestions for improvement. This work has been supported by grant 338521 of the Charles University Grant Agency. We have been using language resources and tools developed, stored, and distributed by the LINDAT/CLARIAH-CZ project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2018101). ## References Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the Cross-lingual Transferability of Monolingual Representations. 
In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 4623–4637. ArXiv:1910.11856 [cs]. Mikel Artetxe and Holger Schwenk. 2019. Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond. *Transactions of* the Association for Computational Linguistics, 7:597– 610. ArXiv:1812.10464 [cs]. Yonatan Belinkov. 2022. Probing Classifiers: Promises, Shortcomings, and Advances. *Comput. Linguistics*, 48(1):207–219. Kaj Bostrom and Greg Durrett. 2020. Byte Pair Encoding is Suboptimal for Language Model Pretraining. In *Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020*, volume EMNLP 2020 of *Findings of ACL*, pages 4617–4624. Association for Computational Linguistics. Hyung Won Chung, Dan Garrette, Kiat Chuan Tan, and Jason Riesa. 2020. Improving Multilingual Models with Language-Clustered Vocabularies. In *Proceedings of the 2020 Conference on Empirical Methods in* Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 4536–4546. Association for Computational Linguistics. Jonathan H. Clark, Dan Garrette, Iulia Turc, and John Wieting. 2022. Canine: Pre-training an Efficient Tokenization-Free Encoder for Language Representation. *Trans. Assoc. Comput. Linguistics*, 10:73–91. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised Cross-lingual Representation Learning at Scale. arXiv preprint arXiv:1911.02116. Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018a. What You Can Cram Into a Single $&!\#* Vector: Probing Sentence Embeddings for Linguistic Properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126–2136, Melbourne, Australia. Association for Computational Linguistics. Alexis Conneau and Guillaume Lample. 2019. Crosslingual Language Model Pretraining. In *Advances* in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 7057–7067. Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018b. XNLI: Evaluating Cross-lingual Sentence Representations. In *Proceedings of the 2018 Conference on Empirical Methods* in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2475–2485. Association for Computational Linguistics. Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Emerging Crosslingual Structure in Pretrained Language Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 6022–6034. Association for Computational Linguistics. Marie-Catherine de Marneffe, Christopher D. Manning, Joakim Nivre, and Daniel Zeman. 2021. Universal Dependencies. *Comput. Linguistics*, 47(2):255–308. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. 
Association for Computational Linguistics. Philipp Dufter and Hinrich Schütze. 2020. Identifying Elements Essential for BERT's Multilinguality. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4423–4437, Online. Association for Computational Linguistics. Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A Massively Multilingual Multitask Benchmark for Evaluating Cross-lingual Generalization. *CoRR*, abs/2003.11080. Taku Kudo. 2018. Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates. In *Proceedings of the 56th* Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 66–75. Association for Computational Linguistics. Taku Kudo and John Richardson. 2018. SentencePiece: A Simple and Language Independent Subword Tokenizer and Detokenizer for Neural Text Processing. In *Proceedings of the 2018 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics. H. W. Kuhn. 1955. The Hungarian Method for the Assignment Problem. *Naval Research Logistics Quarterly*, 2(1-2):83–97. Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, and Madian Khabsa. 2023. XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models. *CoRR*, abs/2301.10472. Tomasz Limisiewicz, Dan Malkin, and Gabriel Stanovsky. 2022. You Can Have Your Data and Balance It Too: Towards Balanced and Efficient Multilingual Models. *CoRR*, abs/2210.07135. Ilya Loshchilov and Frank Hutter. 2019. Decoupled Weight Decay Regularization. In *7th International* Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Dan Malkin, Tomasz Limisiewicz, and Gabriel Stanovsky. 2022. A Balanced Data Approach for Evaluating Cross-Lingual Transfer: Mapping the Linguistic Blood Bank. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 4903–4915. Association for Computational Linguistics. Christopher D. Manning and Hinrich Schütze. 2001. Foundations of Statistical Natural Language Processing. MIT Press. Sabrina J. Mielke, Zaid Alyafeai, Elizabeth Salesky, Colin Raffel, Manan Dey, Matthias Gallé, Arun Raja, Chenglei Si, Wilson Y. Lee, Benoît Sagot, and Samson Tan. 2021. Between Words and Characters: A Brief History of Open-Vocabulary Modeling and Tokenization in NLP. *ArXiv*, abs/2112.10508. Benjamin Minixhofer, Fabian Paischer, and Navid Rekabsaz. 2022. WECHSEL: Effective Initialization of Subword Embeddings for Cross-Lingual Transfer of Monolingual Language Models. In *Proceedings of* the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 3992– 4006. Association for Computational Linguistics. Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross-lingual Name Tagging and Linking for 282 Languages. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1946–1958. Association for Computational Linguistics. Vaidehi Patil, Partha P. Talukdar, and Sunita Sarawagi. 2022. Overlap-based Vocabulary Generation Improves Cross-lingual Transfer Among Related Languages. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 219–233. Association for Computational Linguistics. Charles A. Perfetti and Ying Liu. 2005. Orthography to Phonology and Meaning: Comparisons Across and Within Writing Systems. *Reading and Writing*, 18(3):193–210. Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Sebastian Ruder. 2020. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 7654– 7673. Association for Computational Linguistics. Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How Multilingual is Multilingual BERT? In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4996–5001. Association for Computational Linguistics. Afshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Massively Multilingual Transfer for NER. In *Proceedings of the 57th Annual Meeting of the Association for* Computational Linguistics, pages 151–164, Florence, Italy. Association for Computational Linguistics. Phillip Rust, Jonas Pfeiffer, Ivan Vulic, Sebastian Ruder, and Iryna Gurevych. 2021. How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 3118–3135. Association for Computational Linguistics. Elizabeth Salesky, David Etter, and Matt Post. 2021. Robust Open-Vocabulary Translation from Visual Text Representations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 7235–7252. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In *Proceedings of the 54th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics. Xinying Song, Alex Salcianu, Yang Song, Dave Dopson, and Denny Zhou. 2021. Fast WordPiece Tokenization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 2089– 2103. Association for Computational Linguistics. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What Do You Learn From Context? Probing for Sentence Structure In Contextualized Word Representations. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Xinyi Wang, Sebastian Ruder, and Graham Neubig. 2021. Multi-view Subword Regularization. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, NAACLHLT 2021, Online, June 6-11, 2021, pages 473–482. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 - Demos, Online, November 16-20, 2020, pages 38–45. Association for Computational Linguistics. Shijie Wu and Mark Dredze. 2019. Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and* the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 833–844. Association for Computational Linguistics. Linting Xue, Aditya Barua, Noah Constant, Rami AlRfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. 2022. ByT5: Towards a TokenFree Future With Pre-Trained Byte-to-Byte Models. ArXiv:2105.13626 [cs]. Bo Zheng, Li Dong, Shaohan Huang, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, and Furu Wei. 2021. Allocating Large Vocabulary Capacity for Cross-Lingual Language Model Pre-Training. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 3203–3215. Association for Computational Linguistics. ## A Technical Details A.1 Tokenizer Training Details We use the Huggingface Tokenizers library for training the Unigram and BPE tokenizers. We kept the default values for the training parameters. Namely, for Unigram, we use a maximum piece length of 16 and a shrinking factor of 0.75. For BPE, we use alphabet size 1000 and minimum merge frequency 2. For all languages, we use SentencePiece (Kudo and Richardson, 2018) for word segmentation techniques instead of languagespecific word tokenizers. ## A.2 Model Architecture And Pre-Training In this study, we employed the Huggingface library (Wolf et al., 2020) to conduct all experiments. The model architecture is based on XLM-Roberta, although for our purposes, it was scaled down. Specifically, the size of the embeddings is 768, the number of attention layers is 8, and the number of attention heads is 6. The maximum sentence length is 128, and the vocabulary size is 120000. The number of parameters is 150M and, therefore, roughly 2 times smaller than the XLM-Roberta base model. The model was pre-trained for 10 epochs with a batch size of 1024. The learning rate was 5e-5 with linear decay and weight decay and 1% warm-up steps. 
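To make the scaled-down architecture concrete, below is a minimal sketch of how a comparable model could be instantiated with the HuggingFace Transformers library. The explicit values mirror the numbers stated above; the remaining settings (e.g., `intermediate_size`, the position-embedding offset) are library defaults and our assumptions, and this is not the authors' released training code.

```python
from transformers import XLMRobertaConfig, XLMRobertaForMaskedLM

# Scaled-down XLM-Roberta-style configuration as described above:
# 768-dim embeddings, 8 layers, 6 attention heads, sequence length 128,
# and a 120,000-item vocabulary (roughly half the size of XLM-R base).
config = XLMRobertaConfig(
    vocab_size=120_000,
    hidden_size=768,
    num_hidden_layers=8,
    num_attention_heads=6,
    max_position_embeddings=128 + 2,  # +2 offset used by RoBERTa-style position ids (assumption)
    intermediate_size=3072,           # library default; not stated in the paper
)
model = XLMRobertaForMaskedLM(config)
print(f"{model.num_parameters() / 1e6:.0f}M parameters")  # roughly 150M, as reported above
```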
In pretraining, we used the AdamW optimizer (Loshchilov and Hutter, 2019). In total, we pretrained 7 models. The models were trained on 3 Nvidia GPUs. The probing experiments were run on 1 Nvidia GPU with 40GB of memory (Nvidia A40). The pretraining took about 17 hours for each 6-language model and 60 hours for the models trained on the full set of 20 languages. We didn't pursue any extensive hyperparameter search efforts as this was not the focus of our work. We selected the best batch size and learning rates for the pre-training based on a few trials.

## A.3 Downstream Data And Training

The probes were trained for 30 epochs with early stopping and batch size 16. We used an initial learning rate of 2e-5. Other training parameters were the same as in pretraining. Probing experiments took between 5 and 180 minutes to complete on the same infrastructure as used for pretraining. We ran around 360 probe trainings.

**POS** We use Part of Speech annotations from Universal Dependencies (de Marneffe et al., 2021). The dataset is available for 17 of the languages analyzed by us (not covered: Swahili, Thai, Georgian). Each word is assigned one of the 17 coarse POS tags.

**NER** We use the WikiANN dataset (Pan et al., 2017), consisting of Wikipedia articles with named entities of three types annotated in the IOB2 format: location, person, and organization. Following XTREME, we use the balanced data splits from Rahimi et al. (2019).

**Dependency labeling** As in Part of Speech, we use Universal Dependencies (de Marneffe et al., 2021) for the dependency relation annotations. We use the largest UD treebank available for each language. For each word, we predict one of the 37 universal relations to its head word. Because the relation is between two words, we use the concatenation of the two word representations along with their element-wise product as the input to the probe ([hw1; hw2; hw1 ⊙ hw2]).

**NLI** We use the XNLI dataset (Conneau et al., 2018b) for Natural Language Inference. We train the linear classification probe on top of the concatenation of two sentence vectors and their element-wise product: [hs1; hs2; hs1 ⊙ hs2]. We predict which of three relations holds between the first sentence (the premise) and the second sentence (the hypothesis): the premise contradicts, entails, or is neutral to the hypothesis. We evaluate XNLI with classification accuracy. XNLI contains data for 15 languages (not covered: te, ta, mr, he, ka).

**Sentence Retrieval** We use up to 1,000 sentences aligned for pairs of languages from the Tatoeba dataset (Artetxe and Schwenk, 2019). For the pairs including English, we use the same sample as in the XTREME data collection. For other pairs, we perform the sampling ourselves. We compute the cosine similarity between sentence representations across languages and find the best alignment with the Hungarian algorithm (Kuhn, 1955). We compute the accuracy as the number of correctly aligned sentences divided by the total number of sentences.

## B In-Depth Tokenizers Analysis

In Figure 4, we present the probabilities of vocabulary units, computed on the concatenated corpora of the six languages, learned by the different tokenization algorithms.

![13_image_1.png](13_image_1.png)

![13_image_2.png](13_image_2.png)

Unigram and NOOVERLAP use a bigger fraction of the vocabulary for rarely appearing tokens (with probability lower than 10−6). BPE and TOKMIX produce a vast set of tokens with probabilities in the range between 10−5 and 10−6. Interestingly, the former algorithm allocates about 6000 vocabulary entries to tokens not appearing in the corpora.
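The token probabilities plotted in Figure 4 can be estimated with a simple count-based procedure; the sketch below is our illustration (the thresholds match the ranges discussed above), not the released analysis code.

```python
from collections import Counter

def vocabulary_probability_profile(tokenizer_vocab, tokenized_corpus):
    """Empirical probability of each vocabulary entry on a concatenated corpus,
    plus counts of rare and completely unused entries."""
    counts = Counter(tok for sent in tokenized_corpus for tok in sent)
    total = sum(counts.values())
    probs = {tok: counts.get(tok, 0) / total for tok in tokenizer_vocab}
    rare = sum(1 for p in probs.values() if 0 < p < 1e-6)    # rarely appearing tokens
    unused = sum(1 for p in probs.values() if p == 0)        # vocabulary entries never used
    return probs, rare, unused
```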
BPE is better than Unigram in *vocabulary allocation* **throughout languages.** To support this claim, we train Unigram and BPE tokenizers for different vocabulary sizes. We observe that both the average rank (Figure 5) and CPT (Figure 6) stop rising for vocab sizes above 250,000 (except for Chinese). For BPE, the metrics still steadily rise after this threshold, which makes it overperform Unigram for most languages. We think that the reason why Unigram does not learn valuable tokens after this point is the way the ![13_image_0.png](13_image_0.png) ![13_image_3.png](13_image_3.png) initial vocabulary is constructed, i.e., it is the set of all character n-grams appearing in the corpus with n lower than 16. In contrast to BPR, Unigram's vocabulary won't cover longer words than 16 characters, which are useful in modeling some languages. We believe that further work on identifying optimal strategies for multilingual tokenization is needed. Vocabulary units preferred by tokenizers. In Table 6, we show the tokens with the highest differences in empirical probabilities obtained with BPE and Unigram tokenizers for three languages. We see that Unigram prefers suffixes to prefixes. Also, it splits text more often into single, possibly due to lower *vocabulary allocation*. ## C Supplementary Results C.1 Visualizations We present the additional visualization for the results for transfers across six languages for the tasks not presented in the main text: Dependency labeling 7a and NLI cross-lingual accuracy 7b, Sentence retrieval accuracy 7c. The results of experiments for 20 languages: Jensen-Shanon Divergences 8, and cross-lingual transfers for POS 10a, NER 10b, dependency tree labeling 10c, XNLI 9a, sentence alignment 9b. ## C.2 Results For All Languages We also include detailed results for the in-language experiments along with the proposed tokenizer metrics. In Table 7, we present the results for the six languages. ## C.3 Correlation Analysis We present paired correlation plots for in-language metrics in Figure 11. We use the results from 20 language settings to increase the number of observations. In this analysis, we focus on the differences between the tokenization methods and want to marginalize the language-specific features (such as the pre-training and fine-tuning data size or the model's preference for Indo-European languages). Therefore, for *vocabulary allocation* measures (AR, CPT) and downstream tasks, we subtract the mean for each language. For *vocabulary overlap* measure (JSD) and transfer values, we subtract the mean value for each pair of languages. In both cases, means are computed across all tokenizers. We present Spearman's correlation coefficient and associated p-value. ![14_image_0.png](14_image_0.png) ![14_image_1.png](14_image_1.png) ![14_image_2.png](14_image_2.png) | V. Allocation (AR) V. Allocation (CPT) MLM (MRR) NER (F1) POS (F1) Dep. 
labeling (F1) NLI (Acc) | |---------------------------------------------------------------------------------------------------| ar tr zh el es en All metric tokenizer Unigram 2129 2719 5919 2070 1439 1513 2042 BPE 2972 3226 4294 2907 2220 2143 2193 NoOverlap 2537 2653 2090 2065 1661 1597 1829 TokMix 3485 4167 3961 2639 1999 1898 2198 Unigram 3.16 4.01 1.84 3.5 3.88 3.91 3.17 BPE 3.7 4.19 2.03 3.97 4.34 4.22 4.47 NoOverlap 3.53 4.19 1.56 3.81 4.15 4.15 3.16 TokMix 3.7 4.45 1.73 3.9 4.24 4.18 3.34 Unigram 36.0 36.0 34.2 46.3 49.7 49.6 42.0 BPE 28.7 33.6 28.6 38.6 43.1 41.0 35.6 NoOverlap 38.1 39.6 41.4 42.8 47.5 46.6 42.7 TokMix 31.5 30.6 38.2 41.2 45.3 45.6 38.7 Unigram 66.4 ±0.1 73.0 ±0.1 35.1 ±0.1 68.0 ±0.1 68.0 ±0.1 66.1 ±0.2 62.8 ±0.1 BPE 76.1 ±0.0 76.7 ±0.0 54.2 ±0.1 70.3 ±0.1 75.2 ±0.1 70.0 ±0.0 70.4 ±0.1 NoOverlap 76.5 ±0.1 72.8 ±0.0 58.4 ±0.1 69.6 ±0.1 71.6 ±0.1 67.3 ±0.1 69.4 ±0.1 TokMix 76.6 ±0.1 76.2 ±0.1 56.1 ±0.0 70.1 ±0.1 74.3 ±0.1 68.1 ±0.1 70.2 ±0.1 Unigram 54.8 ±0.1 46.9 ±0.2 29.3 ±0.1 52.9 ±0.3 76.5 ±0.2 81.9 ±0.1 57.1 ±0.2 BPE 66.7 ±0.1 52.1 ±0.1 62.2 ±0.0 63.4 ±0.1 81.7 ±0.4 87.4 ±0.1 68.9 ±0.2 NoOverlap 66.5 ±0.1 52.5 ±0.2 60.6 ±0.1 67.5 ±0.1 81.3 ±0.6 86.7 ±0.1 69.2 ±0.2 TokMix 66.0 ±0.1 52.1 ±0.2 56.2 ±0.0 61.7 ±0.2 81.3 ±0.2 86.3 ±0.1 67.3 ±0.1 Unigram 13.5 ±0.6 58.6 ±0.8 20.7 ±0.1 58.4 ±0.4 71.9 ±0.1 65.7 ±0.2 48.1 ±0.4 BPE 13.8 ±0.0 63.7 ±1.2 59.5 ±0.1 68.2 ±0.8 77.0 ±0.2 70.3 ±0.4 58.7 ±0.4 NoOverlap 13.2 ±0.0 65.0 ±0.5 60.5 ±0.2 67.7 ±0.2 77.1 ±0.3 69.2 ±0.3 58.8 ±0.3 TokMix 14.1 ±0.0 62.9 ±1.2 53.8 ±0.1 67.3 ±0.5 76.5 ±0.1 69.1 ±0.2 57.3 ±0.4 Unigram 52.5 ±0.3 52.9 ±0.3 47.5 ±1.4 55.0 ±0.2 55.3 ±0.3 57.4 ±0.5 53.4 ±0.5 BPE 52.2 ±0.3 53.6 ±0.5 45.2 ±0.4 55.6 ±0.3 55.7 ±0.2 57.8 ±0.2 53.3 ±0.3 NoOverlap 52.9 ±0.7 54.0 ±0.2 44.0 ±0.8 54.8 ±0.1 54.9 ±0.3 57.3 ±0.3 53.0 ±0.4 TokMix 52.0 ±0.2 53.6 ±0.5 46.2 ±1.0 55.4 ±0.3 55.3 ±0.1 57.5 ±0.2 53.3 ±0.4 ![16_image_0.png](16_image_0.png) ![16_image_1.png](16_image_1.png) Figure 10: Cross-lingual transfer for the token-level tasks on 20 languages. The absolute values are presented for the Unigram tokenizer. For other tokenization methods, we show the difference from the unigram algorithm. 
(c) Dependency labeling

![17_image_0.png](17_image_0.png)

![17_image_1.png](17_image_1.png)

(a) Part of Speech Tagging (b) Named Entity Recognition

![17_image_2.png](17_image_2.png)

![18_image_0.png](18_image_0.png)

## ACL 2023 Responsible NLP Checklist

A For Every Submission:

✓ A1. Did you describe the limitations of your work? Yes, section Limitations.

✓ A2. Did you discuss any potential risks of your work? Yes, we cannot think of many risks. One possible risk might lie in under-representing the low-resource languages, but we actually propose possible improvements on that.

✓ A3. Do the abstract and introduction summarize the paper's main claims? Yes, Section 1.

✓ A4. Have you used AI writing assistants when working on this paper?
For reformulation of our English text (this does not need to be disclosed) we also report the use of coding assistants in the README we never use long coding assistant suggestions in verbatim and check the outputs closely. B ✓ **Did you use or create scientific artifacts?** > Yes, we use SentencePiece, Huggingface - cited in section 2 and appendix. Datasets used are cited in section 4.2. We do not publish the models we trained but we publish the code to reproduce the results along with a metric-computation utility package ✓ B1. Did you cite the creators of artifacts you used? Yes, in the same sections as above ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We do not discuss the license terms in the paper. The libraries we use are licensed under the Apache 2.0 license which allows the use of the tools for research. The datasets are released for public use. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No, We don't discuss the use of the existing artifacts, nevertheless our use is consistent with their intended use. We will specify the license under which we release our code. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. We use existing, publicly released datasets. We therefore assume that these steps were already taken. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Yes, Section 4.2.1 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Yes, Section 4.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ## C ✓ **Did You Run Computational Experiments?** Yes, Section 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Yes, Appendix A.1 ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Yes, Appendix A.1 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Yes, Section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 
Yes, Appendix A.1 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
fang-etal-2023-whole
The Whole Truth and Nothing But the Truth: Faithful and Controllable Dialogue Response Generation with Dataflow Transduction and Constrained Decoding
https://aclanthology.org/2023.findings-acl.351
In a real-world dialogue system, generated text must be truthful and informative while remaining fluent and adhering to a prescribed style. Satisfying these constraints simultaneously isdifficult for the two predominant paradigms in language generation: neural language modeling and rule-based generation. We describe a hybrid architecture for dialogue response generation that combines the strengths of both paradigms. The first component of this architecture is a rule-based content selection model defined using a new formal framework called dataflow transduction, which uses declarative rules to transduce a dialogue agent{'}s actions and their results (represented as dataflow graphs) into context-free grammars representing the space of contextually acceptable responses. The second component is a constrained decoding procedure that uses these grammars to constrain the output of a neural language model, which selects fluent utterances. Our experiments show that this system outperforms both rule-based and learned approaches in human evaluations of fluency, relevance, and truthfulness.
# The Whole Truth And Nothing But The Truth: Faithful And Controllable Dialogue Response Generation With Dataflow Transduction And Constrained Decoding Hao Fang∗ Anusha Balakrishnan∗ **Harsh Jhamtani**∗ John Bufe Jean Crawford Jayant Krishnamurthy Adam Pauls Jason Eisner Jacob Andreas Dan Klein Microsoft Semantic Machines <[email protected]> ## Abstract ✔︎ In a real-world dialogue system, generated text must be truthful and informative while remaining fluent and adhering to a prescribed style. Satisfying these constraints simultaneously is difficult for the two predominant paradigms in language generation: neural language modeling and rule-based generation. We describe a hybrid architecture for dialogue response generation that combines the strengths of both paradigms. The first component of this architecture is a rule-based content selection model defined using a new formal framework called dataflow transduction, which uses declarative rules to transduce a dialogue agent's actions and their results (represented as dataflow graphs) into context-free grammars representing the space of contextually acceptable responses. The second component is a constrained decoding procedure that uses these grammars to constrain the output of a neural language model, which selects fluent utterances. Our experiments show that this system outperforms both rule-based and learned approaches in human evaluations of fluency, relevance, and truthfulness. ## 1 Introduction In a task-oriented dialogue system, response generation is naturally posed as a conditional language modeling problem: dialogue agents must produce a contextually appropriate natural language string conditioned on the history of the user and agent interaction. But unlike many language generation problems, a good dialogue response generation model is not (just) a model of typical human utterances in context. Instead, effective dialogue agents must balance fluent generation with a set of much stricter constraints. Consider the dialogue shown in Fig. 1. In turn (1) of this dialogue, the user makes a request, which the dialogue agent correctly translates into ∗Equal contribution. ![0_image_0.png](0_image_0.png) Smith at 2pm today. ✔︎ Figure 1: Interaction between a user and a dialogue agent. Once the user's request is translated into an agent action—expressible as a program or dataflow graph (a)—the agent must generate a response. Agent responses might simply state the result of the agent's action, but must do so truthfully (b). Often responses should describe both the action and the result, *e.g.*, to help users identify when the agent has misunderstood their request (c). These responses should be straightforward for system designers to inspect and modify. a computation—here represented as a dataflow graph (Fig. 1a) in the style of Semantic Machines et al. (2020). The agent now needs to accurately describe this computation's return value (namely, 5). The wrong answer in Fig. 1b shows it instead describing a different value that happens to appear elsewhere in the dataflow graph. Turn (2) illustrates a more subtle risk: due to a speech recognition error, the agent has mistakenly created a meeting with *Tara Smith* rather than *Sarah Smith*. The wrong answer in Fig. 1c shows it describing this result too briefly, which might lead the user to assume that their request was completed successfully. To avoid confusion, a system designer might wish to ensure that the agent instead echoes back to the user the details of the agent's action. 
This example highlights challenges in building real-world dialogue response generation systems. First, response generation is not simply a problem of describing the *result* of a computation in natural language. In some cases, response generators may also usefully **describe the provenance** of that result—the computation itself and its intermediate values. In many human-to-human conversations, a response as detailed as Fig. 1c would be over-informative, violating Grice's maxim of quantity (1975). But for a speaker that is prone to mistakes, such as an AI agent, describing its own understanding can increase user trust when the understanding is accurate and provides an opportunity for correction when it is not. Second, dialogue response generation systems must **guarantee truthfulness**: since the user often has no way to check the responses, even occasional errors could have disastrous consequences and would greatly undermine trust. Yet truthful utterances might be low-probability under a domaingeneral language model (LM), particularly when they reflect errors in language understanding (as in Fig. 1c). Finally, response generation systems must **support declarative specification of agent behavior**. When confusing or infelicitous responses are discovered, it should be possible to easily and precisely modify them without changing the dialogue agent's behavior in other contexts. In recent years, the main focus of academic dialogue research has been on "end-to-end" learned models for response generation, especially neural sequence models (Vinyals and Le, 2015; Zhang et al., 2020b). But while such models excel at producing fluent and coherent output, research continues to find that they struggle in maintaining faithfulness (Wiseman et al., 2017; Maynez et al., 2020; Raunak et al., 2021; Liu et al., 2023; Zhang et al., 2023). Perhaps more fundamentally, because the behavior of such systems is encoded implicitly in their training data, designing a dialogue system requires system builders to write and edit a large number of training examples whose final effect may be difficult to predict. As a result, many dialogue systems in the real world remain rule-based: system builders handwrite rules (*e.g.*, in the form of a synchronous grammar) for transforming dialogue states into text, and these rules are applied directly during deployment. But such rule-based systems are also notoriously difficult to build and maintain (Walker et al., 2002; Reiter, 2022). They require designers to anticipate every low-level question about surface realization, and to encode these in the same grammar that is responsible for enforcing highlevel properties like truthfulness. Given the many strengths of modern LMs, is there a way to leverage them while satisfying the numerous other demands on dialogue response generation systems? In this paper, we describe a hybrid approach that combines the advantages of end-to-end and rule-based approaches. This approach has two components: - A dataflow transduction procedure (§3) that maps any computation by the agent (represented as a dataflow graph) to a small context-free grammar (CFG) that defines the space of natural language descriptions or responses allowed for the given computation. The mapping is defined by declarative rules. This formal framework makes it possible to write rules to precisely and truthfully describe both data and its provenance, while performing supplementary computation where needed to produce informative responses. 
- A constrained decoding procedure (§4) that uses beam search to identify strings that are both grammatical under the CFG and probable under a given language model (LM). In effect, this intersects the CFG with the LM. This makes it possible to decompose language generation into a **content selection model** (implemented by the dataflow transducer) and a separate fluency model (implemented by the LM). Hybrid generation systems of this kind have a long history in NLP, dating back to Knight and Hatzivassiloglou (1995) and Langkilde and Knight (1998). They mapped an abstract meaning representation (AMR) to an acyclic finite-state automaton (FSA) and scored its paths with an n-gram LM.We replace AMR with dataflow, replace their mapping rules with dataflow transduction rules, upgrade their FSA to a CFG, and upgrade their n-gram LM to a neural LM. In this way, we respectively support computation graphs, arbitrary tests and transductions, nested syntactically typed generation templates (already present in Knight and Hatzivassiloglou, 1995), and modern language models. Together, dataflow transduction and constrained decoding allow a compact generation system to faithfully and fluently describe a complex and open-ended space of actions. We built such a hybrid system for calendar event queries in the SMCalFlow domain (Semantic Machines et al., 2020). When evaluated on a subset of annotated dialogues, it was consistently rated as more truthful, relevant, and fluent than either a rule-based or end-to-end neural system (§5.2). Results were similar on MultiWOZ dialogues (Budzianowski et al., 2018; Eric et al., 2020) (§5.4). Code, data, and trained models used in our experiments are released at https://github. com/microsoft/dataflow2text. ## 2 Problem Formulation We study the problem of response generation for task-oriented dialogue. A dialogue, like the one in Fig. 1, consists of a sequence of **turns** k, each consisting of a **user utterance** xk, one or more **actions** ak, and an **agent response** yk. The job of a dialogue agent is to predict an appropriate action and response from a dialogue history, *i.e.*, , to map from (x1, a1, y1, x2, a2, y2, . . . , xn) 7→ (an, yn). How is this done? Typically, a **language understanding module** maps the user utterance xk (in context) to a **formal meaning representation**. The agent reasons about this meaning representation to determine its own actions ak. Finally, a **response generation module** maps these actions or their results (in context) to the agent utterance yk. The focus of this paper is the response generator. We assume that the formal meaning representation takes the form of an executable program, as is common in the semantic parsing literatureand that the actions are produced by evaluating this program, possibly with side effects. As described by Semantic Machines et al. (2020), the program may be viewed as a **dataflow graph** in which each node is labeled with a function, constructor, or primitive value, as well as a return value once the node is executed. We aim to implement a response generator that, when applied to an evaluated dataflow graph, satisfies the three properties outlined in §1: description of data and its provenance, guaranteed truthfulness, and declarative specification. In practice, for guidance when developing our generator, we refer to a development set of dialogues annotated with goldstandard dataflow graphs and agent responses. ## 3 Dataflow Transduction Given a dataflow graph G (*e.g.*, Fig. 
1a) rooted at a node vroot (the return value of the program represented by the dataflow graph), our task is to generate a string that describes vroot and its provenance. To achieve this, we propose a new formal framework for generation based on **dataflow transduction**. At a high level, the formalism uses declarative rules that describe how to transform a dataflow graph into a small graph-specific grammar (specifically a **quasi-synchronous context-free grammar**, or QCFG) that defines the space of allowed responses. These rules walk along the graph, introduce new computations (dataflow subgraphs) as needed, and add rules to the grammar. Formally, a dataflow transducer S is defined by a 4-tuple (T , Σ, R, tstart) where T is a set of nonterminal types,1 Σ is the set of terminals (word types), R is a set of dataflow transduction rules (see §3.1), and tstart ∈ T is the start nonterminal. When applied to G, the dataflow transducer produces a QCFG. As a side effect, it may extend the graph with new computations. We use G¯ to denote the extended graph. A QCFG (Smith and Eisner, 2006) is a specialized CFG whose nonterminals include alignments to the nodes V (G¯) of G¯. Where an ordinary CFG might specify ways to generate an NP (noun phrase) or a DATE, a QCFG would specify ways to generate an NP or DATE that describes the result and provenance of v, for each appropriately typed node v ∈ V (G¯). A QCFG resulting from dataflow transduction is a 4-tuple (T ×V (G¯), Σ,P,(tstart, vroot)) where T ×V (G¯) is the QCFG's set of nonterminals and P is its set of productions. A QCFG production has the form α → β1β2 *· · ·* βN where the left-hand-side α = (t, v) *∈ T ×* V (G¯) is a QCFG nonterminal, and each βi can be either a nonterminal (ti, vi) or a terminal in Σ. The vi of a right-hand-side nonterminal βi may have appeared in the original G, or may have been added to G¯ by the dataflow transducer. These production rules then derive a set of strings just as in any CFG. ## 3.1 Dataflow Transduction Rules A dataflow transduction rule is applied to a node v ∈ V (G¯) (if v has appropriate properties) to cre-1In practice, nonterminal types might correspond to dialogue acts, syntactic categories, semantic categories, etc. This is up to the system designer. ![3_image_0.png](3_image_0.png) ate a single QCFG production (t, v) *→ · · ·* that could be used to describe v. An example rule is shown in Fig. 2. A rule has three components: (1) a **head**, namely the nonterminal type t ∈ T ; (2) a **body**, which is a piece of code that determines whether the rule can apply to v, and which may look up or create nodes that are related to v; and (3) a **response template**, which specifies the right-hand side of the QCFG production in terms of the related nodes that identified in the body. Rule Head. This nonterminal type characterizes the type of node that the transduction rule is able to describe and the type of description that it will produce.1 When a rule with head t is successfully applied to the node v, the resulting QCFG production has left-hand-side (*t, v*). Rule Body. The rule body tests whether the rule can be applied by examining the dataflow graph G¯v rooted at v. It also binds variables to other nodes of G¯ that are to be described recursively.2 For example, the rule body in Fig. 2 checks whether G¯v has the form findEventsOnDate(date). If so, it binds the variable date accordingly, and introduces new nodes into G¯, bound to the variables num and event, which compute the number of events and the first event. 
All three of these variables will be referenced in the response template. Response Template. The response template says how to create the right-hand side of the QCFG rule—a sequence β1 *· · ·* βN of terminals and nonterminals. Each QCFG nonterminal βi = (ti, vi) specifies a related node vi ∈ V (G¯) to describe, along with a dataflow nonterminal tithat says how to describe it. The possible descriptions of vi will thus emerge from applying transducer rules with head tito node vi. In our template syntax, the notation {EVENT <event>} would construct the QCFG nonterminal (EVENT, v), if the rule body has bound the variable event to the node v. This syntax is illustrated in Fig. 2, whose response template constructs a right-hand side that intersperses terminal symbols with three QCFG nonterminals, which pair types LEX, PP, and EVENT with nodes that were identified by the rule body. Our actual template format is more flexible than shown here. It allows choices within the template in order to specify variant phrasings.3 This advanced feature is described in Appendix A. Details and examples of dataflow transduction rules used in our experiments are provided in Appendix B. ## 3.2 Dataflow Transduction Procedure Given a dataflow transducer S and a dataflow graph G rooted at node vroot, we can transduce the graph into a QCFG as follows. The system starts out by creating QCFG productions that can expand the start nonterminal (tstart, vroot). For each transduction rule in R whose head is tstart, it executes the body, which checks any additional conditions for whether the rule can be applied to vroot, binds variables, and uses the response template to create a QCFG production. If these productions mention new nonterminals, the system recursively creates further QCFG productions, in the same way, that can expand those nonterminals. As a special case, to expand a nonterminal of the form (LEX, v), the system creates a QCFG production whose right-hand side gives the value of v, as rendered into natural language using a lexicalization function rather than a template; *e.g.*, a value Integer(42) would be rendered as *"42"*. The recursive process continues until productions have been created for every nonterminal that appears in the QCFG. The resulting QCFG compactly represents a combinatorial space of possible responses. It will generally include multiple productions aligned to the same node v, created by different dataflow transduction rules. This mechanism can be used to copy simple values like strings and numbers from the dataflow graph, as well as to create more complex recursive descriptions. Note that (1) transduction rules are selected via their head but also condition on the dataflow graph through their body, and (2) all QCFG nonterminals are grounded in the dataflow ![4_image_0.png](4_image_0.png) graph. Together, this provides a means to ensure truthfulness when generating responses. Note there may be multiple transduction rules for each QCFG nonterminal βi and the QCFG may admit combinatorially many derivation trees. Each of these derivation trees derives a truthful response. However, since different trees use different rules, the responses may vary in their information content, presentation order, linguistic style, and choice of terminals. The amount of variation can be controlled by the author of the dataflow transducer. In this paper, we use a neural LM with constrained decoding to select a fluent and appropriate response from all these truthful responses, as described in the next section (§4). 
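To make the rule formalism of this section concrete, here is a schematic sketch of a single transduction rule, loosely modeled on the Fig. 2 example, and of the QCFG production it emits. The `DataflowGraph`, `Nonterminal`, and `Production` classes and the template wording are illustrative stand-ins of our own, not the API of the released dataflow2text library.

```python
from dataclasses import dataclass, field
from itertools import count

@dataclass(frozen=True)
class Nonterminal:
    type: str   # dataflow nonterminal type, e.g. "S", "EVENT", "PP", "LEX"
    node: int   # id of the dataflow node this nonterminal describes

@dataclass
class Production:
    lhs: Nonterminal
    rhs: tuple  # sequence of terminals (str) and Nonterminals

@dataclass
class DataflowGraph:
    """Toy stand-in for a dataflow graph: each node is a (function, arg ids) pair."""
    nodes: dict = field(default_factory=dict)
    _ids: count = field(default_factory=count)

    def add_node(self, fn, args=()):
        node_id = next(self._ids)
        self.nodes[node_id] = (fn, tuple(args))
        return node_id

def describe_event_search(graph, v):
    """A single transduction rule with head S. Its body checks whether node v has the
    form findEventsOnDate(date); if so, it binds `date`, extends the graph with `num`
    and `event` computations, and emits one QCFG production."""
    fn, args = graph.nodes[v]
    if fn != "findEventsOnDate" or len(args) != 1:
        return None                                 # rule does not apply to this node
    (date,) = args
    num = graph.add_node("size", args=[v])          # number of events found
    event = graph.add_node("first", args=[v])       # the first matching event
    # Response template (illustrative wording): terminals interleaved with LEX/PP/EVENT nonterminals.
    return Production(
        lhs=Nonterminal("S", v),
        rhs=("I found", Nonterminal("LEX", num), "event",
             Nonterminal("PP", date), ". The first one is",
             Nonterminal("EVENT", event), "."),
    )

# Usage: build a small query graph and apply the rule at its root.
g = DataflowGraph()
today = g.add_node("today")
root = g.add_node("findEventsOnDate", args=[today])
print(describe_event_search(g, root))
```

Applying many such rules recursively, starting from the start nonterminal paired with the root node, yields the full QCFG described above.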
## 4 Constrained Decoding In this section, we describe how to integrate the formal framework above with a general LM to perform response generation, as illustrated in Fig. 3. Given a derived QCFG of the kind described in §3.2, we will perform constrained decoding as in (Shin et al., 2021; Roy et al., 2022), generating response candidates from a pretrained LM. The QCFG resulting from dataflow transduction implicitly represents a set of possible derivation trees and the agent responses they yield. As long as transduction rules faithfully describe the nodes they apply to, every derivation in this set will correspond to a truthful agent utterance. But these utterances may not always be grammatical or natural. For example, the response template in Fig. 2 may be realized as *"I found 2 event on Monday"* since the rule body does not check whether the value of num is 1. Similarly, the response template EVENT ⟨*event*⟩ starts on DATE ⟨*date*⟩ . may be realized as *The product meeting on Monday starts on Monday*, if the grammar permits identifying events by their dates. With carefully engineered and highly specialized rules (*e.g.*, using extremely fine-grained nonterminal types), it would be possible to ensure that the responses are always fluent and even that there is always a single possible outcome from the top-down search procedure. However, this would usually require much a more complicated set of rules, which creates a burden for system development and maintenance. Our proposed approach instead uses a largescale pretrained LM (preferably fine-tuned) to select among truthful utterances produced by the QCFG.4 One option is to use the LM to re-rank all strings that can be produced by the QCFG, but that would be very computationally expensive even when that set is finite. Instead, we follow Shin et al. (2021) and Roy et al. (2022), who decode sentences from a given LM under the constraint that they must be valid under a given CFG. In contrast to these prior papers, which used a static CFG, we derive a new CFG each time the dialogue agent needs to generate a response, by applying the dataflow transducer to the current dataflow graph. The constrained decoding process is a special case of beam search. For each ℓ = 0, 1*, . . .*, it maintains up to K prefixes of the same length ℓ and tries to extend each in all legal ways to length ℓ + 1, pruning back to the K most probable extensions. For each prefix y1y2 *. . . y*ℓ and each terminal symbol yℓ+1 ∈ T , the extension y1y2 *. . . y*ℓ+1 is only legal if it is a prefix of some legal complete response—i.e., some string that is grammatical under the QCFG. This check can be efficiently performed via an incremental contextfree parsing algorithm (Earley, 1970) using the parsing state of the prefix y1y2 *. . . y*ℓ. In other words, constrained decoding only considers a prefix if it could be extended into at least one legal complete response. Note that the combinatorially many legal responses are never enumerated individually. Rather, the set is compactly represented by the set of QCFG productions. ## 5 Experiments To evaluate the proposed approach, we conducted a set of detailed experiments (§5.1–§5.3) on a subset of the SMCalFlow dataset (Semantic Machines et al., 2020), and a brief study (§5.4) applying our approach to the MultiWOZ dataset (Budzianowski et al., 2018). 
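Before turning to the experimental setup, the constrained decoding loop described in §4 can be summarized with the schematic sketch below. The `lm.next_logprobs`, `grammar.allowed_next`, and `grammar.is_complete` interfaces are assumptions standing in for the fine-tuned LM and the incremental Earley-style prefix parser; this is an illustration, not the system's actual implementation.

```python
import heapq

def constrained_beam_search(lm, grammar, beam_size=5, max_len=40):
    """Schematic QCFG-constrained beam search: a prefix survives only if the
    prefix parser says it can still be completed into a grammatical response.
    `lm.next_logprobs(prefix)` returns a mapping from tokens to log-probabilities;
    `grammar.allowed_next(prefix)` returns the legal next tokens (assumed interfaces)."""
    beam = [(0.0, ())]                       # (cumulative log-prob, token prefix)
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, prefix in beam:
            if grammar.is_complete(prefix):  # prefix already derives a full response
                finished.append((score, prefix))
                continue
            logprobs = lm.next_logprobs(prefix)
            for token in grammar.allowed_next(prefix):   # only grammatical extensions
                candidates.append((score + logprobs[token], prefix + (token,)))
        if not candidates:
            break
        beam = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
    return max(finished + beam, key=lambda c: c[0])[1]
```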
## 5.1 Data And Evaluation Metrics

Our experiments use SMCalFlow, in which each user utterance is annotated with a correct dataflow program (i.e., computation) and a "gold" response that would be desirable for the agent to produce.5 We use the v2.0 release processed by Platanios et al. (2021). We focus on a subset of SMCalFlow involving calendar event queries. This subset contains 8938 training examples and 1041 validation examples. We found that 187 transduction rules, written by some of us in a matter of hours, were sufficient to cover all gold system responses in these examples.6 We package the annotated examples, transduction rules, and necessary meta information for executing the dataflow programs as a new dataset, SMCalFlow2Text.

Automatic Metrics. For automatic evaluation, we use several reference-based metrics: BLEU-4 (Papineni et al., 2002) and ROUGE-L (Lin, 2004) are computed using GEM-metrics,7 and BERTScore-F1 is computed using HuggingFace Evaluate.8 Following the recommendation of Zhang et al. (2020a), we use the re-scaled version of BERTScore, which is easier to interpret. We additionally consider exact match scores, i.e., R@K, which measure whether one of the top K response candidates exactly matches the reference. Both R@1 and R@5 scores are reported. We lowercase all the strings and remove any extra spaces in the predictions and references before computing the evaluation metrics.

Human Evaluation. It is well-known that popular automatic evaluation metrics may not always reflect the true quality of the generated responses (Celikyilmaz et al., 2021). Thus, we further carry out human evaluation on 297 examples randomly sampled from the validation data. Specifically, for each generated response, we collect human judgments on three questions: **grammaticality** (*"has the virtual assistant made any grammar errors?"*), **relevance** (*"has the virtual assistant misunderstood the user's request?"*), and **truthfulness** (*"has the virtual assistant provided any incorrect information as judged using the database and timestamp?"*). Three judgments are collected for each question, and we report the percentage of examples where *"no"* is the majority-voted answer. Higher percentages are better. Crowdworkers are recruited from Amazon Mechanical Turk with qualification requirements such as having a work approval rate higher than 80% and having performed a minimum of 100 annotations. They are paid at the rate of $0.15 per judgment. For responses generated by the constrained decoding approach, annotators generally agree with each other on the three questions, *i.e.*, the percentages of examples where all three workers choose the same answer are around 90%, 78% and 76%, respectively. More details are provided in Appendix C.

| System | BLEU | ROUGE | BERTSc. | R@1 | R@5 | Grammatical | Relevant | Truthful |
|---|---|---|---|---|---|---|---|---|
| QCFG Random Sampling | .35 | .58 | .50 | .02 | .06 | 62.3 | 90.9 | 92.3 |
| Unconstrained Decoding | .77 | .87 | .87 | .47 | .66 | 98.7 | 93.3 | 82.2 |
| QCFG-Constrained Decoding | .80 | .86 | .85 | .56 | .78 | 99.0 | 96.6 | 91.6 |
| Gold | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 99.0 | 98.0 | 92.3 |

Table 1: Main evaluation results on SMCalFlow2Text. BLEU, ROUGE, BERTSc., R@1, and R@5 are automatic metrics; Grammatical, Relevant, and Truthful are human evaluation results (%).

## 5.2 Main Results

Table 1 shows our main evaluation results on SMCalFlow2Text. The first baseline we considered is to randomly sample responses from the generated QCFG. The other baseline is unconstrained LM decoding without using dataflow transduction.
## 5.2 Main Results

Table 1 shows our main evaluation results on SMCalFlow2Text. The first baseline we considered is to randomly sample responses from the generated QCFG. The other baseline is unconstrained LM decoding without using dataflow transduction. Model outputs are compared to "gold" agent utterances. For both unconstrained and constrained decoding, the text used to prompt the LM is a string representation of the computation graph (in the format released in SMCalFlow v2.0), followed by its execution result rendered as a JSON string. In both cases, we decode using beam search with beam size K = 5. The LM is initialized from CodeT5-base (Wang et al., 2021) and fine-tuned on all training examples. See Appendix D for more details.

As expected, the QCFG random sampling baseline struggles on all the automatic metrics, since dataflow transduction rules are written with an emphasis on truthfulness rather than fluency. This is reflected in the grammaticality score from the human evaluation as well. However, the truthfulness score matches that of the gold responses (92.3%): this baseline rarely generates incorrect responses. Its responses are sometimes generic and omit information that would be relevant to the user—its relevance score is the lowest among all compared approaches—although this behavior contributes to the high truthfulness score.

In contrast, unconstrained LM decoding without dataflow transduction achieves impressive scores on automatic evaluation. Human evaluation also suggests that the generated responses are grammatically correct and relevant to the user's request in most cases. However, unconstrained decoding scores low on truthfulness, making false statements in about one-fifth of the generated responses. This high rate of factual errors from neural LMs is consistent with findings in prior work (Wiseman et al., 2017; Maynez et al., 2020; Raunak et al., 2021; Liu et al., 2023; Zhang et al., 2023). It is usually unacceptable in real-world applications.

Compared with unconstrained decoding, our proposed QCFG-constrained decoding achieves significantly better scores on exact match, truthfulness, and even relevance, while maintaining similar scores on BLEU, ROUGE, BERTScore and grammaticality. In particular, human evaluation results indicate that the quality of generated responses is very close to that of the gold responses. Table 2 shows some example predictions. We share some qualitative analysis in Appendix E.

Since even the gold responses did not achieve 100% on human evaluation scores, we manually inspected those problematic examples. There are 4 examples for which the majority-voted answer to the ungrammaticality question is *"yes but understandable,"* and others are all rated as not containing any grammar errors. For the relevance question, 4 examples are due to arguably bad data and 2 examples receive tied votes. For the truthfulness question, 9 examples are due to arguably bad data, 8 examples are due to crowd worker mistakes, and 6 examples receive tied votes.

| User Utterance | Gold | QCFG-Constrained Decoding | Unconstrained Decoding |
|---|---|---|---|
| A. Do I have any events scheduled on Sunday the 3rd? | No, I didn't find any event on March 3rd. | No, I didn't find any event on March 3rd. | No, I didn't find any event on May 3rd. |
| B. Does Pikachu fly in tomorrow? | Yes, I found one event matching "flying in" with Pikachu tomorrow. It's "fly in" from 4:00 to 4:30 PM. | Yes, I found one event matching "flying in" with Pikachu tomorrow. It's from 4:00 to 4:30 PM. | Yes, I found one event matching "flying in" with Pikachu tomorrow. It's "Fly in" from 5:00 to 5:30 PM. |
| C. Can you find me any events during last March? | Yes, I found one event last March. It was "Dancing in Athenny" on March 17, 2019 from 4:00 to 4:30 PM. | Looks like it. | Yes, I found one event last March. It was "Erin Go Bragh" on March 17, 2019 from 5:00 to 5:30 PM. |

Table 2: Example user utterances with the gold responses and the responses generated by QCFG-constrained and unconstrained decoding.

## 5.3 Ablation Study

We next analyze how the amount of fine-tuning data and the context used in the prompt impact the quality of generated responses. Results are summarized in Table 3.

| | BLEU | ROUGE | BERTSc. | R@1 | R@5 |
|------------------------------------------|------|-------|---------|-----|-----|
| 1. LM without fine-tuning | | | | | |
| ✗ | .00 | .03 | −.47 | .00 | .00 |
| ✓ | .04 | .28 | .05 | .02 | .02 |
| 2. LM fine-tuned on 3% training data | | | | | |
| ✗ | .68 | .81 | .80 | .26 | .40 |
| ✓ | .73 | .83 | .80 | .39 | .62 |
| 3. LM fine-tuned on full training data | | | | | |
| ✗ | .77 | .87 | .87 | .47 | .66 |
| ✓ | .80 | .86 | .85 | .56 | .78 |
| 4. LM prompted without execution results | | | | | |
| ✗ | .58 | .70 | .72 | .27 | .42 |
| ✓ | .78 | .85 | .84 | .54 | .77 |
| 5. LM prompted with user utterance | | | | | |
| ✗ | .77 | .87 | .87 | .48 | .65 |
| ✓ | .79 | .86 | .84 | .57 | .78 |

Table 3: Ablation results (✗ = unconstrained decoding, ✓ = QCFG-constrained decoding).

Impact of fine-tuning: Without fine-tuning the LM, neither unconstrained nor constrained decoding works well. This is likely due to the mismatch between the pre-training tasks and the response generation task. However, after fine-tuning on only a random 3% of the training data, both approaches achieve significantly better scores, with larger gains on QCFG-constrained decoding. This suggests that QCFG-constrained decoding is much more data-efficient in the low-data regime (268 examples). Indeed, QCFG-constrained decoding using 3% of the training data is on par with unconstrained decoding using 100% of the training data, indicating that several expert hours spent on creating dataflow transduction rules hold almost equivalent value to a large volume of training data. While gaps between unconstrained and QCFG-constrained decoding on automated metrics are small in the full-data setting (Table 1), unconstrained decoding still performs poorly on the truthfulness evaluation. Thus, truthfulness failures from unconstrained decoding are not straightforwardly solved by scaling up training data; QCFG-constrained decoding offers an easier path to faithful response generation.

Impact of context: Results in groups 3–5 in Table 3 all use 100% of the training examples to fine-tune the LM. The difference is in the context used in the LM prompt (during both training and inference). For group 3, the text used to prompt the LM is the computation concatenated with the execution result, which is the same setup used in §5.2. For group 4, we omit the execution results from the LM prompt (but not from the decoder constraints), whereas for group 5, we add the user utterance (prefixed to the computation). Comparing group 3 and group 4, omitting execution results significantly harms the performance of unconstrained decoding. In contrast, dataflow transduction rules can execute the computation internally, and do not require the LM to condition on it.
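For reference, the three prompt variants compared in groups 3–5 can be written down concretely. The helper below is a sketch of ours (not the released code), and the example plan and result values are placeholders; the serialization follows the format shown in Appendix D.

```python
import json

def build_prompt(plan, result=None, user_utterance=None, sep="<s>"):
    """Assemble the LM input: optional user utterance, the computation in
    lispress form, and optionally its execution result rendered as JSON."""
    parts = []
    if user_utterance is not None:              # group 5
        parts.append(f"User: {user_utterance}")
    parts.append(f"Plan: {plan}")
    if result is not None:                      # groups 3 and 5; omitted in group 4
        parts.append(f"Result: {json.dumps(result)}")
    return " ".join(parts) + " " + sep

plan = "(Yield (Event.start (...)))"                           # placeholder computation
result = {"type": "DateTime", "value": "<execution result>"}   # placeholder value
print(build_prompt(plan, result))                               # group 3
print(build_prompt(plan))                                       # group 4
print(build_prompt(plan, result, "When does my event start?"))  # group 5
```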
Comparing group 3 and group 5, adding the user utterance to the LM prompt does not benefit both approaches much. ## 5.4 Experiments With Multiwoz Dataset To demonstrate the general applicability of our approach for response generation, we carry out a brief study on the widely used MultiWOZ 2.1 dataset (Budzianowski et al., 2018; Eric et al., 2020). We automatically convert the agent action annotations to dataflow computations and write 14 transduction rules. For generating responses, we use the predicted agent actions from the MTTOD system (Lee, 2021). Similar to our experiments on SMCalFlow, we fine-tune CodeT5-base on all training examples, using the ground-truth belief state and predicted action as the text used to prompt the LM. For evaluation, we randomly sample 100 examples from the test split, and two authors manually rate the generated responses from our QCFG-constrained decoding system and the MTTOD system. The inter-annotator agreement is 100%. Almost all generated responses are grammatically correct and relevant to the user utterance. To rate truthfulness, we use the predicted actions as the references. Our QCFG-constrained decoding approach produces truthful responses for all 100 examples, whereas only 89 responses from the MTTOD system are truthful with respect to the predicted actions. Among the 11 remaining examples, 7 of them are due to imperfect delexicalization and 4 are due to hallucination. ## 6 Related Work One line of response generation research focuses on generating fluent and coherent responses directly from user utterances without any intermediate structured representation. This paradigm is mostly used for chatbots, as in early rule-based systems (Weizenbaum, 1966; Wallace, 2009), neural conversation models (Vinyals and Le, 2015; Shang et al., 2015; Sordoni et al., 2015; Li et al., 2016; Serban et al., 2016), and recent largescale pretrained LMs like DialoGPT (Zhang et al., 2020b) and GPT-3 (Brown et al., 2020). Another line focuses on generating text from structured data, with applications beyond dialogue response generation. For example, the WebNLG challenge (Gardent et al., 2017) generates natural language descriptions from relation tuples, and Lebret et al. (2016) generate a biography from a structured "infobox" record. Many recent dialogue response generation tasks adopt dialogue-act-based meaning representations, including the MultiWOZ dataset (Budzianowski et al., 2018), the Schema-Guided dialogue dataset (Rastogi et al., 2020), and the E2E NLG challenge (Dusek et al., 2020). In contrast, our response generation task uses computations as the input, which do not directly encode the dialogue acts of the responses. This is a more challenging task, as the system needs to perform extra reasoning to obtain the derived information. In this sense, our task is similar to the one in CoSQL (Yu et al., 2019) and Logic2Text (Chen et al., 2020). Constrained decoding techniques for neural LMs have been developed for text generation with different types of constraints (Balakrishnan et al., 2019; Dathathri et al., 2020; Lu et al., 2021, 2022). As §4 noted, we follow Shin et al. (2021) but choose our grammar constraints dynamically for each response. ## 7 Conclusion We have described a hybrid approach for building dialogue response generation systems. Our approach introduces a new formalism for transducing a dataflow graph into a QCFG, which is then used in a constrained decoder that intersects the QCFG with a neural LM. 
This formal framework makes it possible to write rules to precisely and truthfully describe data and its provenance while deferring surface realization decisions to a flexible language model. This new approach outperforms unconstrained conditional language modeling in both automatic and human evaluations, especially on truthfulness. Moreover, using 3% of the training data, the constrained decoding approach is on par with the unconstrained decoding approach when it uses 100% of the training data, indicating that several expert hours spent on authoring rules hold almost equivalent value to a large volume of training data.

## 8 Limitations And Future Directions

Authoring transduction rules is relatively easy but may still be labor-intensive for complex domains. Future work might explore (semi-)automatically deriving transduction rules from data, learning to synthesize them from domain specifications, or curating a collection of domain-general transduction rules that can be imported into new domains.

Our experiments in this paper generated text only in English. It would be interesting to apply the framework to datasets in other languages, e.g., GlobalWoZ (Ding et al., 2022). While our framework is intended to be agnostic to the output language, our notation for response templates might need to be slightly extended (along the lines of Appendix A) to be more convenient to use with morphologically complex languages or free-word-order languages. In these settings, presumably, the QCFG should systematically generate many inflections or orderings for the LM to choose among. To support multilingual dialogue systems, future work could consider automatically translating the response templates into additional languages—perhaps by working backwards from automatic translations of natural language responses that use those templates.

Relatedly, we have only tested the proposed approach on dataflow graphs. Future work could apply the method to generate textual descriptions of other graph-structured inputs, such as graph databases or abstract meaning representation (AMR) graphs.

While QCFG productions were unweighted in this paper, giving them weights would allow the QCFG to express its own preferences about which productions to use for a given input. For example, in a product-of-experts architecture, the probability of a given response y would be proportional to the LM probability of y times the weights of all productions used in the QCFG derivation of y (summed over all such derivations). Beam search (§4) could then be carried out using prefix weights (Opedal et al., 2023). The weights could be trained using gold responses. Weighting the QCFG raises the possibility that the dataflow transduction rules could encode pragmatic context-dependent policies. For example, a dataflow transduction rule could call a neural network to assess the suitability of applying the rule to a given node in the dataflow graph, and then weight the resulting QCFG production accordingly.

## Ethics Statement

Our proposed approach strongly outperforms a purely neural model at truthfully describing the result of a computation and its provenance. However, our approach can still make pragmatically unhelpful omissions, making it potentially risky to deploy in some scenarios. Additionally, we leverage pre-trained neural language models such as CodeT5, and as such, we acknowledge that our approach might inherit some biases present in these pre-trained models.
## Acknowledgements We would like to thank Ben Van Durme, Baolin Peng, Subhro Roy, Richard Shin, and Patrick Xia for valuable discussions and feedback. We also thank the anonymous reviewers for their insightful comments and suggestions. ## References Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, and Rajen Subba. 2019. Constrained decoding for neural NLG from compositional representations in task-oriented dialogue. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 831– 844, Florence, Italy. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel HerbertVoss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Proceedings of Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašic. 2018. ´ MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026, Brussels, Belgium. Association for Computational Linguistics. Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2021. Evaluation of text generation: A survey. arXiv:2006.14799v2 [cs.CL]. Zhiyu Chen, Wenhu Chen, Hanwen Zha, Xiyou Zhou, Yunkai Zhang, Sairam Sundaresan, and William Yang Wang. 2020. Logic2Text: Highfidelity natural language generation from logical forms. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 2096–2111, Online. Association for Computational Linguistics. Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models: A simple approach to controlled text generation. In *Proceedings of 8th International Conference on* Learning Representations, ICLR 2020. Bosheng Ding, Junjie Hu, Lidong Bing, Mahani Aljunied, Shafiq Joty, Luo Si, and Chunyan Miao. 2022. GlobalWoZ: Globalizing MultiWoZ to develop multilingual task-oriented dialogue systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1639–1657, Dublin, Ireland. Association for Computational Linguistics. Ondrej Dusek, Jekaterina Novikova, and Verena Rieser. 2020. Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG challenge. *Computer Speech and Language*, 59:123– 156. Jay Earley. 1970. An efficient context-free parsing algorithm. *Communications of the ACM*, 13(2):94– 102. Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyang Gao, Adarsh Kumar, Anuj Goyal, Peter Ku, and Dilek Hakkani-Tur. 2020. MultiWOZ 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 422–428, Marseille, France. European Language Resources Association. 
Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The WebNLG challenge: Generating text from RDF data. In Proceedings of the 10th International Conference on Natural Language Generation, pages 124–133, Santiago de Compostela, Spain. Association for Computational Linguistics. Paul Grice. 1975. Logic and conversation. In *Syntax* and semantics, volume 3, pages 41–58. Academic Press. Kevin Knight and Vasileios Hatzivassiloglou. 1995. Two-level, many-paths generation. In 33rd Annual Meeting of the Association for Computational Linguistics, pages 252–260. Irene Langkilde and Kevin Knight. 1998. Generation that exploits corpus-based statistical knowledge. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1, pages 704–710, Montreal, Quebec, Canada. Association for Computational Linguistics. Rémi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In *Proceedings of the 2016 Conference on Empirical Methods* in Natural Language Processing, pages 1203–1213, Austin, Texas. Association for Computational Linguistics. Yohan Lee. 2021. Improving end-to-end task-oriented dialog system with a simple auxiliary task. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1296–1303, Punta Cana, Dominican Republic. Association for Computational Linguistics. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016. Deep reinforcement learning for dialogue generation. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pages 1192– 1202, Austin, Texas. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Nelson F. Liu, Tianyi Zhang, and Percy Liang. 2023. Evaluating verifiability in generative search engines. arXiv:2304.09848 [cs.CL]. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *Proceedings of* International Conference on Learning Representations, ICLR 2019. Ximing Lu, Sean Welleck, Peter West, Liwei Jiang, Jungo Kasai, Daniel Khashabi, Ronan Le Bras, Lianhui Qin, Youngjae Yu, Rowan Zellers, Noah A. Smith, and Yejin Choi. 2022. NeuroLogic a*esque decoding: Constrained text generation with lookahead heuristics. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 780–799, Seattle, United States. Association for Computational Linguistics. Ximing Lu, Peter West, Rowan Zellers, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. NeuroLogic decoding: (un)supervised neural text generation with predicate logic constraints. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4288–4299, Online. Association for Computational Linguistics. Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906–1919, Online. Association for Computational Linguistics. Andreas Opedal, Ran Zmigrod, Tim Vieira, Ryan Cotterell, and Jason Eisner. 
2023. Efficient semiringweighted Earley parsing. In *Proceedings of the Association for Computational Linguistics (ACL)*. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Emmanouil Antonios Platanios, Adam Pauls, Subhro Roy, Yuchen Zhang, Alexander Kyte, Alan Guo, Sam Thomson, Jayant Krishnamurthy, Jason Wolfe, Jacob Andreas, and Dan Klein. 2021. Valueagnostic conversational semantic parsing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3666– 3681, Online. Association for Computational Linguistics. Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. In *Proceedings* of the AAAI Conference on Artificial Intelligence, pages 8689–8696. Vikas Raunak, Arul Menezes, and Marcin JunczysDowmunt. 2021. The curious case of hallucinations in neural machine translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1172–1183, Online. Association for Computational Linguistics. Ehud Reiter. 2022. What are the problems with rulebased NLG? https://ehudreiter.com/2022/ 01/26/problems-with-rule-based-nlg/. Subhro Roy, Sam Thomson, Tongfei Chen, Richard Shin, Adam Pauls, Jason Eisner, and Benjamin Van Durme. 2022. BenchCLAMP: A benchmark for evaluating language models on semantic parsing. arXiv:2206.10668 [cs.CL]. Semantic Machines, Jacob Andreas, John Bufe, David Burkett, Charles Chen, Josh Clausman, Jean Crawford, Kate Crim, Jordan DeLoach, Leah Dorner, Jason Eisner, Hao Fang, Alan Guo, David Hall, Kristin Hayes, Kellie Hill, Diana Ho, Wendy Iwaszuk, Smriti Jha, Dan Klein, Jayant Krishnamurthy, Theo Lanman, Percy Liang, Christopher H. Lin, Ilya Lintsbakh, Andy McGovern, Aleksandr Nisnevich, Adam Pauls, Dmitrij Petters, Brent Read, Dan Roth, Subhro Roy, Jesse Rusak, Beth Short, Div Slomin, Ben Snyder, Stephon Striplin, Yu Su, Zachary Tellman, Sam Thomson, Andrei Vorobev, Izabela Witoszko, Jason Wolfe, Abby Wray, Yuchen Zhang, and Alexander Zotov. 2020. Task-oriented dialogue as dataflow synthesis. *Transactions of the Association for Computational Linguistics*, 8:556–571. Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In *Proceedings of* the AAAI Conference on Aritificial Intelligence. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1577–1586, Beijing, China. Association for Computational Linguistics. Richard Shin, Christopher Lin, Sam Thomson, Charles Chen, Subhro Roy, Emmanouil Antonios Platanios, Adam Pauls, Dan Klein, Jason Eisner, and Benjamin Van Durme. 2021. Constrained language models yield few-shot semantic parsers. 
In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7699–7715, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. David Smith and Jason Eisner. 2006. Quasisynchronous grammars: Alignment by soft projection of syntactic dependencies. In *Proceedings on* the Workshop on Statistical Machine Translation, pages 23–30, New York City. Association for Computational Linguistics. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196–205, Denver, Colorado. Association for Computational Linguistics. Oriol Vinyals and Quoc Le. 2015. A neural conversation model. In *Proceedings of ICML Deep Learning* Workshop. Marilyn A. Walker, Owen C. Rambow, and Monica Rogati. 2002. Training a sentence planner for spoken dialogue using boosting. *Computer Speech & Language*, 16(3):409–433. Spoken Language Generation. Richard S. Wallace. 2009. The anatomy of A.L.I.C.E. In *Parsing the Turing Test*, pages 181–210. Springer, Dordrecht. Yue Wang, Weishi Wang, Shafiq Joty, and Steven C.H. Hoi. 2021. CodeT5: Identifier-aware unified pretrained encoder-decoder models for code understanding and generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8696–8708, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Joseph Weizenbaum. 1966. ELIZA - a computer program for the study of natural language communication between man and machine. *Communications of* the ACM, 9(1):36–45. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*, pages 2253–2263, Copenhagen, Denmark. Association for Computational Linguistics. Tao Yu, Rui Zhang, Heyang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, Youxuan Jiang, Michihiro Yasunaga, Sungrok Shim, Tao Chen, Alexander Fabbri, Zifan Li, Luyao Chen, Yuwen Zhang, Shreya Dixit, Vincent Zhang, Caiming Xiong, Richard Socher, Walter Lasecki, and Dragomir Radev. 2019. CoSQL: A conversational text-to-SQL challenge towards crossdomain natural language interfaces to databases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1962– 1979, Hong Kong, China. Association for Computational Linguistics. Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A. Smith. 2023. How language model hallucinations can snowball. arXiv:2305.13534 [cs.CL]. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020a. BERTScore: Evaluating text generation with BERT. In *Proceedings of the 8th International Conference on Learning* Representations, ICLR 2020. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020b. DIALOGPT : Large-scale generative pre-training for conversational response generation. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270–278, Online. Association for Computational Linguistics.

## A Alternatives In Response Templates

A dataflow transduction rule can be equipped with multiple templates, and our template format also allows choices within a single template. Specifically, our implementation allows the use of vertical bar to encode alternatives within a template, e.g., "*I {{ didn't find any | found no }} {{ matching events | events matching {LEX [subject]} }}* on your calendar." During dataflow transduction, this template is automatically converted into a small system of QCFG productions, *i.e.*, introducing new nonterminals for the alternations.

## B Dataflow Transduction Rule Details

In our experiments, there are 9 head types (including the START symbol) for the 187 transduction rules for SMCalFlow2Text, and 3 head types for the 14 transduction rules for MultiWOZ. Our framework is agnostic to the nonterminal types (see footnote 1). We mainly used syntactic categories like NP, PP, DT, VB, UH (interjection), Copula, etc. One potential challenge is that the domain developers may need to have some linguistic knowledge about the syntactic categories. Alternatively, they could use semantic categories.

The complete set of rules for SMCalFlow2Text is available in our released Python code. The 187 transduction rules cover the 8938 and 1041 examples from the training and validation set in the original SMCalFlow data, *i.e.*, the gold agent responses can all be produced from the transduction rules. The authors who wrote the rules were able to look at both the training and validation examples. The remaining training and validation examples in the original SMCalFlow dataset are not covered by these rules.

Below we explain some examples of dataflow transduction rules.

```python
# Head: PP
# Body:
match computation:
    case FullMonthofPreviousMonth(month):
        return {"month": month}
# Response Template: "last {NP [month]}"
```

The rule head PP suggests that the computation is described as a preposition phrase. The body simply checks whether the computation being described is a call to the function FullMonthofPreviousMonth and extracts the argument month. The response template lexicalizes the function call as "last" and defers describing the month to appropriate NP rules such as the one below.

```python
# Head: NP
# Body:
if computation.__value__ == Month.March:
    return {}
# Response Template: "March"
```

For this rule, its body checks the value of the computation rather than its structure. Since the response template has no nonterminal, the body does not return any variable binding. Note returning an empty dictionary is different from returning None (which is the default return value in Python), as the latter indicates that the rule cannot be applied.

[The third example rule is not reproduced here; it appears in the released code.]

The head of this rule is S, which is our start nonterminal. The function GetAttr is similar to Python's builtin getattr method, *i.e.*, it is used to access the values of an object's attributes, and the special constructor StructAttribute specifies the name of the attribute and optionally its type. Here, the body checks whether the computation is describing the organizer of an event, as reflected in the response template as well. Note the response template uses the vertical bar for alternatives, as described in Appendix A.
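To illustrate how the vertical-bar alternatives above become QCFG productions, here is a small self-contained sketch (our own illustrative code, not the released implementation): each {{ ... | ... }} group is replaced by a fresh nonterminal that has one production per alternative.

```python
import itertools
import re

def expand_alternations(template: str, prefix: str = "ALT"):
    """Rewrite {{ a | b }} groups as fresh nonterminals and return the
    rewritten template together with the productions for those nonterminals."""
    counter = itertools.count()
    productions = []

    def replace(match: re.Match) -> str:
        nonterminal = f"{prefix}{next(counter)}"
        for option in match.group(1).split("|"):
            productions.append((nonterminal, option.strip()))
        return "{" + nonterminal + "}"

    rewritten = re.sub(r"\{\{(.*?)\}\}", replace, template)
    return rewritten, productions

template = ("I {{ didn't find any | found no }} "
            "{{ matching events | events matching {LEX [subject]} }} on your calendar.")
body, productions = expand_alternations(template)
print(body)
# I {ALT0} {ALT1} on your calendar.
print(productions)
# [('ALT0', "didn't find any"), ('ALT0', 'found no'),
#  ('ALT1', 'matching events'), ('ALT1', 'events matching {LEX [subject]}')]
```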
A more precise rule could choose between are and is based on whether there are multiple organizers or not. We usually recommend leaving such decisions to the neural LM instead of hard-coding them in transduction rules, but the latter approach is still possible if the system designer prefers.

## C Human Evaluation Details

A screenshot of the MTurk interface for human evaluation is shown in Fig. A1. Table A1 shows the percentages of examples where all three workers choose the same answer for individual systems. It can be observed that the gold responses receive the highest agreements on all three questions. The QCFG-constrained decoding has slightly higher agreements than the unconstrained decoding. The QCFG random sampling receives a significantly lower agreement on "Grammatical," probably because this approach may produce ungrammatical responses but people may not agree on whether these are understandable.

## D Model Configurations

For SMCalFlow, we fine-tune the CodeT5 model for a fixed number of epochs (=10). For MultiWOZ, we fine-tune the model for at most 10 epochs and do early stopping based on the loss on the development set. We use the AdamW optimizer (Loshchilov and Hutter, 2019) with β1 = 0.9 and β2 = 0.999, using a linear learning rate scheduler with an initial learning rate of 5 × 10⁻⁵. For decoding, we always use a fixed beam size of 5.

The CodeT5-base models used in our experiments have 220 million parameters. We used machines with 32GB V100 GPUs for model finetuning while the decoding experiments were carried out on CPU-only machines.

For SMCalFlow experiments, the input sequence to the LM is the string representation of the computation in the lispress format followed by its execution result rendered as a JSON string, e.g., "Plan: (Yield (Event.start ( . . ))) Result: {"type": "DateTime", "value": ... ) <s>", where the last token is a special token to separate the input and the output. For the ablative study (group 5) in §5.3, the user utterance is prefixed to the sequence, e.g., "User: When do I have thee oil change on my car scheduled for? Plan: . . . Result: .. <s>".

| System | Grammatical | Relevant | Truthful |
|---------------------------|-------------|----------|----------|
| QCFG Random Sampling | .58 | .75 | .71 |
| Unconstrained Decoding | .86 | .71 | .71 |
| QCFG-Constrained Decoding | .90 | .78 | .76 |
| Gold | .95 | .81 | .80 |

Table A1: The percentage of examples where all three workers choose the same answer.

| | Unconstrained | Constrained |
|------------------|---------------|-------------|
| Untruth | 19 | 0 |
| Omission | 3 | 11 |
| Addition | 17 | 18 |
| Minor Difference | 10 | 13 |
| Disfluency | 1 | 1 |
| Annotation Error | 7 | 8 |
| Total | 57 | 51 |

Table A2: The number of examples (out of 100 sampled) in each category of difference from the gold responses, for unconstrained and QCFG-constrained decoding.

For MultiWOZ experiments, the computation is rendered as a raw JSON string that encodes the ground-truth belief state and the predicted system act. There is no execution result for these computations.

## E Qualitative Analysis

We looked at 100 randomly selected examples from the experiments on SMCalFlow from §5.2, and compared the generated responses from both unconstrained decoding and QCFG-constrained decoding with the human-annotated gold responses provided by the dataset. We summarize the differences between the generated and gold responses in Table A2, using the following categories:

Untruth The system reports incorrect information.

Omission The system fails to mention information mentioned in the gold response.
Addition The system mentions additional (correct) information that is not mentioned in the gold response. Minor Difference The system uses a different phrasing than the gold response that nonetheless has the same information and fluency. Disfluency The system output is disfluent. Annotation Error The system output is acceptable but the gold annotation contains a fluency or factuality error. For unconstrained decoding, 57 out of 100 responses differ from the gold responses, whereas for QCFG-constrained decoding, only 51 of 100 responses differ. This result is consistent with the R@1 column of Table 1 (mismatch rates of 53% and 44% respectively on the full validation set). As expected, the most noticeable difference is in the number of Untruths. The QCFG-constrained system produced no Untruths. The unconstrained system produced 19%, close to the 18% rate found in the human evaluations in Table 1. We show some examples of Untruths in Table 2. Conversely, the QCFG-constrained system produces substantially more Omissions than the unconstrained system. Of the 11 omissions produced by the constrained system, 3 are are identical to the unconstrained output while 7 are on inputs for which the unconstrained output produce an Untruth. In other words, our system successfully removed the 19 Untruths by the system, but in 7 of those cases, it produced a shorter (but still factually correct) input than the preferred gold annotation for that example. We also note that the gold dataset is not consistent in how much information is included in the responses - short answers like "Looks like it" in Example C from ?? are present in the gold annotations on examples similar to Example C. Furthermore, both systems produce more Additions than Omissions, indicating that there is not a systematic bias towards shorter answers overall. In future work, the model could be made to select more descriptive responses by adding a brevity penalty in the decoder or by weighting the QCFG productions, so that responses are scored not only by the LM but also by the QCFG. ## F Dataset License The SMCalFlow dataset is distributed under the CC BY-SA 4.0 license. To the best of the authors' knowledge, the MultiWOZ datasets were released under MIT license as shown in https://github. com/budzianowski/multiwoz. Our experiments follow the intended use of these datasets, which is to advance research in dialogue systems. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 8 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. ✓ B1. Did you cite the creators of artifacts you used? Section 5.1 and Section 5.4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix F ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix F B4. 
Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 5.1 ## C ✓ **Did You Run Computational Experiments?** Left Blank. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix D The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix D C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Not applicable. Left blank. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 5.1 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix C ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 5.1 ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? This is an oversight. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? We did not go through IRB. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
wang-etal-2023-know
Know What {I} don{'}t Know: Handling Ambiguous and Unknown Questions for Text-to-{SQL}
https://aclanthology.org/2023.findings-acl.352
The task of text-to-SQL aims to convert a natural language question into its corresponding SQL query within the context of relational tables. Existing text-to-SQL parsers generate a plausible SQL query for an arbitrary user question, thereby failing to correctly handle problematic user questions. To formalize this problem, we conduct a preliminary study on the observed ambiguous and unanswerable cases in text-to-SQL and summarize them into 6 feature categories. Correspondingly, we identify the causes behind each category and propose requirements for handling ambiguous and unanswerable questions. Following this study, we propose a simple yet effective counterfactual example generation approach that automatically produces ambiguous and unanswerable text-to-SQL examples. Furthermore, we propose a weakly supervised DTE (Detecting-Then-Explaining) model for error detection, localization, and explanation. Experimental results show that our model achieves the best result on both real-world examples and generated examples compared with various baselines. We release our data and code at: \url{https://github.com/wbbeyourself/DTE}.
# Know What I Don'T Know: Handling Ambiguous And Unanswerable Questions For Text-To-Sql IMDB Rating Rotten Tomatoes Rating Content Rating ![0_image_0.png](0_image_0.png) Bing Wang†∗, Yan Gao§, Zhoujun Li†**, Jian-Guang Lou**§ †State Key Lab of Software Development Environment, Beihang University, Beijing, China §Microsoft Research Asia {bingwang, lizj}@buaa.edu.cn, {yan.gao, jlou}@microsoft.com ## Abstract The task of text-to-SQL aims to convert a natural language question into its corresponding SQL query within the context of relational tables. Existing text-to-SQL parsers generate a "plausible" SQL query for an arbitrary user question, thereby failing to correctly handle problematic user questions. To formalize this problem, we conduct a preliminary study on the observed ambiguous and unanswerable cases in text-to-SQL and summarize them into 6 feature categories. Correspondingly, we identify the causes behind each category and propose requirements for handling ambiguous and unanswerable questions. Following this study, we propose a simple yet effective counterfactual example generation approach that automatically produces ambiguous and unanswerable text-to-SQL examples. Furthermore, we propose a weakly supervised DTE (Detecting-Then-Explaining) model for error detection, localization, and explanation. Experimental results show that our model achieves the best result on both real-world examples and generated examples compared with various baselines. We release our data and code at: https://github.com/wbbeyourself/DTE. ## 1 Introduction Text-to-SQL task aims to generate an executable SQL query given a natural language (NL) question and corresponding tables as inputs. It builds a natural language interface to the database to help users access information in the database (Popescu et al., 2003), thereby receiving considerable interest from both industry and academia (Guo et al., 2019; Wang et al., 2020; Liu et al., 2021). Correspondingly, a series of new model architectures have been proposed, such as IRNet (Guo et al., 2019), RAT-SQL (Wang et al., 2020), ETA (Liu et al., 2021), etc. These models have achieved satis- ∗Work done during an internship at Microsoft Research Asia. 7.9 86% 7.6 7.8 87% 7.7 Ambiguous Question: Show me the top rating **movie.** Sales Year 1,933,099 2021 1,804,824 2021 Previous: **SELECT [Brand] ORDER BY [Sales] DESC** name" cannot be mapped to any concepts in your table. Figure 1: Ambiguous and unanswerable examples in text-to-SQL task as well as our explanations. Blue font denotes the problematic question span and red font means the "plausible" column name selected by previous models. factory results on well-known benchmarks, including Spider (Yu et al., 2018) and WikiSQL (Zhong et al., 2017). However, state-of-the-art models trained on the leaderboard datasets still demonstrate inadequate performance in practical situations, where user queries are phrased differently, which can be problematic. Concretely, from our study with realworld text-to-SQL examples (Sec. 2), it is found that about 20% of user questions are problematic, including but not limited to *ambiguous* and *unanswerable* questions. Ambiguous questions refer to those which can have multiple semantic meanings based on a single table. For instance, in Figure 1(a), the word "rating" in a user's query could be mapped to disparate columns, such as "IMDB Rating", "Rotten Tomatoes Rating", or "Content Rating". 
On the other hand, unanswerable questions pertain to those that cannot be answered based on the information provided by the tables. For example, in Figure 1(b), there is no column about "model name" in the table. State-of-the-art models are capable of generating "plausible" SQL queries, even in the presence of ambiguous or unanswerable questions. This phenomenon reveals two problems of previous methods. Firstly, with regard to data, the training samples utilized in these approaches lack ambiguous and unanswerable questions. Current training datasets gather queries by either using templates (Zhong et al., 2017) or by manually annotating controlled questions and filtering out poorly phrased and ambiguous ones (Yu et al., 2018). This data-gathering approach ensures that a correct answer exists within the table context. Secondly, in regards to the model, end-to-end parsing models ignore modeling questions in a fine-grained manner, which results in an inability to precisely detect and locate the specific reasons for ambiguous or unanswerable questions. To address the data shortage problem, we propose a counterfactual examples generation approach that automatically produces ambiguous and unanswerable text-to-SQL examples using existing datasets. Given the free-form nature of the text, conventional natural language modification techniques are not always accurate. In contrast to plain text, tables exhibit well-defined structures, usually consisting of rows and columns. Consequently, table modification is more controllable. In light of this, we propose to generate ambiguous and unanswerable examples by modifying the structured table. Furthermore, we propose a weakly supervised model DTE (Detecting-Then-Explaining) for handling ambiguous and unanswerable questions. To locate ambiguous or unanswerable tokens in user questions, we formulate the location process as a sequence labeling problem, where each token in the user question will be tagged as being related to an ambiguous label, an unanswerable label, or others (Sec. 4.1). Since there is no labeled data for sequence labeling, we extract the set of column names and cells appearing in the SQL query and use this set as the weak supervision. In this way, we could generate explicit explanations for ambiguous and unanswerable questions to end users. Note that the sequence labeling information is pseudo and derived from our model, thus alleviating heavy manual efforts for annotation. Experimental results show that our approach achieves the best results on both real-world examples collected from realistic applications and automatically generated ambiguous and unanswerable examples, compared with various baselines. Our contributions are as follows: - We conduct a preliminary study on the ambiguous and unanswerable questions in textto-SQL and summarize 6 featured categories. We also identify the causes behind each category and propose requirements that should be met in explainable text-to-SQL systems. - We propose a counterfactual examples generation approach that automatically produces ambiguous and unanswerable text-to-SQL examples via modifying structured tables. - We propose a weakly supervised model for ambiguous and unanswerable question detection and explanation. Experimental results show that our approach brings the model with the best explainability gain compared with various baselines. 
## 2 Preliminary Study On Ambiguous And Unanswerable Problem To understand user behaviors in a real-world application, we conduct a comprehensive user study on our commercial text-to-SQL product. Firstly, around 3,000 failed user questions in the product are collected. They obtained over 30 data tables from multiple domains, including education, finance, government, etc. Then, we manually group these questions into multiple categories. At last, we explore the causes and potential solutions to deal with them. According to our analysis, nearly 20% of the questions are problematic, including 55% ambiguous and 45% unanswerable questions respectively, revealing the importance of handling problematic questions. In the following, we will introduce their categories, causes, and potential solutions for handling them. ## 2.1 Problem Categories In this section, we formalize ambiguous and unanswerable questions and identify 6 sub-categories. Ambiguous Problem In the text-to-SQL task, ambiguity means that one user question could have multiple semantic meanings (e.g., SQL query) based on one table. Specifically, we can subdivide them into two sub-categories, namely column ambiguity and value ambiguity, which account for 45% and 10% of all problematic questions, respectively. Percentage Column ![2_image_0.png](2_image_0.png) Columns: Movie, IMDB Rating, Rotten Tomatoes Rating, Content Rating 45% Percentage Column Question: Show me model name by sales. Note: The span "model name" is unanswerable because no such a column named 30% Value Question: Count the total of Private hospitals. **Columns: NHHospitalCategory, State, Year, BenefitsPaid** Note: The span "Private hospitals" is unanswerable because no such value in 7% Calculation ![2_image_2.png](2_image_2.png) Columns: Country, Imports, Exports, … Note: The span "balance of trade" is unanswerable, because model does not know the calculation formula : Balance of Trade = Exports - Imports. 6% Ambiguous Problem Unanswerable Problem 10% ![2_image_1.png](2_image_1.png) 2% Value Columns: Engineer, Constructor, License issued, License expires, … Column ambiguity means that some tokens in the user question could be mapped to multiple columns. For example in Table 1, we don't know exactly which "Rating" the user wants since there are three rating columns. Value ambiguity means that some tokens in the user question could be mapped to multiple cell values in the table. For example in Table 1, *Jack* in the user question can be mapped to the name of either an "Engineer" or a "Constructor". Unanswerable Problem The unanswerable problem can be classified into four categories: column unanswerable, value unanswerable, calculation unanswerable, and out-of-scope, which account for 30%, 7%, 6%, and 2% of all problematic questions, respectively, as shown in the bottom part of Table 1. (1) The column unanswerable means that the concepts mentioned in the question do not exist in table columns. In the first example, the model name does not exist in the given columns, but our product incorrectly associates it with the irrelevant column "Brand". (2) The value unanswerable indicates that the user question refers to cell values that do not exist in the table. As the second example shows, no such *Private hospitals* value exists in the table. (3) The calculation unanswerable category is more subtle. It requires mapping the concept mentioned in the user question to composite operations over existing table columns. 
For example, the *balance of trade* is a concept derived from "Exports − *Imports*". Such mapping functions require external domain knowledge. Our product which is trained from a general corpus captures limited domain knowledge, and thus often fails. (4) The out-of-scope category means that the question is out of SQL's operation scope, such as chart operations. ## 2.2 Causes Through communicating with end users and analyzing the characteristic of questions as well as corresponding table contexts, we identify three fundamental causes for ambiguous and unanswerable questions: (1) end users are unfamiliar with the content of the table and don't read the table carefully, causing unanswerable questions; (2) ambiguity arises due to the richness of natural language expressions and the habitual omission of expressions by users (Radhakrishnan et al., 2020); (3) the emergence of similar concepts in the table tends to cause more ambiguous questions. Note that around 95% of problematic questions are constructed un- Schema: Date, Attendance, Record, **Score** Question: What is the score **where record is 0–2 ?** ![3_image_0.png](3_image_0.png) O COL **O COL O VAL O** Schema: Date, Attendance, Record, **Score** Question: What is the score **where record is 0–2 ?** Label: O O O UNK **O COL O VAL O** Question: What is the score **where record is 0–2 ?** Label: O O O AMB **O COL O VAL O** Figure 2: Ambiguous and unanswerable examples generated by our approach. intentionally, revealing the importance of making users *conscious of being wrong*. ## 2.3 Explainable Parser Requirements Based on the findings and analysis above, to deal with ambiguous and unanswerable questions, we propose to make a text-to-SQL system *know-whatI-don't-know*. On one hand, a parsing system should detect ambiguous and unanswerable questions. On the other hand, a parsing system should locate the specific reasons and generate corresponding explanations to guide the user in rectification. Achieving *know-what-I-don't-know* can benefit from two aspects: (1) from model view: enhances models' ability to deal with problematic questions and improve user trust; (2) from user view: makes it clear to users which part of their questions are problematic, guiding them to revise their questions. In our user study experiments, we find that 90% of the problematic questions can be corrected by prompting users with explanations shown in Table 1, and the remaining 10% of questions can only be solved by injecting external knowledge into the model. In the following, we will introduce how we mitigate the challenges mentioned in Sec. 1 ## 3 Counterfactual Examples Generation To alleviate the data shortage issue, we propose a counterfactual examples generation approach for automatically generating problematic text-to-SQL examples. 
In our approach, we mainly focus on generating two major types of problematic exam- | NoisySP | WikiSQL | WTQ | | |-------------------------|-----------|--------|-------| | Train # ambiguous | 4,760 | 0 | 0 | | # unanswerable | 10,673 | 0 | 0 | | # answerable | 0 | 56,350 | 7,696 | | # tables | 4,861 | 17,984 | 1,283 | | Development # ambiguous | 1,581 | 0 | 0 | | # unanswerable | 1,652 | 0 | 0 | | # answerable | 0 | 8,142 | 1,772 | | # tables | 1,232 | 2,614 | 325 | | Test # ambiguous | 2,332 | 0 | 0 | | # unanswerable | 2,560 | 0 | 0 | | # answerable | 0 | 15,362 | 0 | | # tables | 1,993 | 5,031 | 0 | ples: column ambiguity and column unanswerable, which account for 75% 1 of all problematic examples based on our preliminary study. Note that the counterfactual examples are generated via modifying structured tables instead of natural language questions. The reason is that conditional modification on a structured table is more controllable than unstructured text. Finally, 23k problematic examples are obtained based on two text-to-SQL datasets, i.e., WikiSQL (Zhong et al., 2017) and WTQ (Shi et al., 2020). Next, we will introduce the details of our approach. ## 3.1 Our Approach Given an answerable text-to-SQL example that contains a question Q = (q1*, . . . , q*m), a DB schema (also a column set) C = {c1*, . . . , c*n} and a SQL query S, our goal is to generate problematic examples, denoting as (Q, C ′, S) triplets. By removing evidence supporting Q from C or adding ambiguous ones, a new DB schema C ′is generated. Unanswerable Examples Generation Specifically, we randomly sample a target column ctin the SQL query S. Then we delete ct from C to remove the supporting evidence for question spans Qs = (qi*, . . . , q*j ) that mentioned ct. At last, the question span Qs is labeled as UNK. For instance, in the unanswerable example of Figure 2, given an original question "What is the score where record is 0–2?", the question span "score" is grounded to the column "Score". By deleting the column "Score", we obtain an unanswerable example. 1Proposal for handling the remaining 25% questions can be found in Appendix | Labels | Description | Example(Token:Label) | |----------|-------------------|------------------------| | COL | Column Mention | sales: B-COL | | VAL | Value Mention | godfather: B-VAL | | AMB | Ambiguous Span | rating: B-AMB | | UNK | Unanswerable Span | model: B-UNK | | O | Nothing | the: O | Ambiguous Examples Generation Similar to unanswerable examples generation, we generate an ambiguous example by firstly deleting a column ct and then adding two new columns. The critical point is that newly added columns are expected to (1) fit nicely into the table context; (2) have high semantic associations with the target column ct yet low semantic equivalency (e.g. "opponent score" is semantically associated with "score", but it is not semantic equivalent). To achieve this, we leverage an existing contextualized table augmentation framework, CTA (Pi et al., 2022), tailored for better contextualization of tabular data, to collect new column candidates. We select target columns from within the SQL, and typically choose 2-3 nearsynonyms for each column candidate. After that, we rerank the column candidates by their length and similarity with the column ct, and keep the top 2 as our newly added columns. As shown in the ambiguous example of Figure 2, we first delete the original column "Score", then add two domainrelevant and semantically associated columns "Our Score" and "Opponent Score". 
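A minimal sketch of the two generation procedures just described is given below; it is our own illustration, the names and data structures are assumptions, and the CTA-based candidate generation and reranking are elided (the distractor columns are passed in directly).

```python
def make_unanswerable(question_tokens, labels, columns, target_col, mention_span):
    """Delete the column grounded by `mention_span` and relabel that span as UNK."""
    new_columns = [c for c in columns if c != target_col]
    new_labels = list(labels)
    start, end = mention_span
    for i in range(start, end):
        new_labels[i] = ("B-" if i == start else "I-") + "UNK"
    return question_tokens, new_labels, new_columns

def make_ambiguous(question_tokens, labels, columns, target_col, mention_span, distractors):
    """Delete `target_col`, add two semantically related (but not equivalent)
    columns, e.g. selected via CTA, and relabel the span as AMB."""
    new_columns = [c for c in columns if c != target_col] + list(distractors[:2])
    new_labels = list(labels)
    start, end = mention_span
    for i in range(start, end):
        new_labels[i] = ("B-" if i == start else "I-") + "AMB"
    return question_tokens, new_labels, new_columns

# Example following Figure 2: the span "score" (token 3) is grounded to column "Score".
q = "What is the score where record is 0-2 ?".split()
labels = ["O", "O", "O", "B-COL", "O", "B-COL", "O", "B-VAL", "O"]
cols = ["Date", "Attendance", "Record", "Score"]
print(make_unanswerable(q, labels, cols, "Score", (3, 4)))
print(make_ambiguous(q, labels, cols, "Score", (3, 4), ["Our Score", "Opponent Score"]))
```

In both cases only the structured table is modified, which is what keeps the generation controllable.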
## 3.2 Dataset Statistics

Leveraging our counterfactual examples generation approach, we obtain a dataset, called NOISYSP, based on two cross-domain text-to-SQL datasets, i.e., WikiSQL (Zhong et al., 2017) and WTQ (Shi et al., 2020). Consistent with our preliminary study, we generate 20% of the original data count as problematic examples. Finally, we get 23k problematic examples. Detailed statistics can be seen in Table 2. To ensure the quality of the development set and test set, we hired 3 annotators to check the candidate set of newly added columns for ambiguous examples and then drop low-quality ones. Note that the rate of low quality is only 5%, demonstrating the effectiveness of our approach.

## 4 Model: DTE

In this section, we introduce our Detecting-Then-Explaining (DTE) model to handle ambiguous and unanswerable questions. To generate a fine-grained explanation, we formulate it as a sequence labeling problem, where each token in the user question will be tagged as being related to an ambiguous label, an unanswerable label, or others. Concretely, DTE consists of three modules: a concept prediction module, a grounding module, and a sequence labeling module. The grounding module generates pseudo-label information to guide the training of the sequence labeling module. The overall architecture of DTE is shown in Figure 3.

## 4.1 Task Definition

Given an input question Q = (q_1, . . . , q_m) and a data table (with a concept set C = {c_1, . . . , c_k} containing columns and cell values), the goal of sequence labeling is to output a labeling sequence L = (l_1, . . . , l_m) for each token in Q. It can be represented by tagging each token in the question with a set of BIO labels (Tjong Kim Sang and Veenstra, 1999). Specifically, we define 5 kinds of labels for question tokens, namely COL, VAL, AMB, UNK, and O. Their descriptions and examples are shown in Table 3.

## 4.2 Preliminaries: ETA for Grounding

In this work, we formulate problematic question detection as a sequence labeling task, whose training process requires large-scale label annotations as supervision. However, such annotations are expensive and time-consuming. To obtain label information in an efficient and cheap way, we propose to leverage the grounding result of the text-to-SQL task and transform it into a pseudo-labeling sequence. In particular, we use ETA (Liu et al., 2021), a pretrained probing-based grounding model, as the backbone of our approach. The major advantage of ETA is that, compared with models relying on expensive annotations of grounding, it only needs supervision that can be easily derived from SQL queries.

## 4.3 Sequence Labeling Module

To meet the requirements of detecting and locating ambiguous and unanswerable question spans, we design a sequence labeling module, which is intuitively suitable for our sequential modeling purpose. The sequence labeling module consists of a dropout layer, a linear layer, and a CRF layer, following best practices in previous work (Yang et al., 2018). Given a contextualized embedding sequence (e_q1, . . . , e_qm), the goal of the sequence labeling module is to output the label sequence L = (l_1, . . . , l_m) defined in Section 4.1.

Figure 3: The overall architecture of DTE, consisting of the concept prediction module, the grounding module, and the sequence labeling module.

## 4.4 Multi-Task Training

Our multi-task training process involves three steps: (1) train the concept prediction module; (2) warm up the grounding module to get alignment pairs; (3) train the sequence labeling module with the pseudo tags derived from the grounding module.
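As a concrete illustration of the sequence labeling module in Section 4.3, a minimal sketch follows, assuming a HuggingFace BERT encoder and the pytorch-crf package mentioned in Appendix A.2. The class name, the exact BIO tag inventory, and the hyperparameters are illustrative rather than the released implementation.

```python
import torch
from torch import nn
from torchcrf import CRF                      # pytorch-crf (see Appendix A.2)
from transformers import AutoModel

# BIO tags over the five label types defined in Section 4.1 (assumed inventory).
LABELS = ["O", "B-COL", "I-COL", "B-VAL", "I-VAL",
          "B-AMB", "I-AMB", "B-UNK", "I-UNK"]

class SequenceLabelingHead(nn.Module):
    """Dropout -> linear -> CRF head over contextualized token embeddings."""

    def __init__(self, encoder_name="bert-base-uncased", dropout=0.1):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.dropout = nn.Dropout(dropout)
        self.emit = nn.Linear(hidden, len(LABELS))
        self.crf = CRF(len(LABELS), batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        emissions = self.emit(self.dropout(h))
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative CRF log-likelihood over the (pseudo) tag sequence.
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        # Inference: most likely label path per question.
        return self.crf.decode(emissions, mask=mask)
```

At inference, AMB and UNK spans are read off the decoded label path and passed to the response templates described next.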
## 4.5 Response Generation

At the inference step, given a question and the table information, DTE predicts a label for each question token and outputs grounding pairs between question tokens and table entities. If an AMB (or UNK) label occurs, the question is ambiguous (or unanswerable). To generate corresponding interpretations for end users, we carefully design two response templates. More details about the templates can be found in Appendix A.3.

## 5 Experiments

In this section, we systematically evaluate the effectiveness of DTE. Specifically, we examine DTE's performance in two aspects: (1) the performance of the sequence labeling module in detecting ambiguous and unanswerable tokens; (2) the grounding performance for each label to provide evidence for generating explainable responses to end users. In addition, we report evaluation results on text-to-SQL tasks.

## 5.1 Experimental Setup

**Datasets** We conduct experiments based on the following datasets: (1) NOISYSP with 23k automatically generated examples, (2) two cross-domain text-to-SQL datasets, i.e., WikiSQL (Zhong et al., 2017) and WTQ (Shi et al., 2020)², and (3) 3,000 real-world examples collected by us (Sec. 2). All models are trained with the NOISYSP, WikiSQL, and WTQ datasets. Specifically, the real-world examples are only used for testing. Dataset statistics are shown in Table 2.

**Evaluation Metric** To evaluate sequence labeling performance, we report accuracy for each label category. For grounding performance evaluation, we report grounding accuracy for each label, except for UNK and O, which have no grounding results.

**Baseline Models** We choose two types of representative models for comparison: (1) the heuristic-based method (Sorokin and Gurevych, 2018), which is widely used in entity linking and grounding tasks; (2) the learning-based method, ETA (Liu et al., 2021), which is a strong grounding baseline, leveraging the intrinsic language understanding ability of pretrained language models. We update them with slight modifications to fit our task because their vanilla versions are not directly applicable. More implementation details about the baselines and DTE can be found in Appendix A.

| Models    | COL  | VAL  | AMB  | UNK  | O    |
|-----------|------|------|------|------|------|
| Heuristic | 61.7 | 66.8 | 57.8 | 60.7 | 72.1 |
| ETA+BERT  | 83.4 | 87.9 | 75.6 | 70.2 | 80.9 |
| ETA+BERTL | 85.7 | 90.4 | 76.4 | 71.4 | 82.7 |
| DTE+BERT  | 88.2 | 94.1 | 81.4 | 78.6 | 90.7 |
| DTE+BERTL | **89.4** | **95.7** | **83.2** | **80.3** | **92.4** |

Table 4: Sequence labeling accuracy (%) for each label category on the NOISYSP test set.

| Models    | COL  | VAL  | AMB  |
|-----------|------|------|------|
| Heuristic | 55.9 | 67.2 | 56.2 |
| ETA+BERT  | 71.4 | 75.3 | 60.7 |
| ETA+BERTL | 72.4 | 77.8 | 62.4 |
| DTE+BERT  | 73.4 | 78.2 | 79.8 |
| DTE+BERTL | **75.1** | **80.7** | **82.4** |

Table 5: Grounding accuracy (%) for each label category on the NOISYSP test set.

## 5.2 Experimental Results On NOISYSP

**Sequence Labeling Results** As shown in Table 4, we compare the performance of DTE with various baselines on the test set of NOISYSP. DTE outperforms previous baselines across all label categories, which demonstrates the superiority of our DTE model. Compared with the heuristic-based method, DTE significantly improves performance, with 25% average gains over all label categories, which shows that our NOISYSP dataset is challenging and the heuristic-based method is far from solving these questions. Besides, DTE consistently outperforms the ETA+BERT baseline by a large margin, not only improving the ambiguous and unanswerable label accuracy by 7% and 11%, respectively, but also improving column and value detection accuracy, demonstrating the effectiveness of our approach for detection.
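The per-label numbers in Tables 4 and 5 can be reproduced from predicted and gold tag sequences with a few lines. The sketch below reflects one natural reading of "accuracy for each label category" (token-level, grouped by the gold label with B-/I- prefixes collapsed) and is not the authors' official evaluation script.

```python
from collections import defaultdict

def per_label_accuracy(gold_seqs, pred_seqs):
    """Token-level accuracy per label category (COL, VAL, AMB, UNK, O).

    B-/I- prefixes are collapsed into their category; a token counts as correct
    when the predicted category matches the gold one.  This grouping is our
    assumption about how the per-label scores are aggregated.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for gold, pred in zip(gold_seqs, pred_seqs):
        for g, p in zip(gold, pred):
            cat = g.split("-")[-1]            # "B-AMB" -> "AMB", "O" -> "O"
            total[cat] += 1
            correct[cat] += int(cat == p.split("-")[-1])
    return {c: correct[c] / total[c] for c in total}

# Labels of the ambiguous example in Figure 2, written in BIO form:
gold = [["O", "O", "O", "B-AMB", "O", "B-COL", "O", "B-VAL", "O"]]
pred = [["O", "O", "O", "B-AMB", "O", "B-COL", "O", "O", "O"]]
print(per_label_accuracy(gold, pred))
# {'O': 1.0, 'AMB': 1.0, 'COL': 1.0, 'VAL': 0.0}
```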
**Grounding Results** In order to identify the specific reasons for ambiguous questions, we need to not only identify the target spans, but also establish grounding by finding the linked concept (column or value). As shown in Table 5, we compare DTE's grounding performance with various baselines. Note that an unanswerable span in a question does not require grounding to any concept, thus we do not report its grounding result. We observe that DTE consistently outperforms the baselines across the three label categories on grounding performance, especially on ambiguous grounding, whose linked concepts are more varied and diverse.

Figure 4: Sequence labeling accuracy on the realistic test data (labels VAL, AMB, UNK, and O; models: Heuristic, ETA+BERT-Large, and DTE+BERT-Large).

Figure 5: Grounding accuracy on the realistic test data (labels VAL and AMB; models: Heuristic, ETA+BERT-Large, and DTE+BERT-Large).

## 5.3 Generalization On Realistic Data

We verify the generalization ability of DTE by conducting *out-of-distribution* experiments on the 3,000 realistic examples that we collected from our commercial products. These examples cover more than 30 data tables from multiple domains such as education, finance, and government. As shown in Figure 4, our DTE model still outperforms all baselines consistently and achieves promising performance in ambiguous and unanswerable detection, with 75.2% and 70.2% sequence labeling accuracy, respectively. From Figure 5, we observe that our DTE model outperforms the other baselines by a large margin in grounding accuracy of ambiguous spans. These results indicate the generalization ability of DTE for handling ambiguous and unanswerable questions on realistic data.

## 5.4 Text-to-SQL Results

To verify the influence of DTE on the text-to-SQL task, we report the exact match accuracy (Ex.Match) and execution accuracy (Ex.Acc) on the WTQ dataset.

| Model      | Dev Ex.Match | Dev Ex.Acc | Test Ex.Acc |
|------------|--------------|------------|-------------|
| ALIGN      | 37.8         | 56.9       | 46.6        |
| ALIGN+BERT | 44.7         | 63.8       | 51.8        |
| ETA+BERT   | 47.6         | 66.6       | 53.8        |
| DTE+BERT   | 48.1         | 66.5       | 54.2        |

Table 6: Text-to-SQL results (%) on the WTQ dataset.

| Models     | COL  | VAL  | O    |
|------------|------|------|------|
| DTE w/ SL  | 75.1 | 80.7 | 90.5 |
| DTE w/o SL | 72.6 | 75.2 | 82.8 |

Table 7: Grounding accuracy (%) with and without the sequence labeling (SL) module.

As shown in Table 6, compared with ALIGN (Lei et al., 2020) and ETA (Liu et al., 2021), DTE shows slightly better performance in both exact match accuracy and execution accuracy. It should be noted that all questions in this experiment are normal questions, i.e., without ambiguous and unanswerable questions. The result shows that DTE can boost text-to-SQL performance instead of damaging it.

## 5.5 Discussion

**What are the remaining errors?** We manually analyze 20% of the remaining errors in the NOISYSP dataset and summarize four main error types: (1) wrong detection (25%), where our model either misses or over-predicts the ambiguous or unanswerable label; (2) widened span (30%), where our model predicts a longer span than the gold result; (3) narrowed span (25%), where the model infers a narrower span than the gold result; (4) other errors (20%), which are caused by the grounding module. This error analysis indicates that the main challenge for DTE is precise localization rather than detection, because the second and third error types (55%) are caused by wrong span boundaries. More detailed examples can be seen in Appendix B.

**Can the sequence labeling module benefit the grounding module?**
As a multi-task training approach, it is critical to determine the effect of introducing extra tasks on the models' performance on original tasks. To verify the influence of the sequence labeling module on grounding results, we conduct an ablation study with or without the sequence labeling module. As we can see in Table 7, the grounding module does achieve better performance on columns and values alignment with the sequence labeling module. Through our analysis, we find the performance gain mainly comes from long concept mention (with token length > 4). The reason is that the CRF layer in the sequence labeling module can strengthen the grounding module's ability to capture long-distance dependency. In summary, we can conclude that the sequence labeling task can fit in well with the grounding task. ## 6 Related Work Problematic Question Detection. Existing works on problematic question detection can be classified into two categories: (1) heuristic-based methods leverage elaborate rules to detect and locate problematic questions span (Sorokin and Gurevych, 2018; Li et al., 2020; Wu et al., 2020), suffering from heavy human efforts on feature engineering. Besides, some approaches (Dong et al., 2018; Yao et al., 2019) estimate the confidence of parsing results, relying on existing parsing models; (2) on the contrary, learning-based methods don't rely on heuristic rules and parsing models. For example, Arthur et al. (2015) jointly transforms an ambiguous query into both its meaning representation and a less ambiguous NL paraphrase via a semantic parsing framework that uses synchronous context-free grammars. Zeng et al. (2020) trains a question classifier to detect problematic questions and then employed a span index predictor to locate the position. However, the index predictor can only find one error span per example, which limits its usage. In this work, we propose a learning-based approach that could handle multiple errors in problematic questions. Uncertainty Estimation. Recent works on uncertainty estimation of neural networks explore diverse solutions, such as deep ensembles in prediction, calibration, and out-of-domain detection (Liu et al., 2020). However, these methods need network and optimization changes, generally ignore prior data knowledge (Loquercio et al., 2020), and can only provide uncertainty in predictions without identifying the reasons. In this work, we propose the counterfactual examples generation approach to adding prior knowledge to the training data and then propose a weakly supervised model for problematic span detection and give explainable reasons. ## 7 Conclusion We investigate the ambiguous and unanswerable questions in text-to-SQL and divide them into 6 categories, then we sufficiently study the characteristics and causes of each category. To alleviate the data shortage issue, we propose a simple yet effective counterfactual example generation approach for automatically generating ambiguous and unanswerable text-to-SQL examples. What's more, we propose a weakly supervised model for ambiguous and unanswerable question detection and explanation. Experimental results verify our model's effectiveness in handling ambiguous and unanswerable questions and demonstrate our model's superiority over baselines. ## Limitations Among the six ambiguous and unanswerable problem categories in Table 1, our counterfactual example generation approach can not cover the calculation unanswerable and out-of-scope examples generation. 
The reason is that our approach focuses on the table transformation ways while generating the calculation unanswerable and out-of-scope examples requires conditional NL modification techniques. We leave this as our future work. ## Ethics Statement Our counterfactual examples generation approach generates a synthesized dataset based on two mainstream text-to-SQL datasets, WikiSQL (Zhong et al., 2017) and WTQ (Shi et al., 2020), which are free and open datasets for research use. All claims in this paper are based on the experimental results. Every experiment can be conducted on a single Tesla V100. No demographic or identity characteristics information is used in this paper. ## Acknowledgements This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 62276017, U1636211, 61672081), the 2022 Tencent Big Travel Rhino-Bird Special Research Program, and the Fund of the State Key Laboratory of Software Development Environment (Grant No. SKLSDE-2021ZX-18). We also would like to thank all the anonymous reviewers for their constructive feedback and insightful comments. ## References Philip Arthur, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Semantic parsing of ambiguous input through paraphrasing and verification. *Transactions of the Association for Computational Linguistics*, 3:571–584. Li Dong, Chris Quirk, and Mirella Lapata. 2018. Confidence modeling for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 743–753, Melbourne, Australia. Association for Computational Linguistics. Longxu Dou, Yan Gao, Xuqi Liu, Mingyang Pan, Dingzirui Wang, Wanxiang Che, Dechen Zhan, MinYen Kan, and Jian-Guang Lou. 2022. Towards knowledge-intensive text-to-SQL semantic parsing with formulaic knowledge. In *Proceedings of the* 2022 Conference on Empirical Methods in Natural Language Processing, pages 5240–5253, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, JianGuang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-SQL in cross-domain database with intermediate representation. In *Proceedings of* the 57th Annual Meeting of the Association for Computational Linguistics, pages 4524–4535, Florence, Italy. Association for Computational Linguistics. Wenqiang Lei, Weixin Wang, Zhixin Ma, Tian Gan, Wei Lu, Min-Yen Kan, and Tat-Seng Chua. 2020. Re-examining the role of schema linking in text-toSQL. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP), pages 6943–6954, Online. Association for Computational Linguistics. Yuntao Li, Bei Chen, Qian Liu, Yan Gao, Jian-Guang Lou, Yan Zhang, and Dongmei Zhang. 2020. "what do you mean by that?" a parser-independent interactive approach for enhancing text-to-SQL. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6913–6922, Online. Association for Computational Linguistics. Jeremiah Liu, Zi Lin, Shreyas Padhy, Dustin Tran, Tania Bedrax Weiss, and Balaji Lakshminarayanan. 2020. Simple and principled uncertainty estimation with deterministic deep learning via distance awareness. Advances in Neural Information Processing Systems, 33:7498–7512. Qian Liu, Dejian Yang, Jiahui Zhang, Jiaqi Guo, Bin Zhou, and Jian-Guang Lou. 2021. Awakening latent grounding from pretrained language models for semantic parsing. 
In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 1174–1189, Online. Association for Computational Linguistics. Antonio Loquercio, Mattia Segu, and Davide Scaramuzza. 2020. A general framework for uncertainty estimation in deep learning. *IEEE Robotics and Automation Letters*, 5(2):3153–3160. Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1470– 1480, Beijing, China. Association for Computational Linguistics. Xinyu Pi, Bing Wang, Yan Gao, Jiaqi Guo, Zhoujun Li, and Jian-Guang Lou. 2022. Towards robustness of text-to-SQL models against natural and realistic adversarial table perturbation. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2007–2022, Dublin, Ireland. Association for Computational Linguistics. Ana-Maria Popescu, Oren Etzioni, and Henry A. Kautz. 2003. Towards a theory of natural language interfaces to databases. In *IUI '03*, pages 100–112. IEEE. Karthik Radhakrishnan, Arvind Srikantan, and Xi Victoria Lin. 2020. ColloQL: Robust text-to-SQL over search queries. In *Proceedings of the First Workshop on Interactive and Executable Semantic Parsing*, pages 34–45, Online. Association for Computational Linguistics. Tianze Shi, Chen Zhao, Jordan Boyd-Graber, Hal Daumé III, and Lillian Lee. 2020. On the potential of lexico-logical alignments for semantic parsing to SQL queries. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1849–1864, Online. Association for Computational Linguistics. Daniil Sorokin and Iryna Gurevych. 2018. Mixing context granularities for improved entity linking on question answering data across entity categories. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 65–75, New Orleans, Louisiana. Association for Computational Linguistics. Erik F. Tjong Kim Sang and Jorn Veenstra. 1999. Representing text chunks. In *Ninth Conference of the* European Chapter of the Association for Computational Linguistics, pages 173–179, Bergen, Norway. Association for Computational Linguistics. Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. RAT-SQL: Relation-aware schema encoding and linking for textto-SQL parsers. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 7567–7578, Online. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Zhiyong Wu, Ben Kao, Tien-Hsuan Wu, Pengcheng Yin, and Qun Liu. 2020. *PERQ: Predicting, Explaining,* and Rectifying Failed Questions in KB-QA Systems, page 663–671. Association for Computing Machinery, New York, NY, USA. Jie Yang, Shuailong Liang, and Yue Zhang. 2018. 
Design challenges and misconceptions in neural sequence labeling. In *Proceedings of the 27th International Conference on Computational Linguistics*, pages 3879–3889, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Ziyu Yao, Yu Su, Huan Sun, and Wen-tau Yih. 2019. Model-based interactive semantic parsing: A unified framework and a text-to-SQL case study. In *Proceedings of the 2019 Conference on Empirical Methods* in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5447–5458, Hong Kong, China. Association for Computational Linguistics. Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911–3921, Brussels, Belgium. Association for Computational Linguistics. Jichuan Zeng, Xi Victoria Lin, Steven C.H. Hoi, Richard Socher, Caiming Xiong, Michael Lyu, and Irwin King. 2020. Photon: A robust cross-domain textto-SQL system. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 204–214, Online. Association for Computational Linguistics. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103, 1:135–154. ## A Implementation Details A.1 Baselines Implementation We modified ETA since the vanilla version (Liu et al., 2021) does not support ambiguous and unanswerable span detection. For a fair comparison, we update the vanilla ETA in two ways. Firstly, in the original inference part, the vanilla version of ETA applies the greedy linking algorithm to only keep the top 1 confidence score-related schema item (column or value). We change the selection process by allowing the top 3 candidates to be chosen, whose confidence score should be greater than the threshold. We consider those spans with multigrounding results as ambiguous spans. Second, to enable ETA to handle unanswerable questions, we add a UNK column to the schema part and train the model to enable unanswerable span to get closer to the UNK column. We consider those linked with the UNK column as unanswerable spans. The heuristic-based baseline (Sorokin and Gurevych, 2018) is n-gram matching via enumerating all ngram (n ≤ 5) phrases in a natural language question and links them to schema items by fuzzy string matching. We consider a span as an ambiguous one when it can fuzzy match multiple results. Similarly, if a noun phrase span can match no results, it is considered to unanswerable span. ## A.2 Dte Implementation Our DTE model consists of a BERT encoder, and three task modules, namely the concept prediction module, grounding module, and sequence labeling module. We implement the first two modules following the implementation details mentioned in Liu et al. (2021) and use the same hyperparameters. In addition, the sequence labeling module is built by a dropout layer, a linear layer, and a CRF layer which is based on the open-source repository pytorch-crf3. The response template for ambiguous questions is "Oops, this question has multiple semantic meanings. X may refer to either "concept1", "concept2", or "c3"". 
What's more, we design the template for unanswerable questions as " Sorry, we can't find an answer for you since "X" cannot be mapped to any concepts in your table". Examples can be seen in Figure 1. ## A.3 Response Templates The response template for ambiguous questions is "Oops, this question has multiple semantic mean-3https://github.com/kmkurn/pytorch-crf ings. X may refer to either "concept1", "concept2", or "c3"". What's more, we design the template for unanswerable questions as " Sorry, we can't find an answer for you since "X" cannot be mapped to any concepts in your table". Examples can be seen in Figure 1. ## A.4 Training Hyper-Parameters For all experiments, we employ the AdamW optimizer and the default learning rate schedule strategy provided by Transformers library (Wolf et al., 2020). The learning rate of other non-BERT layers is 1 × 10−4. The max training step is 100,000 and our training batch size is 35. The training process last 6 hours on a single 16GB Tesla V100 GPU. ## B Examples Of Noisy**Sp Dataset** In this section, we demonstrate some good and bad cases of our DTE model prediction. Good case examples are shown in Table 8 and Table 9. Bad case examples are shown in Table 10 and Table 11. Q: What is the minimum population **of the parish with a 750.51 km area ?** Gold: O O O O B-AMB **O O O O O B-VAL I-VAL I-VAL O** Pred: O O O O B-AMB **O O O O O B-VAL I-VAL I-VAL O** Schema: Official Name || Area km 2 || foreign-born population || total estimated **population** ![10_image_0.png](10_image_0.png) Gold: O B-AMB **O O B-COL O B-VAL O** Pred: O B-AMB **O O B-COL O B-VAL O** Schema: State || Type || born name || first name **|| Title || Royal house || From** Table 8: Ambiguous good cases by DTE on the NOISYSP data. ![10_image_1.png](10_image_1.png) Gold: O B-UNK **B-COL O B-VAL O** Pred: O B-UNK **B-COL O B-VAL O** Q: What prefecture **is listed in the map as number 39 ?** Gold: O B-UNK **O O O O B-COL O B-COL B-VAL O** Pred: O B-UNK **O O O O B-COL O B-COL B-VAL O** Schema**: Number in map || Area (km²) || Population (2001) || Pop. density (/km²)** O B-AMB **O O O B-COL I-COL I-COL O B-VAL** ![11_image_0.png](11_image_0.png) O B-UNK I-UNK I-UNK **O B-COL I-COL I-COL O B-VAL** Gold: O O O O B-AMB **O B-VAL I-VAL I-VAL I-VAL O** Pred: O O O **B-AMB I-AMB O B-VAL I-VAL I-VAL I-VAL O** Q: I want the date **of appointment for manner of departure being sacked** Table 10: Ambiguous bad cases by DTE on the NOISYSP data. Q**: How many hectars of land is in Kaxholmen ?** Gold: O O B-UNK I-UNK I-UNK **O O B-VAL O** Pred: O O **O O B-UNK O O B-VAL O** Schema**: Urban area (locality) || Municipality || Population || Density (inh./km²) || Code** ![11_image_1.png](11_image_1.png) Q**: What was the elimination number of the fighter who fought within 26:15 ?** O **B-UNK I-UNK O O O O O O B-VAL O** O O O **O O O O O O B-VAL O** Table 11: Unanswerable bad cases by DTE on the NOISYSP data. ## C Proposal For Remaining 25% Problematic Questions The remaining categories are (1) Value Ambiguity (10%); (2) Value Unanswerable (7%); (3) Calculation Unanswerable (6%); and (4) Out of Scope (2%). Although our data generation method does not cover these categories, our DTE model trained with our generated dataset NoisySP can generalize to questions of these categories (results are shown in Sec. 5.3), especially for Value Ambiguity and Value Unanswerable. This is because columns and values are treated as concepts that share the same pattern. 
For the Calculation Unanswerable category, formulaic knowledge is needed to inject the model with the necessary background information. Existing works, such as KnowSQL (Dou et al., 2022), are intended to solve this kind of problem. As for the Out of Scope problem, which usually calls for graphic operation or other unsupported operations, it can be easily handled with a blacklist or a simple classifier. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations ✗ A2. Did you discuss any potential risks of your work? Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? abstract, introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 4.2 ✓ B1. Did you cite the creators of artifacts you used? 4.2 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Ethics Statement B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 3 ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C ✓ **Did You Run Computational Experiments?** 5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? A The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 5, A ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 5.2, 5.3, 5.4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 5 ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 3 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? 3 ✗ D2. 
Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? China ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? no ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? 3 ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? no
li-etal-2023-rethinking
Rethinking Document-Level Relation Extraction: A Reality Check
https://aclanthology.org/2023.findings-acl.353
Recently, numerous efforts have continued to push up performance boundaries of document-level relation extraction (DocRE) and have claimed significant progress in DocRE. In this paper, we do not aim at proposing a novel model for DocRE. Instead, we take a closer look at the field to see if these performance gains are actually true. By taking a comprehensive literature review and a thorough examination of popular DocRE datasets, we find that these performance gains are achieved upon a strong or even untenable assumption in common: all named entities are perfectly localized, normalized, and typed in advance. Next, we construct four types of entity mention attacks to examine the robustness of typical DocRE models by behavioral probing. We also have a close check on model usability in a more realistic setting. Our findings reveal that most of current DocRE models are vulnerable to entity mention attacks and difficult to be deployed in real-world end-user NLP applications. Our study calls more attentions for future research to stop simplifying problem setups, and to model DocRE in the wild rather than in an unrealistic Utopian world.
# Rethinking Document-Level Relation Extraction: A Reality Check Jing Li1, Yequan Wang2, Shuai Zhang3 **and Min Zhang**1∗ 1Harbin Institute of Technology, Shenzhen, China 2Beijing Academy of Artificial Intelligence, Beijing, China 3ETH Zurich, Switzerland {li.jing, zhangmin2021}@hit.edu.cn, [email protected], [email protected] ## Abstract Recently, numerous efforts have continued to push up performance boundaries of documentlevel relation extraction (DocRE) and have claimed significant progress in DocRE. In this paper, we do not aim at proposing a novel model for DocRE. Instead, we take a closer look at the field to see if these performance gains are actually true. By taking a comprehensive literature review and a thorough examination of popular DocRE datasets, we find that these performance gains are achieved upon a strong or even untenable assumption in common: all named entities are perfectly localized, normalized, and typed in advance. Next, we construct four types of entity mention attacks to examine the robustness of typical DocRE models by behavioral probing. We also have a close check on model usability in a more realistic setting. Our findings reveal that most of current DocRE models are vulnerable to entity mention attacks and difficult to be deployed in real-world end-user NLP applications. Our study calls more attentions for future research to stop simplifying problem setups, and to model DocRE in the wild rather than in an unrealistic Utopian world. ## 1 Introduction Document-level relation extraction (DocRE), aiming at identifying semantic relations between a head entity and a tail entity in a document (Yao et al., 2019), plays an essential role in a variety of downstream applications, such as question answering (Xu et al., 2016) and knowledge base construction (Trisedya et al., 2019). Recently, there are two flourishing branches for DocRE. First, graph-based approaches consider entities (Velickovic et al., 2018; Nan et al., 2020), mentions (Christopoulou et al., 2019; Li et al., 2020) and sentences (Xu et al., 2021c) as nodes ∗ Corresponding author. Juan Guzmán (born Hans Gutmann Guster, also known as "Juanito", 28 October 1911 - 1982) was a German born Mexican photojournalist. He was known as a war photographer of the Spanish Civil War and later on his work with Mexican painters Frida Kahlo and Diego Rivera. Hans Gutmann was born in Cologne. In 1936 he joined the Spanish Civil War as a volunteer of the International Brigades. Gutmann later became a Spanish citizen and changed his name to Juan Guzmán. There are more than 1,300 photographs from the Spanish Civil War in the archive ![0_image_0.png](0_image_0.png) to construct a document-level graph and perform reasoning through some advanced neural graph techniques. Second, sequence-based approaches leverage BiLSTM (Huang et al., 2021; Li et al., 2021b) or Transformers (Tan et al., 2022; Zhong and Chen, 2021; Zhou and Chen, 2022) as encoders to learn document-level representations. However, all these models have one thing in common that they are based on **a strong or even untenable assumption** as shown in Figure 1: all entity mentions are (i) correctly localized; (ii) perfectly normalized; (iii) correctly typed.1 Then, the task of modeling DocRE is usually simplified as a pairwise **classification** problem. 1More illustrating examples can be found in Appx. 
§A Although these pairwise classification approaches have claimed significant progress in DocRE performance, we are still interested in taking a closer look at the field to see if this is actually true. In particular, many research papers have reported very decent leaderboard scores for the DocRE task. Does this mean the task of DocRE has been almost completely solved? Can the current approaches be widely used in real-world DocRE scenarios? To answer these questions, we first take a closer look at data annotations in commonly-used DocRE datasets to check the strong data assumption (§3). We focus specifically on the annotations of named entity recognition (NER) and normalization (*i.e.,* entity linking) in detecting relations. By answering three research questions (**RQ1-3**), we find that current problem setups for DocRE are greatly simplified and unrealistic. If the data assumption is too strict, it is not clear whether current DocRE models are robust in a variety of loose assumptions. Therefore, we construct four types of attacks regarding entity mention annotations to investigate the model robustness (§4, RQ4) using behavioral probing (Lasri et al., 2022; Chen et al., 2022). To further have a look at the limitations of data assumptions, it is important to investigate the usability of existing DocRE models in real-world scenarios. Hence, we examine the capability of widely-used NER systems and entity linking systems on preparing model input formats from raw text for DocRE model deployment (§5, RQ5). Finally, we discuss our empirical findings and call special attentions for future research in developing DocRE models (§6). In short, our contributions and findings are: - We present a comprehensive literature review on recent advances for DocRE and identify a strong or even untenable assumption in modeling DocRE. - We take a thorough examination of data annotation on three popular DocRE datasets. Detecting relations in text commonly involves multiple mentions and aliases of paired entities (*i.e.,* head and tail entities) which are currently assumed to be perfectly typed, localized and normalized before modeling DocRE. - We construct four types of entity mention attacks to check the robustness for typical DocRE models. Most of current DocRE mod- els are vulnerable to mention attacks (F1 drops from 7.93% to 85.51%). - We have a close check on the usability of typical DocRE models. Under the identified data assumption, current DocRE models are very difficult to be deployed in real-world end-user NLP applications because of the need of input preparation for each pipeline module (*i.e.,* the reproduction rate of input format is only from 34.3% to 58.1%). - We discuss our findings, and call attentions for future research to stop simplifying problem setups, and to model DocRE in the wild rather than in an unrealistic Utopian world. ## 2 A Quick Literature Review In this section, we have a quick literature review of DocRE models to shed light on a global review for recent evolutions. Table 1 summarizes recent studies in anti-chronological order. Graph-based Approaches. 
Graph-based approaches first construct a document-level homogeneous graph where words (Zhang et al., 2020), mentions (Christopoulou et al., 2019), entities (Zhou et al., 2020), sentences (Li et al., 2020; Xu et al., 2021a) or meta dependency paths (Nan et al., 2020) are considered as nodes and some semantic dependencies (*e.g.,* mention-mention (Christopoulou et al., 2019), mention-entity (Zeng et al., 2020), mention-sentence (Wang et al., 2020), entitysentence (Li et al., 2020), sentence-sentence (Wang et al., 2020; Xu et al., 2021b), sentence-document (Zeng et al., 2021)) as edges. One key advantage of these approaches is that some advanced graph techniques can be used to model inter- and intra-entity interactions and perform multi-hop reasoning. Sequence-based Approaches. Instead of introducing complex graph structures, some approaches typically model a document as a sequence of tokens and leverage BiLSTM (Huang et al., 2021; Li et al., 2021b) or Transformers (Tan et al., 2022) as encoder to capture the contextual semantics. In particular, some studies have already contributed effort to integrating entity structures (Xu et al., 2021c), concept view (Li et al., 2021a), deep probabilistic logic (Zhang et al., 2021b), U-shaped Network (Zhang et al., 2021a), relation-specific attentions (Yu et al., 2022), logic rules (Ru et al., 2021), augmenting intermediate steps (Xiao et al., 2022), sentences importance estimation (Xu et al., 2022), evidence extraction (Xie et al., 2022) and knowledge dis- | References | Venue | Claim | Performed | Annotation Assumption | Aggregation | | | |------------------------------|----------|------------|----------------|-------------------------|---------------|----|-------------| | Localization | Linking | Typing | | | | | | | (Zhang et al., 2022) | EMNLP22 | Extraction | Classification | ✓ | ✓ | ✗ | LogSumExp | | (Xie et al., 2022) | ACL22 | Extraction | Classification | ✓ | ✓ | ✗ | LogSumExp | | (Tan et al., 2022) | ACL22 | Extraction | Classification | ✓ | ✓ | ✗ | LogSumExp | | (Xiao et al., 2022) | NAACL22 | Extraction | Classification | ✓ | ✓ | ✓ | LogSumExp | | (Xu et al., 2022) | NAACL22 | Extraction | Classification | ✓ | ✓ | ✓ | Average | | (Yu et al., 2022) | NAACL22 | Extraction | Classification | ✓ | ✓ | ✗ | Average | | (Zeng et al., 2021) | ACL21 | Extraction | Classification | ✓ | ✓ | ✓ | Average | | (Li et al., 2021b) | ACL21 | Extraction | Classification | ✓ | ✓ | ✓ | Max-pooling | | (Xu et al., 2021b) | ACL21 | Extraction | Classification | ✓ | ✓ | ✓ | Average | | (Huang et al., 2021) | ACL21 | Extraction | Classification | ✓ | ✓ | ✗ | Average | | (Makino et al., 2021) | ACL21 | Extraction | Classification | ✓ | ✓ | ✓ | Max-pooling | | (Ru et al., 2021) | EMNLP21 | Extraction | Classification | ✓ | ✓ | ✓ | Average | | (Zhang et al., 2021b) | EMNLP21 | Extraction | Classification | ✓ | ✓ | ✓ | [CLS] | | (Zhang et al., 2021a) | IJCAI21 | Extraction | Classification | ✓ | ✓ | ✗ | LogSumExp | | (Xu et al., 2021c) | AAAI21 | Extraction | Classification | ✓ | ✓ | ✗ | Average | | (Xu et al., 2021a) | AAAI21 | Extraction | Classification | ✓ | ✓ | ✓ | Average | | (Li et al., 2021a) | AAAI21 | Extraction | Classification | ✓ | ✓ | ✓ | Average | | (Zhou et al., 2021) | AAAI21 | Extraction | Classification | ✓ | ✓ | ✗ | Average | | (Nan et al., 2020) | ACL20 | Extraction | Classification | ✓ | ✓ | ✗ | Average | | (Zeng et al., 2020) | EMNLP20 | Extraction | Classification | ✓ | ✓ | ✓ | Average | | (Wang et al., 2020) | EMNLP20 | Extraction | Classification 
| ✓ | ✓ | ✓ | Average | | (Tran et al., 2020) | EMNLP20 | Extraction | Classification | ✓ | ✓ | ✓ | Average | | (Li et al., 2020) | COLING20 | Extraction | Classification | ✓ | ✓ | ✓ | Average | | (Zhang et al., 2020) | COLING20 | Extraction | Classification | ✓ | ✓ | ✓ | Average | | (Zhou et al., 2020) | COLING20 | Extraction | Classification | ✓ | ✓ | ✓ | Average | | (Christopoulou et al., 2019) | EMNLP19 | Extraction | Classification | ✓ | ✓ | ✓ | Average | | (Jia et al., 2019) | NAACL19 | Extraction | Classification | ✓ | ✓ | ✗ | LogSumExp | tillation (Tan et al., 2022) into transformer-based neural models. In addition, some studies (Soares et al., 2019; Zhou et al., 2021; Zhong and Chen, 2021; Zhou and Chen, 2022; Zhang et al., 2022) already verified that inserting special symbols (*e.g.,* [entity] and [/entity]) before and after named entities can significantly benefit relation representation encoding. Observations from Literature Review. From Table 1, we have following key observations: (1) The listed studies claim that they address the problem of "document-level relation **extraction**" 2, but the relation **classification** is actually performed. (2) All graph-based approaches build homogeneous or heterogeneous graphs based on the unrealistic precondition that accurate annotations of entity localization, entity linking and entity typing are available. (3) Some pooling strategies (*e.g.,* Max, Average and LogSumExp) are widely used in modeling DocRE when aggregating representations of multiple mentions of an entity. However, it is unclear how the wrongly-detected mentions affect the ## Performance Of Docre Models. 3 Check On Dataset Annotations To provide in-depth observations of the data assumption in most of DocRE models, we first take a thorough examination of data annotations on three commonly-used DocRE datasets. We will conduct quantitative and qualitative studies to analyze entity mentions and entity aliases which a relation instance involved.3 ## 3.1 Probing Datasets The summary of datasets is shown in Table 2. NAinstance means that there is no relation between head and tail entities. Non-NA instance means that there is at least one relation between head and tail entities. Note that the mention statistics in this Section are based on Non-NA instances. DocRED (Yao et al., 2019) is a human-annotated dataset from Wikipedia and Wikidata. DocRED has 5,053 documents, 97 relation classes, 132,275 3Entity Mentions: The words in text that refer to an entity. Entity Aliases: Unique mentions of an entity. Relation Instance: A piece of text involving head and tail entities to be classified. ![3_image_0.png](3_image_0.png) Datasets #Doc. #Rel. **#Non-NA** ![3_image_1.png](3_image_1.png) Train 3,053 97 385,272 Dev 1,000 97 11,518 Test 1,000 97 - Train 500 2 1,055 Dev 500 2 1,025 Test 500 2 1,087 Train 23,353 2 36,079 Dev 5,839 2 8,762 Test 1,000 2 1,502 entities, and 56,354 relational facts in total. The average length of documents in DocRED is around 8 sentences. Following previous studies (Yao et al., 2019; Wang et al., 2019), we use the standard split of the dataset: 3,053 documents for training, 1,000 for development and 1,000 for test. CDR (Li et al., 2016) consists of three separate sets of articles with diseases, chemicals and their relations annotated. There are two relation labels: None and Chemical-Disease. There are total 1,500 articles and 500 each for the training, development and test sets. 
GDA (Wu et al., 2019) is a Gene-Disease Association dataset from MEDLINE abstracts: 29,192 articles for training and 1,000 for testing. Following previous studies (Christopoulou et al., 2019; Li et al., 2021b), we further split the original training set into two sets: 23,353 for training and 5,839 for development. There are two relation labels: None and Gene-Disease. ## 3.2 Data Observations And Findings We organize our findings by answering following research questions (RQs): (RQ1): **How many entity mentions are involved** ## In A Relation Instance In Commonly-Used Docre Datasets? We define that a relation instance to be classified is a piece of text containing head and tail entities. Thus, it is natural that the head or tail entity may have multiple mentions in the document. Figure 2(a), 2(b) and 2(c) show entity mention statistics in DocRED, CDR and GDA, respectively. The horizontal axis shows number of mentions of a relation instance. The vertical axis shows the percentages of relation instances in datasets. In the DocRED dataset, 59.2% of relation instances have more than two mentions. For CDR, 96% of relation instances have more than two mentions and 21% of relation instances have more than 10 mentions. For GDA, 98% of relation instances have more than two mentions and 50% of relation instances have more than 10 mentions. Our this finding reveals the huge difference between the sentence-level and document-level RE. That is, document-level RE involves much more entity mentions than sentencelevel RE because of the longer text in documentlevel RE. One strong (almost untenable) assumption of existing DocRE models is that all entity mentions of a relation instance are successfully identified. ## (Rq2): **How Many Aliases Does An Entity Have** In Commonly-Used Docre Datasets? RQ1 already showed that a relation instance may have multiple entity mentions. A follow-up question is about the number of unique mentions. Given that an entity can appear multiple times in a document, we define that the aliases of an entity are unique mentions. We are interesting in how many aliases an entity has. Figure 3 plots the distribution of number of entities to number of aliases on three commonly-used datasets. For DocRED, we can observe that most of entities have only one alias and 4,745 entities ![4_image_0.png](4_image_0.png) have more than one alias. The maximum number of aliases is 10. For CDR, 650 entities (account for 48.95%) have more than one alias and the maximum number of aliases is 29. For GDA, 5,927 entities (account for 62.73%) have more than one alias and the maximum number of aliases is 778. CDR and GDA have more diverse aliases than DocRED, because DocRED is constructed from Wikipedia while CDR and GDA are constructed from biomedical text. Linking diverse aliases of an entity to its identifier is a challenging task in a long document. Our findings identify the strong (almost untenable) assumption of existing DocRE models that all the aliases (*i.e.,* unique mentions) of an entity are successfully normalized (*i.e.,* linked to its unique identifier). ## (Rq3): **Do The Aliases Of An Entity Vary Widely** In Commonly-Used Docre Datasets? RQ2 already showed that an entity may have multiple aliases. For example, an entity in GDA has 778 unique aliases. In this Section, we investigate whether the aliases of an entity vary widely. Table 3 shows details of entity aliases ranked by numbers of aliases in the three datasets. 
For DocRED, the variation of aliases is slight because the genre of text is from formal articles. Although DocRED is manually annotated by human beings, there are still some annotation errors on entity linking. As shown in Table 3, the entity (Q180611, Azpeitia) is linked to many wrong aliases such as "United States" and "Chile". This observation confirms that document-level entity linking is a very challenging task. For biomedical datasets, chemical entities have a slight variation of aliases while gene and disease entities have a huge variation of aliases. In addition, there are many abbreviations for biomedical entities. Thus, effective NER and entity linking are key preconditions in modeling DocRE. We will further investigate the model usability with NER and entity linking in Section § 5. ## 4 Check On Model Robustness Most of existing DocRE models are proposed based on the strong assumptions of mention annotations as shown in Section § 3. In this Section, we are interested in the following research question: ## (Rq4): **Are Neural Docre Models Robust To** Entity Mention Attacks? To answer RQ4, we adopt **behavioral probing** (Lasri et al., 2022; Chen et al., 2022) to observe a model's behaviors by studying the model's predictions on attacking datasets. That is, attacks are only added at test time and are not available during model training. ## 4.1 Attacking Target Models We investigate three typical DocRE models: (1) BiLSTM-Sum (Yao et al., 2019) which uses BiLSTM to encode the document and computes the representation of an entity by summing the representations of all mentions. (2) **GAIN-Glove** (Zeng et al., 2020) which constructs a heterogeneous mentionlevel graph and an entity-level graph to capture document-aware features and uses GloVe (Pennington et al., 2014) as word embeddings. (3) **BERTMarker** (Zhou et al., 2021; Zhou and Chen, 2022) which takes BERT as the encoder and inserts special entity symbols before and after entities. More details of attacking target models can be found in Appx. § B.1. 
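The three target models consume the gold mention annotations in two ways: BERT-Marker inserts special symbols around each annotated span, and BiLSTM-Sum pools mention representations into a single entity representation (with LogSumExp as a common alternative in Table 1). A minimal sketch of both operations follows; the function names are illustrative assumptions, and the marker strings follow the [entity]/[/entity] convention mentioned in Section 2 rather than any specific released code.

```python
import torch

def insert_entity_markers(tokens, mentions, start_sym="[entity]", end_sym="[/entity]"):
    """Wrap every annotated mention span with marker symbols.

    mentions: list of (start, end) token offsets, end exclusive; the code assumes
    the gold spans are correct -- exactly the assumption probed by the attacks below.
    """
    out = []
    starts = {s for s, _ in mentions}
    ends = {e for _, e in mentions}
    for i, tok in enumerate(tokens):
        if i in starts:
            out.append(start_sym)
        out.append(tok)
        if i + 1 in ends:
            out.append(end_sym)
    return out

def entity_representation(mention_vectors, how="sum"):
    """Aggregate mention-level vectors (num_mentions x dim) into one entity vector."""
    stacked = torch.stack(mention_vectors)
    if how == "sum":                 # BiLSTM-Sum style aggregation
        return stacked.sum(dim=0)
    if how == "logsumexp":           # pooling used by several models in Table 1
        return torch.logsumexp(stacked, dim=0)
    raise ValueError(f"unknown aggregation: {how}")
```

The mention attacks introduced next perturb exactly the `mentions` list and the coreference grouping that this preprocessing depends on, which is why models relying on precise spans and linking degrade most under attack.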
## 4.2 Attack Construction In this work, we focus on entity mention attacks which add data perturbations by taking into ac- | Rank | IDs | #Aliases | Details of Entity Aliases DocRED Dataset | | | |-----------|----------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------|--------------------------------|---------| | 1 | Q180611 (LOC) | 10 | Azpeitia, Guipuzcoa, Cuba, Mexico, Azkoitia, Basque Country, United States, Argentina, Chile, Spain | | | | 2 | Q544565 (LOC) | 10 | Qu, Yuxi River, Jiuxi River, Zhuji River, Ni River, Eshan River, Liucun River, Huaxi River, Qu River, Zhou River | | | | 3 | Q3738980 | 8 | Toding, Tsanda, Tsada, Tholing, Zanda, Toling, Zada, Tuolin | | | | (LOC) | | | | | | | 4 | Q12274473 (MISC) | 8 | waz¯ırwala, Waziri, Maseedwola, Wazirwola, Dawarwola, Wazir, of the Wazirs, Waziri ¯ Pashto CDR Dataset | | | | 1 | D016572 (Chemical) | 7 | cyclosporine, cyclosporin, CsA, Cyclosporine, CyA, cyclosporin A, cyclosporine A | | | | 2 | D014635 (Chemical) | 7 | divalproex sodium, VPA, sodium valproate, Valproic acid, Valproate, valproic acid, valproate | | | | D007674 | renal damage, CAN, nephrotoxic, renal dysfunctio, renal injury, Nephrotoxicity, kidney | | | | | | 1 | (Disease) | 29 | diseases, liver or kidney disease, cardiac and renal lesions, glomerular injury, kidney damage, ...omit... | | | | D056486 | Hepatitis, drug-induced hepatitis, acute hepatitis-like illness, liver damage, hepatotoxicity, cholestatic hepatitis, hepatic damage, hepatocellular injury, Toxic hepatitis, | | | | | | 2 | (Disease) | 21 | Granulomatous hepatitis, ...omit... GDA Dataset | | | | 348 | apolipoprotein e4, | APOE*4, | ApoE2, | apolipoprotein gene E4 allele, | ApoE-4, | | 1 | (Gene) | 114 | apoE 4, apolipoprotein-E gene, Apolipoprotein E-epsilon4, factor–apolipoprotein E, Apolipoprotein (apo)E, ...omit... | | | | 7124 | tumor necrosis factor alpha, tumor necrosis factor beta, Interleukin-1 and tumor necrosis factor-alpha, tumor necrosis factor alpha, TNF-)a, Tumor Necrosis Factor, TNF308G/A, miR-21, IL6 and TNF, ...omit... | | | | | | 2 | (Gene) | 83 | | | | | D030342 | inherited defect of fatty acid oxidation, genetic haemochromatosis, inherited skin | | | | | | (Disease) | 778 | disorders, A-related disorders, autosomal-recessive pleiotropic disorder, autosomal | | | | | 1 | dominant juvenile ALS, ...omit... | | | | | | D009369 | mammary tumors, tumor suppressor genes, MSI-H cancers, rectal cancers, predominant in lung tumour, early-stage prostate cancer, Tumour-necrosis, Cervical cancer, | | | | | | (Disease) | 668 | | | | | | 2 | Malignant tumors, distal tumors, ...omit... | | | | | count different types of wrongly-detected mentions. The ultimate goal is to test the model robustness under different mention attacks. Therefore, we construct four types of attacks: (1) DrpAtt: we simply drop 50% of mentions of an entity if the entity has more than one mention. This attack is designed to simulate the case of missed detections in NER systems. (2) BryAtt: we slightly move the ground boundaries of 50% of mentions of an entity if the entity has more than one mention (*e.g.,* "JSpanish Civil WarKMISC in" is changed to "Spanish JCivil WarKMISC in"). 
(3) CorAtt: we intentionally make the coreference (*i.e.,* entity linking) of an entity wrong (*i.e.,* 50% of mentions of an entity are wrongly coreferential if the entity has more than one mention). (4) MixAtt: this attack is the mix of aforementioned three attacks. More attack details can be found in Appx. § B.2. ## 4.3 Attacking Results And Analysis Table 4 reports the performance on various entity mention attacks for three attacking target models. ## We Have The Following Observations: First, all target models are significantly affected by the four attacks, with relative F1 drops from 7.93% to 85.51%. Overall, GAIN-Glove and BERT-Marker are more vulnerable than BiLSTMSum. This is because BERT-Marker requires accurate mention positions for inserting entity markers and GAIN-Glove needs the information of mention positions and normalization for constructing heterogeneous graphs. More specifically, BERT-Marker averagely suffers drops of 44.42%, 71.58%, and 71.72% across all attacks on DocRED, CDR and GDA, respectively. BiLSTM-Sum averagely suffers drops of 23.22%, 35.40%, and 40.67% across all attacks on DocRED, CDR and GDA, respectively. Second, the MixAtt attack leads to more significant drops in performance for all attacking target models. CorAtt is more significant to impact robustness than BryAtt and DrpAtt. For instance, CorAtt leads to relative drops of 40.81%, 56.76% and 64.48% across three target models on DocRED, | Model | Attack | DocRED | CDR | GDA | | | | |-------------|-----------|----------|-------|--------|-------|--------|----| | F1% | ∆% | F1% | ∆% | F1% | ∆% | | | | No Attack | 49.32 | - | 53.67 | - | 75.87 | - | | | DrpAtt | 42.55 | -13.73 | 48.34 | -9.93 | 66.55 | -12.28 | | | BryAtt | 39.04 | -20.84 | 39.23 | -26.91 | 57.30 | -24.48 | | | CorAtt | 37.21 | -24.55 | 28.86 | -46.23 | 31.32 | -58.72 | | | MixAtt | 32.67 | -33.76 | 22.26 | -58.52 | 24.88 | -67.21 | | | BiLSTM-Sum | No Attack | 54.91 | - | 55.13 | - | 78.65 | - | | DrpAtt | 48.17 | -12.27 | 50.76 | -7.93 | 59.61 | -24.21 | | | BryAtt | 41.82 | -23.84 | 36.33 | -34.10 | 45.92 | -41.61 | | | CorAtt | 32.34 | -41.11 | 27.40 | -50.30 | 33.24 | -57.74 | | | MixAtt | 28.56 | -47.99 | 18.34 | -66.73 | 23.52 | -70.10 | | | GAIN-Glove | No Attack | 59.82 | - | 64.47 | - | 82.71 | - | | DrpAtt | 47.34 | -20.86 | 25.57 | -60.34 | 36.43 | -55.95 | | | BryAtt | 41.45 | -30.71 | 21.46 | -66.71 | 24.55 | -70.32 | | | CorAtt | 25.86 | -56.77 | 16.93 | -73.74 | 19.04 | -76.98 | | | MixAtt | 18.34 | -69.34 | 9.34 | -85.51 | 13.55 | -83.62 | | | BERT-Marker | | | | | | | | CDR and GDA, respectively. DrpAtt leads to relative drops of 15.62%, 26.07% and 30.81% across three target models on DocRED, CDR and GDA, respectively. Our empirical results clearly show that the information of entity coreference, boundary and position plays an important role in DocRE. Overall, based on the robustness evaluation in Table 4, we can answer RQ4: Most of neural DocRE models are far away from robustness to entity mention attacks. Therefore, it has some realistic significance to challenge current problem setups regarding data annotation assumptions in DocRE and to improve the robustness of DocRE models on entity mention attacks. ## 5 Check On Model Usability In this Section, we investigate this realistic situation: DocRE models are already trained and training data is unavailable. We want to extract same relations on unseen raw text using these models. The goal is to deploy the already-trained DocRE models in other NLP applications. 
## 5 Check on Model Usability

In this section, we investigate a realistic situation: DocRE models have already been trained, the training data is unavailable, and we want to extract the same relations from unseen raw text using these models. The goal is to deploy already-trained DocRE models in downstream NLP applications.

Here, we are interested in the following research question:

**(RQ5): Are existing DocRE models easily adopted in real-world DocRE scenarios?**

To answer RQ5, a necessary step is to check whether we can convert raw text into the format that DocRE models were trained on. This preprocessing procedure involves two crucial systems: Named Entity Recognition (NER) and Entity Linking.

## 5.1 Check on NER

Setups. Assume that DocRE models are already trained and the training sets are unavailable. We take the raw text of the development set of DocRED, and the test sets of CDR and GDA, as the unseen data. We use strict-match metrics (*i.e.,* both the entity boundary and the type must be correctly detected) to measure agreement between the annotations produced by our preprocessing and the existing ground-truth annotations.

NER Systems. For DocRED, we adopt three off-the-shelf NER systems: Flair (Akbik et al., 2019), spaCy,4 and Stanza (Qi et al., 2020). For CDR and GDA, we adopt three biomedical NER systems: HunFlair (Weber et al., 2021), Stanza biomedical models (Zhang et al., 2021c), and ScispaCy (Neumann et al., 2019). More details of the NER systems can be found in Appx. § B.3.

Results on NER. Table 5 reports the experimental results of the NER systems on the three datasets. For DocRED, Flair achieves the best performance with an F1 score of 63.47%. Although the genre of DocRED is formal text (*i.e.,* Wikipedia), state-of-the-art NER systems are still unable to achieve decent performance on it. HunFlair obtains the best performance on the biomedical datasets because it was trained on harmonized versions of 31 biomedical datasets.

Table 5: Strict-match results of off-the-shelf NER systems.

| Dataset | NER System | P | R | F1 |
|---|---|---|---|---|
| DocRED | Flair | 62.88 | 64.07 | 63.47 |
| | spaCy | 62.86 | 59.58 | 61.17 |
| | Stanza | 56.96 | 58.44 | 57.69 |
| CDR | HunFlair | 94.59 | 94.14 | 94.36 |
| | Stanza | 86.80 | 87.94 | 87.37 |
| | ScispaCy | 84.93 | 80.32 | 82.56 |
| GDA | HunFlair | 79.11 | 84.74 | 81.83 |
| | Stanza | 69.87 | 79.70 | 74.47 |
| | ScispaCy | 68.61 | 64.61 | 66.55 |

## 5.2 Check on Entity Linking

Setups. We examine the capability of entity linking systems to reproduce the ground-truth annotations for the development/test sets of DocRED, CDR, and GDA. We again use strict match as the metric: a linking prediction is regarded as correct only if all mentions of an entity are correctly linked to that entity.

Entity Linking Systems. Unlike NER systems, very few off-the-shelf linking systems are available. We choose TagMe (Ferragina and Scaiella, 2010) as the linker for DocRED, and ScispaCy (Neumann et al., 2019) for CDR and GDA. More details of the entity linking systems can be found in Appx. § B.4.

Table 6: Strict-match results of off-the-shelf entity linking systems.

| Dataset | Linking System | P | R | F1 |
|---|---|---|---|---|
| DocRED | TagMe, ρ=0.1 | 24.2 | 42.5 | 30.8 |
| | TagMe, ρ=0.2 | 35.0 | 38.6 | 36.7 |
| | TagMe, ρ=0.3 | 45.7 | 33.5 | 38.7 |
| | TagMe, ρ=0.4 | 52.4 | 27.8 | 36.4 |
| | TagMe, ρ=0.5 | 49.7 | 12.4 | 19.8 |
| CDR | ScispaCy, mesh | 42.4 | 60.6 | 49.9 |
| | ScispaCy, umls | 53.7 | 63.3 | 58.1 |
| GDA | ScispaCy, mesh | 31.5 | 28.4 | 29.8 |
| | ScispaCy, umls | 30.9 | 38.6 | 34.3 |
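Both checks above score system outputs by strict match. As a concrete reference, the following is a minimal sketch (not the exact evaluation script used here) of the mention-level strict match used for NER in §5.1; the entity-level variant used for linking in §5.2 additionally requires every mention of an entity to be linked to the correct identifier. The field names follow the DocRED-style annotation layout and are illustrative.

```python
def strict_match_prf(gold_docs, pred_docs):
    """Strict-match P/R/F1 over mentions: a prediction counts only if its
    (sentence id, token span, type) exactly matches a gold mention."""
    tp = n_gold = n_pred = 0
    for gold, pred in zip(gold_docs, pred_docs):
        gold_set = {(m["sent_id"], tuple(m["pos"]), m["type"]) for m in gold}
        pred_set = {(m["sent_id"], tuple(m["pos"]), m["type"]) for m in pred}
        tp += len(gold_set & pred_set)
        n_gold += len(gold_set)
        n_pred += len(pred_set)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```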
Results on Entity Linking. Table 6 reports the experimental results of the entity linking systems on the three datasets. For TagMe, precision increases gradually as the confidence score ρ grows, while recall decreases. The best F1 on DocRED is only 38.7%, obtained with a confidence score of 0.3. ScispaCy achieves F1 scores of 58.1% and 34.3% using the umls knowledge base for CDR and GDA, respectively. A key observation from Table 6 is that document-level entity linking is a challenging task, and existing linking systems commonly perform poorly on it.

Based on the empirical results of Sections 5.1 and 5.2, we can answer RQ5: most existing DocRE models are difficult to adopt in real-world DocRE scenarios, due to the input preparation required for each pipeline module and the accumulation of errors from NER and entity linking systems.

## 6 Discussion

Let's Stop Simplifying Problem Setups. As summarized in Table 1, recent advances from the past four years have claimed significant progress in DocRE performance. However, our study shows that the claimed improvements rest on a strong, or even untenable, assumption that all entities are perfectly typed, localized, and normalized. Therefore, high F1 scores on leaderboards do not mean that the task of DocRE has been solved. Based on our findings (§4 and §5), the simplified problem setups do not cover realistic scenarios. Even worse, the simplification significantly hurts the usability of DocRE models in real-world end-user NLP applications. We call on the community to address the real DocRE problem under the open-world assumption, rather than to push the boundaries of simplified benchmarks in order to top leaderboards.

Let's Model DocRE in the Wild. As shown in Section 5, it is very difficult to produce data in the exact format that existing DocRE models were trained on. Thus, given a new document, we are still unable to easily deploy existing trained DocRE models to extract the same types of relations, let alone unseen relations. Recently, some studies (Cabot and Navigli, 2021; Eberts and Ulges, 2021; Giorgi et al., 2022) have started exploring the direction of jointly extracting entities and relations at the document level. However, their end-to-end performance at the document level is much worse than the corresponding performance at the sentence level. Our empirical findings call for more attention to developing high-performance end-to-end DocRE models and to modeling DocRE in the wild, rather than in an unrealistic Utopian world.

## 7 Conclusion

In this paper, we ask whether the performance gains claimed by recent DocRE models are real. We conducted a comprehensive literature review of DocRE models and a thorough examination of popular DocRE datasets. We investigated model robustness under four types of mention attacks and model usability under a more realistic setting. Our findings call for future efforts on modeling DocRE in the wild.

## Limitations

We have discussed the implications of our research in Section 6. In this section, we further discuss the threats to the validity of our study.

- **Threats to Internal Validity**: The main internal threat to the validity of our research comes from **(RQ3)**, where we present a qualitative study on the variation of aliases. We are unable to cover all cases in the qualitative study. For example, the entity D030342 (Disease) in Table 3 has 778 unique aliases, and it is impossible to show all of them to readers. To help mitigate this threat, we show as many examples as possible in the limited space.
- **Threats to External Validity**: The main threat to external validity arises from the potential bias in the selection of experimental datasets, attacking target models, and off-the-shelf NER and Entity Linking tools. To mitigate this threat, we experiment with multiple datasets, models, and tools. For experimental datasets, we choose the three most popular DocRE datasets (*i.e.,* DocRED, CDR, and GDA), which we believe are broadly representative in this research community. For attacking target models, we choose three typical models ranging from non-contextualized sequence-based and graph-based models to contextualized Transformer models. For off-the-shelf NER/Linking tools, we comprehensively investigate five state-of-the-art NER taggers and two entity linkers.

## Ethical Considerations

As the goal of this study is to challenge current problem setups of DocRE, we heavily rely upon existing well-known datasets, models, and NLP tools. We only claim that our findings may hold on similar datasets or domains, and we acknowledge the risk that our findings may not generalize to other privacy-sensitive datasets or specific domains. In general, we suggest that practitioners repeat all experiments following our procedures when using other corpora.

## References

Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. FLAIR: An easy-to-use framework for state-of-the-art NLP. In *The Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT demo)*, pages 54–59.

Pere-Lluís Huguet Cabot and Roberto Navigli. 2021. REBEL: relation extraction by end-to-end language generation. In Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2370–2381.

Howard Chen, Jacqueline He, Karthik Narasimhan, and Danqi Chen. 2022. Can rationalization improve robustness? In The Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 3792–3805.

Fenia Christopoulou, Makoto Miwa, and Sophia Ananiadou. 2019. Connecting the dots: Document-level neural relation extraction with edge-oriented graphs. In The Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4924–4935.

Markus Eberts and Adrian Ulges. 2021. An end-to-end model for entity-level relation extraction using multi-instance learning. In *The 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL)*, pages 3650–3660.

Paolo Ferragina and Ugo Scaiella. 2010. TAGME: on-the-fly annotation of short text fragments (by wikipedia entities). In *The 19th ACM Conference on Information and Knowledge Management (CIKM)*, pages 1625–1628.

John Giorgi, Gary D. Bader, and Bo Wang. 2022. A sequence-to-sequence approach for document-level relation extraction. In *The 21st Workshop on Biomedical Language Processing (BioNLP)*, pages 10–25.

Quzhe Huang, Shengqi Zhu, Yansong Feng, Yuan Ye, Yuxuan Lai, and Dongyan Zhao. 2021. Three sentences are all you need: Local path enhanced document relation extraction. In The 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL/IJCNLP), pages 998–1004.

Robin Jia, Cliff Wong, and Hoifung Poon. 2019. Document-level n-ary relation extraction with multiscale representation learning.
In *The Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 3693–3704. Karim Lasri, Tiago Pimentel, Alessandro Lenci, Thierry Poibeau, and Ryan Cotterell. 2022. Probing for the usage of grammatical number. In The 60th Annual Meeting of the Association for Computational Linguistics (ACL), pages 8818–8831. Bo Li, Wei Ye, Canming Huang, and Shikun Zhang. 2021a. Multi-view inference for relation extraction with uncertain knowledge. In *The Thirty-Fifth AAAI* Conference on Artificial Intelligence (AAAI), pages 13234–13242. Bo Li, Wei Ye, Zhonghao Sheng, Rui Xie, Xiangyu Xi, and Shikun Zhang. 2020. Graph enhanced dual attention network for document-level relation extraction. In *The 28th International Conference on Computational Linguistics (COLING)*, pages 1551–1560. Jiao Li, Yueping Sun, Robin J. Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J. Mattingly, Thomas C. Wiegers, and Zhiyong Lu. 2016. Biocreative V CDR task corpus: a resource for chemical disease relation extraction. Database J. Biol. Databases Curation. Jingye Li, Kang Xu, Fei Li, Hao Fei, Yafeng Ren, and Donghong Ji. 2021b. MRN: A locally and globally mention-based reasoning network for document-level relation extraction. In The 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL/IJCNLP), pages 1359– 1370. Kohei Makino, Makoto Miwa, and Yutaka Sasaki. 2021. A neural edge-editing approach for documentlevel relation graph extraction. In Findings of The 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL/IJCNLP), pages 2653–2662. Guoshun Nan, Zhijiang Guo, Ivan Sekulic, and Wei Lu. 2020. Reasoning with latent structure refinement for document-level relation extraction. In The 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1546–1557. Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. Scispacy: Fast and robust models for biomedical natural language processing. In The 18th BioNLP Workshop and Shared Task (BioNLP@ACL), pages 319–327. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In *The Conference on Empirical* Methods in Natural Language Processing (EMNLP), pages 1532–1543. Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. In *The 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations (ACL demo)*, pages 101–108. Dongyu Ru, Changzhi Sun, Jiangtao Feng, Lin Qiu, Hao Zhou, Weinan Zhang, Yong Yu, and Lei Li. 2021. Learning logic rules for document-level relation extraction. In The Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1239–1250. Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In The 57th Conference of the Association for Computational Linguistics (ACL), pages 2895–2905. Qingyu Tan, Ruidan He, Lidong Bing, and Hwee Tou Ng. 2022. Document-level relation extraction with adaptive focal loss and knowledge distillation. In Findings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1672–1681. 
Hieu Minh Tran, Trung Minh Nguyen, and Thien Huu Nguyen. 2020. The dots have their values: Exploiting the node-edge connections in graph-based neural models for document-level relation extraction. In The Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4561–4567. Bayu Distiawan Trisedya, Gerhard Weikum, Jianzhong Qi, and Rui Zhang. 2019. Neural relation extraction for knowledge base enrichment. In The 57th Conference of the Association for Computational Linguistics (ACL), pages 229–240. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In The 6th International Conference on Learning Representations (ICLR). Difeng Wang, Wei Hu, Ermei Cao, and Weijian Sun. 2020. Global-to-local neural networks for documentlevel relation extraction. In The Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3711–3721. Hong Wang, Christfried Focke, Rob Sylvester, Nilesh Mishra, and William Yang Wang. 2019. Finetune bert for docred with two-step process. *arXiv*, abs/1909.11898. Leon Weber, Mario Sänger, Jannes Münchmeyer, Maryam Habibi, Ulf Leser, and Alan Akbik. 2021. Hunflair: an easy-to-use tool for state-of-the-art biomedical named entity recognition. *Bioinform.*, 37(17):2792–2794. Ye Wu, Ruibang Luo, Henry C. M. Leung, Hing-Fung Ting, and Tak Wah Lam. 2019. RENET: A deep learning approach for extracting gene-disease associations from literature. In *The 23rd Annual International Conference Research in Computational Molecular Biology (RECOMB)*, volume 11467, pages 272– 284. Yuxin Xiao, Zecheng Zhang, Yuning Mao, Carl Yang, and Jiawei Han. 2022. SAIS: supervising and augmenting intermediate steps for document-level relation extraction. In The Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Yiqing Xie, Jiaming Shen, Sha Li, Yuning Mao, and Jiawei Han. 2022. Eider: Empowering document-level relation extraction with efficient evidence extraction and inference-stage fusion. In *Findings of the 60th* Annual Meeting of the Association for Computational Linguistics (ACL), pages 257–268. Benfeng Xu, Quan Wang, Yajuan Lyu, Yong Zhu, and Zhendong Mao. 2021a. Entity structure within and throughout: Modeling mention dependencies for document-level relation extraction. In *The ThirtyFifth AAAI Conference on Artificial Intelligence* (AAAI), pages 14149–14157. Kun Xu, Siva Reddy, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2016. Question answering on freebase via relation extraction and textual evidence. In *The 54th Annual Meeting of the Association for* Computational Linguistics (ACL). Wang Xu, Kehai Chen, Lili Mou, and Tiejun Zhao. 2022. Document-level relation extraction with sentences importance estimation and focusing. In The Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2920– 2929. Wang Xu, Kehai Chen, and Tiejun Zhao. 2021b. Discriminative reasoning for document-level relation extraction. In The 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL/IJCNLP), pages 1653–1663. Wang Xu, Kehai Chen, and Tiejun Zhao. 2021c. Document-level relation extraction with reconstruction. In *The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI)*, pages 14167–14175. 
Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019. Docred: A large-scale document-level relation extraction dataset. In The 57th Conference of the Association for Computational Linguistics (ACL), pages 764–777.

Jiaxin Yu, Deqing Yang, and Shuyu Tian. 2022. Relation-specific attentions over entity mentions for enhanced document-level relation extraction. In The Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT).

Shuang Zeng, Yuting Wu, and Baobao Chang. 2021. SIRE: separate intra- and inter-sentential reasoning for document-level relation extraction. In The 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL/IJCNLP), pages 524–534.

Shuang Zeng, Runxin Xu, Baobao Chang, and Lei Li. 2020. Double graph based reasoning for document-level relation extraction. In *The Conference on Empirical Methods in Natural Language Processing* (EMNLP), pages 1630–1640.

Liang Zhang, Jinsong Su, Yidong Chen, Zhongjian Miao, Zijun Min, Qingguo Hu, and Xiaodong Shi. 2022. Towards better document-level relation extraction via iterative inference. In Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8306–8317.

Ningyu Zhang, Xiang Chen, Xin Xie, Shumin Deng, Chuanqi Tan, Mosha Chen, Fei Huang, Luo Si, and Huajun Chen. 2021a. Document-level relation extraction as semantic segmentation. In *The Thirtieth International Joint Conference on Artificial Intelligence (IJCAI)*, pages 3999–4006.

Sheng Zhang, Cliff Wong, Naoto Usuyama, Sarthak Jain, Tristan Naumann, and Hoifung Poon. 2021b. Modular self-supervision for document-level relation extraction. In The Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5291–5302.

Yuhao Zhang, Yuhui Zhang, Peng Qi, Christopher D. Manning, and Curtis P. Langlotz. 2021c. Biomedical and clinical english model packages in the stanza python NLP library. *Journal of the American Medical Informatics Association*, 28(9):1892–1899.

Zhenyu Zhang, Bowen Yu, Xiaobo Shu, Tingwen Liu, Hengzhu Tang, Yubin Wang, and Li Guo. 2020. Document-level relation extraction with dual-tier heterogeneous graph. In *The 28th International Conference on Computational Linguistics (COLING)*, pages 1630–1641.

Zexuan Zhong and Danqi Chen. 2021. A frustratingly easy approach for entity and relation extraction. In The 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 50–61.

Huiwei Zhou, Yibin Xu, Weihong Yao, Zhe Liu, Chengkun Lang, and Haibin Jiang. 2020. Global context-enhanced graph convolutional networks for document-level relation extraction. In *The 28th International Conference on Computational Linguistics* (COLING), pages 5259–5270.

Wenxuan Zhou and Muhao Chen. 2022. An improved baseline for sentence-level relation extraction. In The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (AACL-IJCNLP), pages 161–168.

Wenxuan Zhou, Kevin Huang, Tengyu Ma, and Jing Huang. 2021. Document-level relation extraction with adaptive thresholding and localized context pooling. In *The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI)*, pages 14612–14620.
## Appendix

## A Additional Examples of Data Annotations

Our study identifies a strong, or even untenable, assumption in DocRE. To give a more intuitive sense, Figure 4 shows additional examples of this data annotation assumption in the three popular DocRE datasets. Specifically, the eight entity mentions (i.e., Tlaxcalan Range, Matlalcueitl, [Lady of the] Blue Skirt, Malintzin, Sierra de Tlaxcala, Malinche, La Malinche, Matlalcueye) are annotated with types and positions and then linked to a unique identifier in the DocRED corpus. The ten entity mentions (*mania, bipolar II, bipolar I, bipolar depression, hypomanic, hypomania, DSM-IV bipolar I, bipolar, manic, bipolar disorder*) are typed, localized, and normalized in the CDR corpus. The eight entity mentions (deltaEF1, zfhx1a, zfhep, AREB6, Nil-2-a, BZP, TCF8, ZEB1) are typed, localized, and normalized in the GDA corpus. Most existing DocRE models are developed under the assumption that all entity mentions are perfectly typed, localized, and normalized.

## B More Experimental Details

## B.1 Attacking Target Models

BiLSTM-Sum. BiLSTM-Sum (Yao et al., 2019) uses a bidirectional LSTM to encode documents and computes the representation of an entity by summing the representations of all its mentions. The embeddings from glove.840B.300d5 are used to initialize model vocabularies for DocRED, CDR, and GDA. All word embeddings and model parameters are learnable during training. Hyperparameters are tuned on the development set of each dataset respectively.

GAIN-Glove. GAIN-Glove (Zeng et al., 2020) constructs a heterogeneous mention-level graph to model complex interactions among different mentions across the document. A path reasoning mechanism is then proposed to infer relations between entities based on another constructed entity-level graph. We implement GAIN-Glove with 2 GCN layers and a dropout rate of 0.6 based on the released code.6 The embeddings from glove.840B.300d7 are used for DocRED, CDR, and GDA.

BERT-Marker. BERT-Marker (Zhou et al., 2021; Zhong and Chen, 2021; Zhou and Chen, 2022) first inserts special entity symbols (*i.e.,* [ent] and [/ent]) before and after entities, then encodes the whole document using pre-trained BERT. The representation of the [CLS] token is used for classification. In particular, we use the checkpoint bert-base-uncased8 for DocRED, and the checkpoint allenai/scibert_scivocab_uncased9 for CDR and GDA.

5 https://nlp.stanford.edu/projects/glove/
6 https://github.com/DreamInvoker/GAIN
7 https://nlp.stanford.edu/projects/glove/
8 https://huggingface.co/

All attacking target models are implemented with PyTorch10 and Accelerate11, and trained on one DGX machine equipped with 80 Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz processor cores in total, 400 GB of RAM, and 8 NVIDIA Tesla V100-32GB GPUs.

## B.2 Attack Details

In total, we construct four types of attacks, *i.e.,* DrpAtt, BryAtt, CorAtt, and MixAtt, to check the robustness of the attacking target models.

DrpAtt. Missing some entities is a very common phenomenon for most NER systems. DrpAtt is constructed to investigate the effect of missed mentions. If an entity has more than one mention, we simply drop 50% of the mentions of the entity.

BryAtt. Some entities are complex and nested in natural language, and detecting boundaries precisely is not a trivial task. BryAtt is constructed to investigate the effect of wrongly-detected entity boundaries. If an entity has more than one mention, we slightly move the ground-truth boundaries of 50% of the mentions of the entity.

CorAtt.
Document-level coreference resolution is a challenging task in DocRE, and most existing DocRE models are developed on benchmark datasets where entity coreference is manually annotated. CorAtt is constructed to investigate the effect of wrongly-coreferential mentions: we intentionally corrupt the coreference (*i.e.,* entity linking) results of an entity, making 50% of the mentions of an entity wrongly coreferential if the entity has more than one mention.

MixAtt. This type of attack is a mix of the three attacks above.

## B.3 NER Systems

In Section 5.1, we adopt five off-the-shelf NER systems in our experiments.

Flair. Flair12 is a very simple framework for state-of-the-art NLP, developed by Humboldt University of Berlin and friends. We use the ner-english-ontonotes-large13 model for DocRED.

spaCy. spaCy14 is a library for advanced Natural Language Processing in Python and Cython. We use the en_core_web_trf15 model for DocRED.

Stanza. Stanza16 is a collection of accurate and efficient tools for the linguistic analysis of many human languages, developed by the Stanford NLP Group. Both general-domain and biomedical & clinical models are available in Stanza. We use the ontonotes17 model for DocRED, bc5cdr18 for CDR, and bc5cdr and bionlp13cg for GDA.

HunFlair. HunFlair19 is a state-of-the-art NER tagger for biomedical texts. It contains harmonized versions of 31 biomedical NER datasets. We use hunflair-chemical and hunflair-disease for CDR, and hunflair-gene and hunflair-disease for GDA.20

ScispaCy. ScispaCy21 is a Python package containing spaCy models for processing biomedical, scientific, or clinical text. We use en_ner_bc5cdr_md for CDR, and en_ner_bc5cdr_md and en_ner_bionlp13cg_md for GDA.22

## B.4 Entity Linking Systems

TagMe. TagMe is an entity linking tool that identifies meaningful short phrases (spots) in an unstructured text and links each of them to a pertinent Wikipedia page in an efficient and effective way. We use the official Python TagMe API wrapper24 for DocRED. The confidence score ρ (annotations below this threshold are discarded) is varied among [0.1, 0.2, 0.3, 0.4, 0.5].

Entity Linker in ScispaCy. The Entity Linker in ScispaCy25 is a spaCy component which performs linking to a knowledge base. The linker simply performs a string-overlap-based search (char-3grams) on named entities, comparing them with the concepts in a knowledge base using an approximate nearest-neighbour search. For the CDR and GDA datasets, we explore the following two knowledge bases:

- umls: links to the Unified Medical Language System, levels 0, 1, 2, and 9. This has 3 million concepts.
- mesh: links to the Medical Subject Headings. This contains a smaller set of higher-quality entities, which are used for indexing in PubMed. MeSH contains 30k entities.

## C License

DocRED is released under the MIT License. GDA is released under the GNU Affero General Public License. GAIN is released under the MIT License. Flair is released under the MIT License. spaCy is released under the MIT License. Stanza is licensed under the Apache License 2.0. HunFlair is licensed under the MIT License. ScispaCy is licensed under the Apache License 2.0. TagMe is licensed under the Apache License 2.0. PyTorch is released with Copyright (c) 2016 - Facebook, Inc (Adam Paszke). Huggingface Transformers models are released under the Apache License 2.0. All the scientific artifacts are used consistently with their intended uses.

## ACL 2023 Responsible NLP Checklist

## A For Every Submission:

✓ A1. Did you describe the limitations of your work? Limitations on Page 9.

✓ A2. Did you discuss any potential risks of your work?
Ethical Considerations on Page 9.

✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract on Page 1. Introduction (Section 1) on Page 2.

✗ A4. Have you used AI writing assistants when working on this paper? No AI writing assistant used.

## B ✓ Did you use or create scientific artifacts?

Section 4.1, Section 5.1, Section 5.2.

✓ B1. Did you cite the creators of artifacts you used? Section 4.1, Section 5.1, Section 5.2, Appx. B3, Appx. B4.

✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appx. C.

✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appx. C.

✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Experiments are conducted on well-known benchmarks and follow previous studies.

✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3.1 on Page 3.

✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Table 2 on Page 3.

## C ✓ Did you run computational experiments?

Section 4.3.

✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appx. B on Page 12.

✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appx. B on Page 12.

✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Appx. B on Page 12.

✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appx. B on Page 12.

## D ✗ Did you use human annotators (e.g., crowdworkers) or research with human participants?

Left blank.

D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank.

D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank.

D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank.

D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
sung-etal-2023-optimizing
Optimizing Test-Time Query Representations for Dense Retrieval
https://aclanthology.org/2023.findings-acl.354
Recent developments of dense retrieval rely on quality representations of queries and contexts from pre-trained query and context encoders. In this paper, we introduce TOUR (Test-Time Optimization of Query Representations), which further optimizes instance-level query representations guided by signals from test-time retrieval results. We leverage a cross-encoder re-ranker to provide fine-grained pseudo labels over retrieval results and iteratively optimize query representations with gradient descent. Our theoretical analysis reveals that TOUR can be viewed as a generalization of the classical Rocchio algorithm for pseudo relevance feedback, and we present two variants that leverage pseudo-labels as hard binary or soft continuous labels. We first apply TOUR on phrase retrieval with our proposed phrase re-ranker, and also evaluate its effectiveness on passage retrieval with an off-the-shelf re-ranker. TOUR greatly improves end-to-end open-domain question answering accuracy, as well as passage retrieval performance. TOUR also consistently improves direct re-ranking by up to 2.0% while running 1.3–2.4x faster with an efficient implementation.
# Optimizing Test-Time Query Representations For Dense Retrieval Mujeen Sung1 Jungsoo Park1 Jaewoo Kang1 Danqi Chen2 **Jinhyuk Lee**3∗ Korea University1 Princeton University2 Google Research3 {mujeensung,jungsoo_park,kangj}@korea.ac.kr [email protected] [email protected] ## Abstract Recent developments of dense retrieval rely on quality representations of queries and contexts from pre-trained query and context encoders. In this paper, we introduce TOUR (TestTime Optimization of Query Representations), which further optimizes *instance-level* query representations guided by signals from testtime retrieval results. We leverage a crossencoder re-ranker to provide fine-grained pseudo labels over retrieval results and iteratively optimize query representations with gradient descent. Our theoretical analysis reveals that TOUR can be viewed as a generalization of the classical Rocchio algorithm for pseudo relevance feedback, and we present two variants that leverage pseudo-labels as hard binary or soft continuous labels. We first apply TOUR on phrase retrieval with our proposed phrase re-ranker, and also evaluate its effectiveness on passage retrieval with an off-the-shelf reranker. TOUR greatly improves end-to-end open-domain question answering accuracy, as well as passage retrieval performance. TOUR also consistently improves direct re-ranking by up to 2.0% while running 1.3–2.4× faster with an efficient implementation.1 ## 1 Introduction Recent progress in pre-trained language models gave birth to dense retrieval, which typically learns dense representations of queries and contexts in a contrastive learning framework. By overcoming the term mismatch problem, dense retrieval has been shown to be more effective than sparse retrieval in open-domain question answering (QA) (Lee et al., 2019; Karpukhin et al., 2020; Lee et al., 2021a) and information retrieval (Khattab and Zaharia, 2020; Xiong et al., 2020). Dense retrieval often uses a dual encoder architecture, which enables the pre-computation of con- ∗Work partly done while visiting Princeton University. 1Our code is available at https://github.com/ dmis-lab/TouR. ![0_image_0.png](0_image_0.png) Figure 1: An overview of test-time optimization of query representations (TOUR). Given the initial representation of a test query q0, TOUR iteratively optimizes its representation (e.g., q0 → q1 → q2 → q3) based on top-k retrieval results. The figure shows how each query vector retrieves new context vectors and updates its representation to find the gold answer (e.g., 1983). Our cross-encoder re-ranker provides a relevance score for each top retrieval result making the query representation closer to the final answer. text representations while the query representations are directly computed from the trained encoder during inference. However, directly using trained query encoders often fails to retrieve the relevant context (Thakur et al., 2021; Sciavolino et al., 2021) as many test queries are unseen during training. In this paper, we introduce TOUR, which further optimizes instance-level query representations at test time for dense retrieval. Specifically, we treat each test query as a single data point and iteratively optimize its representation. This resembles the query-side fine-tuning proposed for phrase retrieval (Lee et al., 2021a), which finetunes the query encoder over *training* queries in a new domain. Instead, we fine-tune query representations for each *test* query. 
Cross-encoders are known to exhibit better generalization ability in unseen distributions compared to dual encoders (Rosa et al., 2022). Accordingly, we leverage cross-encoder re-rankers (Nogueira and Cho, 2019; Fajcik et al., 2021) to provide *pseudo relevance labels* on intermediate retrieval results and then iteratively optimize query representations using gradient descent. For phrase retrieval, we also develop a cross-encoder phrase re-ranker, which has not been explored in previous studies.

We theoretically show that our framework can be viewed as a generalized version of the Rocchio algorithm for pseudo relevance feedback (PRF; Rocchio, 1971), which is commonly used in information retrieval to improve query representations. While most PRF techniques assume that the top-ranked results are equally pseudo-relevant, our method dynamically labels the top results and updates the query representations accordingly. We leverage our pseudo labels as either hard binary or soft continuous labels in two instantiations of our method, respectively. Lastly, to reduce computational overhead, we present an efficient implementation of TOUR, which significantly improves its runtime efficiency.

We apply TOUR on phrase (Lee et al., 2021a) and passage retrieval (Karpukhin et al., 2020) for open-domain QA. Experiments show that TOUR consistently improves performance in both tasks, even when the query distribution changes greatly. Specifically, TOUR improves the end-to-end open-domain QA accuracy by up to 10.7%, while also improving the accuracy of top-20 passage retrieval by up to 8.3% compared to baseline retrievers. TOUR requires only a handful of top-k candidates to perform well, which enables TOUR to run up to 1.3–2.4× faster than the direct application of a re-ranker with our efficient implementation, while consistently improving the performance by up to 2.0%. The ablation study further shows the effectiveness of each component, highlighting the importance of fine-grained relevance signals.

## 2 Background

## 2.1 Dense Retrieval

Dense retrieval typically uses query and context encoders—Eq(·) and Ec(·)—for representing queries and contexts, respectively (Lee et al., 2019; Karpukhin et al., 2020). In this work, we focus on improving phrase or passage retrievers for open-domain QA. The similarity of a query q and a context c is computed based on the inner product between their dense representations:

$$\mathrm{sim}(q,c)=E_{q}(q)^{\top}E_{c}(c)=\mathbf{q}^{\top}\mathbf{c}.\tag{1}$$

Dense retrievers often use the contrastive learning framework to train the encoders Eq and Ec. After training the encoders, top-k results are retrieved from a set of contexts C:

$$\mathcal{C}_{1:k}^{q}=[c_{1},\ldots,c_{k}]=\operatorname{top-}k_{c\in\mathcal{C}}\,\mathrm{sim}(q,c),\tag{2}$$

where the top-k operator returns a list of contexts sorted by their similarity score sim(q, c) in descending order, i.e., sim(q, c1) ≥ · · · ≥ sim(q, ck). Dense retrievers aim to maximize the probability that a relevant context c∗ exists (or is highly ranked) in the top results.

## 2.2 Query-Side Fine-Tuning

After training the query and context encoders, the context representations {c | c ∈ C} are typically pre-computed for efficient retrieval, while the query representations q are directly computed from the query encoder during inference.
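To make Eq. (1)–(2) concrete, a brute-force sketch of maximum inner product retrieval over pre-computed context vectors is shown below; this is an illustration only (names are our own), and large-scale systems typically replace the exhaustive scoring with an approximate nearest-neighbor index such as FAISS.

```python
import torch

def retrieve_top_k(q: torch.Tensor, C: torch.Tensor, k: int = 10):
    """q: [d] query vector from E_q; C: [N, d] pre-computed context vectors from E_c.
    Returns the top-k context indices and their inner-product scores (Eq. 1-2)."""
    scores = C @ q                      # sim(q, c) = q^T c for every context
    top_scores, top_ids = torch.topk(scores, k)
    return top_ids, top_scores
```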
However, using the dense representations of queries as-is often fails to retrieve the relevant context, especially when the test query distribution is different from the one seen during training. To mitigate the problem, Lee et al. (2021a) propose to fine-tune the query encoder on the retrieval results of training queries {q | q ∈ Qtrain} over the entire corpus C. For phrase retrieval (i.e., c denotes a phrase), they maximize the marginal likelihood of relevant phrases in the top-k results:

$$\mathcal{L}_{\mathrm{query}}=-\sum_{q\in\mathcal{Q}_{\mathrm{train}}}\log\sum_{c\in\mathcal{C}_{1:k}^{q},\,c=c^{*}}P_{k}(c|q),\tag{3}$$

where $P_{k}(c|q)=\frac{\exp(\mathrm{sim}(q,c))}{\sum_{i=1}^{k}\exp(\mathrm{sim}(q,c_{i}))}$ and $c=c^{*}$ checks whether each context matches the gold context c∗ or not. Note that c∗ is always given for training queries. The query-side fine-tuning significantly improves performance and provides a means of efficient transfer learning when there is a query distribution shift.

In this work, in contrast to training on the entire set of training queries as in Eq. (3), we treat each test query q ∈ Qtest as a single data point to train on and optimize instance-level query representations at test time. This is also in contrast to distillation-based passage retrievers (Izacard and Grave, 2020; Ren et al., 2021), which fine-tune the parameters of the retrievers directly on all training data by leveraging signals from cross-encoders.

## 2.3 Pseudo Relevance Feedback

Pseudo relevance feedback (PRF) techniques in information retrieval (Rocchio, 1971; Lavrenko and Croft, 2001) share a similar motivation to ours in that they refine query representations for a single test query. Unlike true relevance feedback provided by users (Baumgärtner et al., 2022), PRF relies on heuristic or model-based relevance feedback, which can be easily automated. Although most previous work uses PRF for sparse retrieval (Croft et al., 2010; Zamani et al., 2018; Li et al., 2018; Mao et al., 2021), recent work has begun to apply PRF to dense retrieval (Yu et al., 2021; Wang et al., 2021; Li et al., 2021).

PRF aims to improve the quality of retrieval by updating the initial query representation from the query encoder (i.e., Eq(q) = q0):

$$\mathbf{q}_{t+1}\leftarrow g(\mathbf{q}_{t},\mathcal{C}_{1:k}^{q_{t}}),\tag{4}$$

where g is an update function and qt denotes the query representation after the t-th update of q0. The classical Rocchio algorithm for PRF (Rocchio, 1971) updates the query representation as:

$$g(\mathbf{q}_{t},\mathcal{C}_{1:k}^{q_{t}})=\alpha\mathbf{q}_{t}+\beta\frac{1}{|\mathcal{C}_{r}|}\sum_{c_{r}\in\mathcal{C}_{r}}\mathbf{c}_{r}-\gamma\frac{1}{|\mathcal{C}_{nr}|}\sum_{c_{nr}\in\mathcal{C}_{nr}}\mathbf{c}_{nr},\tag{5}$$

where Cr and Cnr denote the *relevant* and *non-relevant* sets of contexts, respectively, and α, β, and γ determine the relative contributions of the current query representation qt, the relevant context representations cr, and the non-relevant context representations cnr when updating to qt+1.
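A minimal sketch of the classical Rocchio update in Eq. (5)–(6) follows; the function name and the default α, β, γ values are conventional illustrations, not settings taken from this paper.

```python
import torch

def rocchio_update(q_t, C_topk, k_prime, alpha=1.0, beta=0.75, gamma=0.15):
    """Eq. (6): treat the top-k' retrieved vectors as pseudo-relevant and the
    remaining k - k' as non-relevant. C_topk: [k, d], sorted by sim(q, c)."""
    rel = C_topk[:k_prime].mean(dim=0)      # centroid of pseudo-relevant contexts
    nonrel = C_topk[k_prime:].mean(dim=0)   # centroid of pseudo-non-relevant contexts
    return alpha * q_t + beta * rel - gamma * nonrel
```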
A common practice is to choose the top-k′ contexts as pseudo-relevant among the top-k (k′ < k), i.e., Cr = C^{q_t}_{1:k′}:

$$g(\mathbf{q}_{t},\mathcal{C}_{1:k}^{q_{t}})=\alpha\mathbf{q}_{t}+\beta\frac{1}{k'}\sum_{i=1}^{k'}\mathbf{c}_{i}-\gamma\frac{1}{k-k'}\sum_{i=k'+1}^{k}\mathbf{c}_{i}.\tag{6}$$

In this work, we theoretically show that our test-time query optimization is a generalization of the Rocchio algorithm. While Eq. (6) treats the positive (or negative) contexts equally, we use cross-encoder re-rankers (Nogueira and Cho, 2019) to provide fine-grained pseudo labels and optimize the query representations with gradient descent.

## 3 Methodology

In this section, we provide an overview of our method (§3.1) and its two instantiations (§3.2, §3.3). We also introduce a relevance labeler for phrase retrieval (§3.4) and simple techniques to improve the efficiency of TOUR (§3.5).

## 3.1 Optimizing Test-Time Query Representations

We propose TOUR (Test-Time Optimization of Query Representations), which optimizes query representations at the instance level. In our setting, the query and context encoders are fixed after training, and we optimize the query representations solely based on their retrieval results. Figure 1 illustrates an overview of TOUR.

First, given a single test query q ∈ Qtest, we use a cross-encoder re-ranker ϕ(·) to provide a score of how relevant each of the top-k contexts c ∈ C^q_{1:k} is with respect to the query:

$$s=\phi(q,c),\tag{7}$$

where ϕ(·) is often parameterized with a pre-trained language model, which we detail in §3.4. Compared to simply setting the top-k′ results as pseudo-positive in PRF, using cross-encoders enables more fine-grained judgments of relevance over the top results. In addition, it allows us to label results for *test* queries as well, without access to the gold label c∗.

## 3.2 TOUR with Hard Labels: TOURhard

First, we explore using the scores from the cross-encoder labeler ϕ and selecting a set of pseudo-positive contexts C^q_hard ⊂ C^q_{1:k}, defined as the smallest set such that:

$$P_{k}(\tilde{c}=c^{*}|q,\phi)=\frac{\exp(\phi(q,\tilde{c})/\tau)}{\sum_{i=1}^{k}\exp(\phi(q,c_{i})/\tau)},\qquad\sum_{\tilde{c}\in\mathcal{C}_{\mathrm{hard}}^{q}}P_{k}(\tilde{c}=c^{*}|q,\phi)\geq p,\tag{8}$$

where τ is a temperature parameter and c̃ ∈ C^q_hard denotes a pseudo-positive context selected by ϕ. Intuitively, we choose as C^q_hard the smallest set of contexts whose marginal relevance with respect to the query under ϕ is larger than the threshold p. This is similar to Nucleus Sampling for stochastic decoding (Holtzman et al., 2020). Then, TOUR optimizes the query representation with gradient descent based on the relevance judgment C^q_hard made by ϕ:

$$\mathcal{L}_{\mathrm{hard}}(q,\mathcal{C}_{1:k}^{q})=-\log\sum_{\tilde{c}\in\mathcal{C}_{\mathrm{hard}}^{q}}P_{k}(\tilde{c}|q),\tag{9}$$

where $P_{k}(\tilde{c}|q)=\frac{\exp(\mathrm{sim}(q,\tilde{c}))}{\sum_{i=1}^{k}\exp(\mathrm{sim}(q,c_{i}))}$. Similar to the query-side fine-tuning in Eq. (3), we maximize the marginal likelihood of the (pseudo-)positive contexts C^q_hard. We denote this version as TOURhard. Unlike query-side fine-tuning, which updates the model parameters of Eq(·), we directly optimize the query representation q itself. TOURhard is also an instance-level optimization over a single test query q ∈ Qtest without access to the gold label c∗.
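Selecting C^q_hard in Eq. (8) amounts to a nucleus-style cutoff over the softmax-normalized re-ranker scores. The following is a minimal sketch under that reading; the function name, tensor layout, and default p and τ are illustrative assumptions.

```python
import torch

def select_pseudo_positives(reranker_scores: torch.Tensor, p: float = 0.5, tau: float = 1.0):
    """reranker_scores: [k] scores phi(q, c_i) over the top-k contexts.
    Returns indices of the smallest set whose normalized mass reaches p (Eq. 8)."""
    probs = torch.softmax(reranker_scores / tau, dim=0)
    sorted_probs, sorted_ids = torch.sort(probs, descending=True)
    cum = torch.cumsum(sorted_probs, dim=0)
    cutoff = int((cum < p).sum().item()) + 1   # smallest prefix whose mass >= p
    return sorted_ids[:cutoff]
```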
For optimization, we use gradient descent:

$$\mathbf{q}_{t+1}\leftarrow\mathbf{q}_{t}-\eta\frac{\partial\mathcal{L}_{\mathrm{hard}}(\mathbf{q}_{t},\mathcal{C}_{1:k}^{q_{t}})}{\partial\mathbf{q}_{t}},\tag{10}$$

where η denotes the learning rate for gradient descent and the initial query representation is used as q0. Applying gradient descent over the test queries shares its motivation with dynamic evaluation for language modeling (Krause et al., 2019), but we treat each test query independently, unlike the series of tokens in a language-modeling evaluation corpus. At each iteration, we perform a single step of gradient descent followed by another retrieval with qt+1 to update C^{q_t}_{1:k} into C^{q_{t+1}}_{1:k}.

Relation to the Rocchio algorithm Eq. (10) can be viewed as performing PRF with the update function $g(\mathbf{q}_{t},\mathcal{C}_{1:k}^{q_{t}})=\mathbf{q}_{t}-\eta\frac{\partial\mathcal{L}_{\mathrm{hard}}(\mathbf{q}_{t},\mathcal{C}_{1:k}^{q_{t}})}{\partial\mathbf{q}_{t}}$. In fact, our update rule Eq. (10) is a generalized version of the Rocchio algorithm, as shown below:

$$g(\mathbf{q}_{t},\mathcal{C}_{1:k}^{q_{t}})=\mathbf{q}_{t}+\eta\sum_{\tilde{c}}P(\tilde{c}|q_{t})(1-P_{k}(\tilde{c}|q_{t}))\,\tilde{\mathbf{c}}-\eta\sum_{\tilde{c}}\Big[P(\tilde{c}|q_{t})\sum_{c\in\mathcal{C}_{1:k}^{q_{t}},\,c\neq\tilde{c}}P_{k}(c|q_{t})\,\mathbf{c}\Big],\tag{11}$$

where $\tilde{c}\in\mathcal{C}_{\mathrm{hard}}^{q_{t}}$ and $P(\tilde{c}|q_{t})=\frac{\exp(\mathrm{sim}(q_{t},\tilde{c}))}{\sum_{\tilde{c}'}\exp(\mathrm{sim}(q_{t},\tilde{c}'))}$ (proof in Appendix A). Although our update rule appears to fix α in Rocchio to 1, α can be changed dynamically by applying weight decay during gradient descent, which effectively sets α = 1 − ηλdecay for the coefficient of qt. The equality between Eq. (6) and Eq. (11) then holds when C^{q_t}_hard = C^{q_t}_{1:k′} with Pk(c|qt) being equal for all c ∈ C^{q_t}_{1:k}, namely Pk(c|qt) = 1/k. This reflects that the Rocchio algorithm treats all top-k′ results equally (i.e., P(c̃|qt) = 1/k′). In that case, $\beta=\gamma=\eta\frac{k-k'}{k}$ holds (Appendix C).

In practice, C^{q_t}_hard differs from C^{q_t}_{1:k′} whenever some re-ranking happens by ϕ. Moreover, each pseudo-positive context vector c̃ in the second term on the RHS of Eq. (11) has a different weight. The contribution of c̃ is maximized when it has a larger probability mass P(c̃|qt) among the pseudo-positive contexts but a smaller probability mass Pk(c̃|qt) among the top-k contexts; this is desirable since we want to update qt strongly when the initial ranking of a pseudo-positive context within the top-k is low. For instance, if there is a single pseudo-positive context c̃ (i.e., P(c̃|qt) = 1) ranked at the bottom of the top-k with a large margin from the top-1 (i.e., Pk(c̃|qt) = 0), then P(c̃|qt)(1 − Pk(c̃|qt)) = 1 is maximized.

## 3.3 TOUR with Soft Labels: TOURsoft

From Eq. (11), we observe that it uses the pseudo-positive contexts C^{q_t}_hard sampled by the cross-encoder labeler ϕ, but the contribution of c̃ (the second term on the RHS) does not directly depend on the scores from ϕ. The scores are only used to make a hard decision on pseudo-positive contexts. Another version of TOUR instead uses the normalized scores of the cross-encoder over the retrieved results as soft labels. We can simply change the maximum marginal likelihood objective in Eq. (9) to reflect the scores from ϕ in g. Specifically, we change Eq. (9) to minimize a Kullback-Leibler (KL) divergence loss as follows:

$$\mathcal{L}_{\mathrm{soft}}(\mathbf{q}_{t},\mathcal{C}_{1:k}^{q_{t}})=-\sum_{i=1}^{k}P(c_{i}|q_{t},\phi)\log\frac{P_{k}(c_{i}|q_{t})}{P(c_{i}|q_{t},\phi)},\tag{12}$$

where P(ci|qt, ϕ) = P(ci = c∗|qt, ϕ) is defined in Eq. (8).
We call this case in Eq. **The** **not** Eq. (8). We call this version TOURsoft. The update rule g for TOURsoft changes as follows: $$g(\mathbf{q}_{t},{\mathcal{C}}_{1:k}^{q_{t}})$$ $$g(\mathbf{q}_{t},\mathbf{c}_{1:k}^{*})$$ $$=\mathbf{q}_{t}+\eta\sum_{i=1}^{k}P(c_{i}|q_{t},\phi)\mathbf{c}_{i}-\eta\sum_{i=1}^{k}P_{k}(c_{i}|q_{t})\mathbf{c}_{i}.\tag{13}$$ Eq. (13) shows that qt+1 reflects ci weightaveraged by the cross-encoder (i.e., P(ci|qt, ϕ)) while removing ci weight-averaged by the current retrieval result (i.e., Pk(ci|qt)) (proof in Appendix B). ## 3.4 Relevance Labeler For Phrase Retrieval In the previous section, we used a cross-encoder reranker ϕ to provide a relevance score si over a pair of a query q and a context c. While it is possible to use an off-the-shelf re-ranker (Fajcik et al., 2021) for passage retrieval, no prior work has introduced a re-ranker for phrase retrieval (Lee et al., 2021b). In this section, we introduce a simple and accurate phrase re-ranker for TOUR. Inputs for re-rankers For phrase retrieval, sentences containing each retrieved phrase are considered as contexts, following Lee et al. (2021b). For each context, we also prepend the title of its document and use it as our context for re-rankers. To train our re-rankers, we first construct a training set from the retrieved contexts of the phrase retriever given a set of training queries Qtrain. Specifically, from the top retrieved contexts C1:k for every q ∈ Qtrain, we sample one positive context c + q and one negative context c− q . In open domain QA, it is assumed that a context that contains a correct answer to each q is relevant (positive). Our re-ranker is trained on a dataset Dtrain = {(*q, c*+ q, c− q)|q ∈ Qtrain}. Architecture We use the RoBERTa-large model (Liu et al., 2019) as the base model for our re-ranker. Given a pre-trained LM M, the cross-encoder re-ranker ϕ outputs a score of a context being relevant: $$s=\phi(q,c)=\mathbf{w}^{\top}{\mathcal{M}}(q\oplus c)[\mathbf{CLS}]$$ where {M, w} are the trainable parameters and ⊕ denotes a concatenation of q and c using a [SEP] token. Since phrase retrievers return both phrases and their contexts, we use special tokens [S] and [E] to mark the retrieved phrases within the contexts. Re-rankers are trained to maximize the probability of a positive context c + q for every (*q, c*+ q, c− q) ∈ Dtrain. We use the binary cross-entropy loss defined over the probability P + =exp(h +) exp(h+)+exp(h−) where h + = ϕ(*q, c*+ q) and h− = ϕ(*q, c*− q). We pre-train ϕ on reading comprehension datasets (Rajpurkar et al., 2016; Joshi et al., 2017; Kwiatkowski et al., 2019), which helped improve the quality of ϕ. For the ablation study of our phrase re-rankers, see Appendix D for details. Score aggregation After running TOUR, aggregating the reranking scores with the retreival scores provides consistent improvement. Specifically, we linearly interpolate the similarity score sim(*q, c*i) with the re-ranking score si and use this to obtain the final results: λsi + (1 − λ)sim(*q, c*i). ## 3.5 Efficient Implementation Of Tour TOUR aims to improve the recall of gold candidates by iteratively searching with updated query representations. However, it has high computational complexity, since it needs to label top-k retrieval results with a cross-encoder and perform additional retrieval. To minimize the additional time complexity, we perform up to t = 3 iterations with early stopping conditions. 
Specifically, at every iteration of TOURhard, we stop when the top-1 retrieval result is pseudo-positive, i.e., c1 ∈ C^{q_t}_hard. When using TOURsoft, we stop iterating when the top-1 retrieval result has the highest relevance score. Additionally, we cache ϕ(q, ci) for each query to skip redundant computation.

## 4 Experiments

We test TOUR on multiple open-domain QA datasets. Specifically, we evaluate its performance on phrase retrieval and passage retrieval.

## 4.1 Datasets

We mainly use six open-domain QA datasets: Natural Questions (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017), WebQuestions (Berant et al., 2013), CuratedTrec (Baudiš and Šedivý, 2015), SQuAD (Rajpurkar et al., 2016), and EntityQuestions (Sciavolino et al., 2021). Following previous work, EntityQuestions is only used for testing. See statistics in Appendix E.

## 4.2 Open-Domain Question Answering

For end-to-end open-domain QA, we use phrase retrieval (Seo et al., 2019; Lee et al., 2021a) for TOUR, which directly retrieves phrases from the entire Wikipedia using a phrase index. Since single-stage retrieval is the only component in phrase retrieval, it is easy to show how its open-domain QA performance can be directly improved with TOUR. We use DensePhrases (Lee et al., 2021a) as our base phrase retrieval model and train a cross-encoder labeler as described in §3.4. We report exact match (EM) for end-to-end open-domain QA. We use k = {10, 40} for our phrase re-ranker and k = {10, 20} for TOUR on open-domain QA, while k = 10 is used for both whenever it is omitted. For the implementation details of TOUR, see Appendix D.

Baselines Many open-domain QA models take the retriever-reader approach (Chen et al., 2017; Lee et al., 2019; Izacard and Grave, 2021; Singh et al., 2021). As our baselines, we report extractive open-domain QA models, which is a fair comparison with retriever-only (+ re-ranker) models whose answers are always extractive. For a re-ranking baseline of retriever-reader models, we report ReConsider (Iyer et al., 2021), which re-ranks the outputs of DPR + BERT. For a PRF baseline, we report GAR (Mao et al., 2021), which uses context generation models for augmenting queries in BM25.

Table 1: End-to-end open-domain QA results (EM) in the in-domain evaluation setting. s/q denotes seconds per query (lower is better).

| Model | Top-k | s/q (↓) | NQ | TriviaQA | WQ | TREC | SQuAD |
|---|---|---|---|---|---|---|---|
| *Retriever + Extractive Reader* | | | | | | | |
| DPRmulti (Karpukhin et al., 2020) | | | 41.5 | 56.8 | 42.4 | 49.4 | 24.1 |
| + Re-ranker (Iyer et al., 2021) | 5 | 1.21 | 43.1 ↑1.6 | 59.3 ↑2.5 | 44.4 ↑2.0 | 49.3 ↓0.1 | - |
| GAR (Mao et al., 2021) | | | 41.8 | 62.7 | - | - | - |
| DPRmulti (large) | | | 44.6 | 60.9 | 44.8 | 53.5 | - |
| + Re-ranker | 5 | >1.21∗ | 45.5 ↑0.9 | 61.7 ↑0.8 | 45.9 ↑1.1 | 55.3 ↑1.8 | - |
| ColBERT-QAlarge (Khattab et al., 2021) | | | 47.8 | **70.1** | - | - | **54.7** |
| UnitedQA-Elarge | | | **51.8** | 68.9 | - | - | - |
| *Retriever-only* | | | | | | | |
| DensePhrasesmulti (Lee et al., 2021a) | | | 41.6 | 56.3 | 41.5 | 53.9 | 34.5 |
| + PRFRocchio | 10 | 0.09 | 41.6 0.0 | 56.5 ↑0.2 | 41.7 ↑0.2 | 54.0 ↑0.1 | 34.9 ↑0.4 |
| + Phrase re-ranker (Ours) | 10 | 0.24 | 47.0 ↑5.4 | 65.4 ↑9.1 | 45.9 ↑4.4 | 60.5 ↑6.6 | 43.1 ↑8.6 |
| + Phrase re-ranker (Ours) | 40 | 1.04 | 46.5 ↑4.9 | 66.0 ↑9.7 | 46.3 ↑4.8 | 61.5 ↑7.6 | 45.3 ↑10.8 |
| + TOURhard (Ours) | 10 | 0.44 | 48.6 ↑7.0 | 66.4 ↑10.1 | 46.1 ↑4.6 | 62.0 ↑8.1 | 45.2 ↑10.7 |
| + TOURhard (Ours) | 20 | 0.78 | 47.9 ↑6.3 | 66.8 ↑10.5 | 46.9 ↑5.4 | 62.5 ↑8.6 | 46.4 ↑**11.9** |
| + TOURsoft (Ours) | 10 | 0.43 | 47.9 ↑6.3 | 66.5 ↑10.2 | 46.3 ↑4.8 | 63.1 ↑9.2 | 44.9 ↑10.4 |
| + TOURsoft (Ours) | 20 | 0.78 | 47.6 ↑6.0 | 66.6 ↑10.3 | 46.9 ↑5.4 | 62.5 ↑8.6 | 46.0 ↑11.5 |

Results Table 1 shows the results on the five open-domain QA datasets in the in-domain evaluation setting, where all models use the training sets of each dataset they are evaluated on.
First, we observe that using our phrase re-ranker largely improves the performance of DensePhrasesmulti. Compared to adding a re-ranker on the retriever-reader model (DPRmulti + Re-ranker by Iyer et al., 2021), our phrase re-ranking approach performs 5× faster with a larger top-k due to the efficient retriever-only method. Furthermore, the performance gain is significantly larger, possibly due to the high top-k accuracy of phrase retrievers. Unlike using the Rocchio algorithm, using TOURhard or TOURsoft greatly improves the performance of the base retriever. Compared to our phrase re-rankerk=40, TOURhard,k=20 runs 1.3× faster as well as outperforming it by up to 2.0%. Even TOURhard,k=10 often outperforms re-rankerk=40 with 2.4× faster inference. For this task, TOURhard and TOURsoft work similarly, with exceptions on NQ and TREC.

Latency vs. performance Figure 2 compares the query latency and performance of TOUR and other baselines on the NQ development set. We vary the top-k value from 10 to 50 by 10 (left to right) to visualize the trade-off between latency and performance. The result shows that TOUR with only top-10 is better and faster than the re-ranker with the best top-k. Specifically, TOURhard,k=10 outperforms re-rankerk=40 by 1.0% while being 2.5× faster. This shows that TOUR requires fewer retrieval results to perform well, compared to a re-ranker model that often requires a larger k.

| Model | Top-k | s/q (↓) | NQ | TRIVIAQA | WQ | TREC | SQUAD | ENTITYQ² |
|-------|-------|---------|----|----------|----|------|-------|----------|
| DPRNQ∗ (Karpukhin et al., 2020) | | | 39.4 | 29.4 | - | - | 0.1 | - |
| DensePhrasesNQ (Lee et al., 2021a) | | | 40.8 | 33.4 | 23.8 | 33.6 | 15.4 | 22.4 |
| + Phrase re-ranker (Ours) | 10 | 0.24 | 45.4 ↑4.6 | 40.9 ↑7.5 | 26.6 ↑2.8 | 37.8 ↑4.2 | 20.2 ↑4.8 | 26.8 ↑4.4 |
| + Phrase re-ranker (Ours) | 40 | 1.04 | 44.5 ↑3.7 | 41.7 ↑8.3 | 26.4 ↑2.6 | 37.8 ↑4.2 | 21.4 ↑6.0 | 27.1 ↑4.7 |
| + TOURhard (Ours) | 10 | 0.44 | 47.0 ↑6.2 | 42.6 ↑9.2 | 27.7 ↑3.9 | 38.3 ↑4.7 | 21.5 ↑6.1 | 27.9 ↑5.5 |
| + TOURhard (Ours) | 20 | 0.78 | 46.5 ↑5.7 | 42.9 ↑9.5 | 28.2 ↑4.4 | 39.8 ↑6.2 | 22.1 ↑6.7 | 28.3 ↑5.9 |
| + TOURsoft (Ours) | 10 | 0.43 | 46.2 ↑5.4 | 42.5 ↑9.1 | 27.4 ↑3.6 | 38.2 ↑4.6 | 21.2 ↑5.8 | 27.6 ↑5.2 |
| + TOURsoft (Ours) | 20 | 0.78 | 45.7 ↑4.9 | 42.7 ↑9.3 | 27.7 ↑3.9 | 39.8 ↑6.2 | 21.7 ↑6.3 | 27.9 ↑5.5 |

Query distribution shift In Table 2, we show open-domain QA results under query distribution shift from the training distribution. Compared to DensePhrasesmulti in Table 1, which was trained on all five open-domain QA datasets, we observe huge performance drops on unseen query distributions when using DPRNQ and DensePhrasesNQ. DPRNQ seems to suffer more (e.g., 0.1 on SQuAD) since both of its retriever and reader were trained on NQ, which exacerbates the problem when combined.
| Model | (Acc@20/100) | (Acc@20/100) | (Acc@20/100) | |-----------------------------------|------------------|------------------|------------------| | DensePhrasesmulti | 79.8 / 86.0 | 81.6 / 85.8 | 61.0 / 71.2 | | + Re-ranker (Fajcik et al., 2021) | 83.2 / 86.0 ↑3.4 | 83.0 / 85.8 ↑1.4 | 65.3 / 71.2 ↑4.3 | | + TOURhard (Ours) | 84.0 / 86.9 ↑4.2 | 83.2 / 86.1 ↑1.6 | 66.2 / 72.4 ↑5.2 | | + TOURsoft (Ours) | 84.2 / 87.0 ↑4.4 | 83.2 / 86.1 ↑1.6 | 66.2 / 72.4 ↑5.2 | | DPRmulti | 79.4 / 86.5 | 79.0 / 84.8 | 57.9 / 70.8 | | + Re-ranker (Fajcik et al., 2021) | 83.6 / 86.5 ↑4.2 | 81.6 / 84.8 ↑2.6 | 64.4 / 70.8 ↑6.5 | | + TOURhard (Ours) | 84.0 / 87.0 ↑4.6 | 81.5 / 84.9 ↑2.5 | 65.6 / 71.9 ↑7.7 | | + TOURsoft (Ours) | 84.2 / 87.2 ↑4.8 | 81.6 / 85.1 ↑2.6 | 66.2 / 72.5 ↑8.3 | On the other hand, using TOUR largely improves the performance of DensePhrasesNQ on many unseen query distributions even though all of its component were still trained on NQ. Specifically, TOURhard,k=20 gives 6.5% improvement on average across different query distributions, which easily outperforms our phrase re-rankerk=40. Interestingly, TOURhard consistently performs better than TOURsoft in this setting, which requires more investigation in the future. ## 4.3 Passage Retrieval We test TOUR on the passage retrieval task for open-domain QA. We use DPR as a passage retriever and DensePhrases as a phrase-based passage retriever (Lee et al., 2021b). In this experiment, we use an off-the-shelf passage re-ranker (Fajcik et al., 2021) to show how existing re-rankers can serve as a pseudo labeler for TOUR. We report the top-k retrieval accuracy, which is 1 when the answers exist in top-k retrieval results. For passage retrieval, we use k = 100 for both the re-ranker and TOUR due to the limited resource budget. Results Table 3 shows the results of passage retrieval for open-domain QA. We find that using TOUR consistently improves the passage retrieval accuracy. Under the query distribution shift similar to Table 2, DPRmulti + TOURsoft improves the original DPR by 8.3% and advances the off-the-shelf re-ranker by 1.8% on EntityQuestions (Acc@20). Notably, Acc@100 always improves with TOUR, | Overlap | | | | | |----------------------------|-------|-------|------------|------| | NQ | Total | Query | Answeronly | None | | DensePhrasesmulti | 41.3 | 63.3 | 33.7 | 23.9 | | Re-ranker (Ours) | 46.8 | 66.7 | 39.0 | 31.0 | | TOURhard (Ours) | 48.6 | 70.1 | 40.3 | 33.7 | | TRIVIAQA DensePhrasesmulti | 53.8 | 76.5 | 46.2 | 32.6 | | Re-ranker (Ours) | 62.8 | 82.1 | 60.3 | 41.5 | | TOURhard (Ours) | 63.8 | 83.6 | 62.3 | 42.2 | | WQ DensePhrasesmulti | 41.5 | 70.8 | 39.5 | 27.5 | | Re-ranker (Ours) | 45.9 | 73.4 | 48.5 | 31.5 | | TOURhard (Ours) | 46.2 | 70.1 | 48.5 | 33.0 | | DensePhrasesNQ | 42.4 | |---------------------------------|--------| | DensePhrasesNQ + TOURhard | 48.4 | | q hard ⇒ C1:k ′ (k ′ = 3) | 46.1 | | C SGD ⇒ interpolation (β = 0.3) | 48.2 | | λ = 0.1 ⇒ λ = 0 | 48.1 | | λ = 0.1 ⇒ λ = 1 | 48.0 | | TOURhard ⇒ TOURsoft | 47.7 | which is not possible for re-rankers since they do not update the top retrieval results. Unlike the phrase retrieval task, we observe that TOURsoft is a slightly better option than TOURhard on this task. | NQ | |------| ## 5 Analysis 5.1 Train-Test Overlap Analysis Open-domain QA datasets often contain semantically overlapping queries and answers between training and test sets (Lewis et al., 2021), which overestimates the generalizability of QA models. 
Hence, we test our models on the train-test overlap splits provided by Lewis et al. (2021).² Table 4 shows that TOUR consistently improves the performance of test queries that do not overlap with training data (i.e., None). Notably, on WebQuestions, while the performance on the no-overlap (None) split improves by 1.5% over the re-ranker, the performance on query overlap is worse than the re-ranker since unnecessary exploration is often performed on overlapping queries. Our finding on the effectiveness of query optimization is similar to that of Mao et al. (2021), while our approach often improves performance on query overlap cases.

² While the passage retrieval accuracy is mostly reported for EntityQuestions (Ram et al., 2022; Lewis et al., 2022), we also report EM for open-domain QA.

## 5.2 Ablation Study

Table 5 shows an ablation study of TOURhard on end-to-end open-domain QA. We observe that using fine-grained relevance signals generated by our phrase re-ranker (i.e., C q hard) is significantly more effective than simply choosing top-k′ as relevance signals (i.e., C1:k′). Using SGD or aggregating the final scores between the retriever and the re-ranker gives additional improvement.

Figure 3 shows the effect of multiple iterations in TOURhard compared to the Rocchio algorithm. While PRFRocchio with t = 1 achieves slightly better performance than DensePhrases, it shows a diminishing gain with a larger number of iterations. In contrast, the performance of TOURhard benefits from multiple iterations until t = 3. Removing the score aggregation between the retriever and the re-ranker (i.e., λ = 0) causes a performance drop, but it quickly recovers with a larger t.

Efficient implementation Simple techniques introduced in §3.5 such as early stopping and caching significantly reduce the run-time of TOUR. Figure 4 summarizes the effect of these optimization techniques on the efficiency of TOUR. Without them, the latency increases linearly with the number of iterations. By adding the caching mechanism for ϕ and the stop condition of c1 ∈ C qt hard, the latency is greatly reduced.

Prediction sample Figure 5 shows a sample prediction of TOUR for the query "which type of wave requires a medium for transmission?". We use DensePhrasesmulti + TOURhard with k = 10, from which the top-5 results are shown. While the initial result at t = 0 failed to retrieve correct answers in the top-10, the next round of TOURhard gives new results including the correct answer, which were not retrieved before. As the iteration continues, the correct answer starts to appear in the top retrieval results, and becomes the top-1 at t = 3.

## 6 Conclusion

In this paper, we propose TOUR, which iteratively optimizes test query representations for dense retrieval. Specifically, we optimize instance-level query representations at test time using the gradient-based optimization method over the top retrieval results. We use cross-encoder re-rankers to provide pseudo labels, where either our simple re-ranker or off-the-shelf re-rankers can be used. We theoretically show that gradient-based optimization provides a generalized version of the Rocchio algorithm for pseudo relevance feedback, which leads us to develop different variants of TOUR.
Experiments show that our test-time query optimization largely improves the retrieval accuracy on multiple open-domain QA datasets in various settings while being more efficient than traditional re-ranking methods. ## Limitations In this paper, we focus on the end-to-end accuracy and passage retrieval accuracy for opendomain QA. We have also experimented on the BEIR benchmark (Thakur et al., 2021) to evaluate our method in the zero-shot document retrieval task. Overall, we obtained 48.1% macro-averaged NDCG@10 compared to 47.8% by the re-ranking method. For some tasks, TOUR obtains significant improvements with a pre-trained document retriever (Hofstätter et al., 2021). For example, TOUR improves the baseline retriever by 11.6% and 23.8% NDCG@10 on BioASQ and TRECCOVID, respectively, while also outperforming the re-ranker by 2.1% and 2.4% NDCG@10. We plan to better understand why TOUR performs better specifically on these tasks and further improve it. TOUR also requires a set of validation examples for hyperparameter selection. While we only used in-domain validation examples for TOUR, which were also adopted when training re-rankers, we observed some performance variances depending on the hyperparameters. We hope to tackle this issue with better optimization in the future. ## Acknowledgements We thank Zexuan Zhong, Mengzhou Xia, Howard Yen, and the anonymous reviewers for their helpful feedback. This work was supported in part by the ICT Creative Consilience program (IITP-20232020-0-01819) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation), National Research Foundation of Korea (NRF-2023R1A2C3004176), and the Hyundai Motor Chung Mong-Koo Foundation. ## References Petr Baudiš and Jan Šedivy. 2015. ` Modeling of the question answering task in the yodaqa system. In *International Conference of the cross-language evaluation Forum for European languages*, pages 222–228. Springer. Tim Baumgärtner, Leonardo F. R. Ribeiro, Nils Reimers, and Iryna Gurevych. 2022. Incorporating relevance feedback for information-seeking retrieval using fewshot document re-ranking. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8988–9005. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In *Proceedings of the 2013* Conference on Empirical Methods in Natural Language Processing, pages 1533–1544. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879. W Bruce Croft, Donald Metzler, and Trevor Strohman. 2010. *Search engines: Information retrieval in practice*, volume 520. Addison-Wesley Reading. Martin Fajcik, Martin Docekal, Karel Ondrej, and Pavel Smrz. 2021. R2-d2: A modular baseline for opendomain question answering. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 854–870. Sebastian Hofstätter, Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy J. Lin, and Allan Hanbury. 2021. Efficiently teaching an effective dense retriever with balanced topic aware sampling. *Proceedings of the* 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. 
In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. Srinivasan Iyer, Sewon Min, Yashar Mehdad, and Wentau Yih. 2021. RECONSIDER: Improved re-ranking using span-focused cross-attention for open domain question answering. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 1280–1287. Gautier Izacard and Edouard Grave. 2020. Distilling knowledge from reader to retriever for question answering. In *International Conference on Learning* Representations. Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In *Proceedings of the 16th* Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In *Proceedings of the 55th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781. Omar Khattab, Christopher Potts, and Matei Zaharia. 2021. Relevance-guided supervision for OpenQA with ColBERT. Transactions of the Association for Computational Linguistics, 9:929–944. Omar Khattab and Matei Zaharia. 2020. Colbert: Efficient and effective passage search via contextualized late interaction over BERT. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 39–48. Ben Krause, Emmanuel Kahembwe, Iain Murray, and Steve Renals. 2019. Dynamic evaluation of transformer language models. arXiv preprint arXiv:1904.08378. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. *Transactions of the Association for Computational Linguistics*, 7:452–466. Victor Lavrenko and W Bruce Croft. 2001. Relevance based language models. In *Proceedings of the 24th* annual international ACM SIGIR conference on Research and development in information retrieval, pages 120–127. Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, and Danqi Chen. 2021a. Learning dense representations of phrases at scale. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6634–6647. Jinhyuk Lee, Alexander Wettig, and Danqi Chen. 2021b. Phrase retrieval learns passage retrieval, too. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 3661– 3672. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096. 
Patrick Lewis, Barlas Oguz, Wenhan Xiong, Fabio Petroni, Scott Yih, and Sebastian Riedel. 2022. Boosted dense retriever. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3102–3117. Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2021. Question and answer test-train overlap in opendomain question answering datasets. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1000–1008. Canjia Li, Yingfei Sun, Ben He, Le Wang, Kai Hui, Andrew Yates, Le Sun, and Jungang Xu. 2018. NPRF: A neural pseudo relevance feedback framework for ad-hoc information retrieval. In *Proceedings of the* 2018 Conference on Empirical Methods in Natural Language Processing, pages 4482–4491. Hang Li, Ahmed Mourad, Shengyao Zhuang, Bevan Koopman, and Guido Zuccon. 2021. Pseudo relevance feedback with deep language models and dense retrievers: Successes and pitfalls. *Journal of ACM* Transactions on Information Systems. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, and Weizhu Chen. 2021. Generation-augmented retrieval for opendomain question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4089–4100. Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with bert. *arXiv preprint* arXiv:1901.04085. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proceedings* of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Ori Ram, Gal Shachaf, Omer Levy, Jonathan Berant, and Amir Globerson. 2022. Learning to retrieve passages without supervision. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2687–2700. Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, QiaoQiao She, Hua Wu, Haifeng Wang, and Ji-Rong Wen. 2021. RocketQAv2: A joint training method for dense passage retrieval and passage re-ranking. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 2825–2835. Joseph Rocchio. 1971. Relevance feedback in information retrieval. *The Smart retrieval systemexperiments in automatic document processing*, pages 313–323. Guilherme Rosa, Luiz Bonifacio, Vitor Jeronymo, Hugo Abonizio, Marzieh Fadaee, Roberto Lotufo, and Rodrigo Nogueira. 2022. In defense of crossencoders for zero-shot retrieval. *arXiv preprint* arXiv:2212.06121. Christopher Sciavolino, Zexuan Zhong, Jinhyuk Lee, and Danqi Chen. 2021. Simple entity-centric questions challenge dense retrievers. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6138–6148. Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019. Real-time open-domain question answering with dense-sparse phrase index. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics, pages 4430–4441. 
Devendra Singh, Siva Reddy, Will Hamilton, Chris Dyer, and Dani Yogatama. 2021. End-to-end training of multi-document reader and retriever for opendomain question answering. Advances in Neural Information Processing Systems, 34:25968–25981. Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). Xiao Wang, Craig Macdonald, Nicola Tonellotto, and Iadh Ounis. 2021. Pseudo-relevance feedback for multiple representation dense retrieval. In *Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval*, pages 297– 306. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45. Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In *International Conference on Learning* Representations. HongChien Yu, Chenyan Xiong, and Jamie Callan. 2021. Improving query representations for dense retrieval with pseudo relevance feedback. In *Proceedings of* the 30th ACM International Conference on Information & Knowledge Management, pages 3592–3596. Hamed Zamani, Mostafa Dehghani, W. Bruce Croft, Erik G. Learned-Miller, and Jaap Kamps. 2018. From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, 2018, pages 497–506. ## A Derivation Of The Gradient For Tour**Hard** Proof. We compute the gradient of Lhard(qt, C qt 1:k ) in Eq. (10) with respect to the query representation qt. Denoting Pc˜Pk(˜c|qt) as Z, the gradient is: ∂Lhard(qt, C qt 1:k ) ∂qt= ∂Lhard(qt, C qt 1:k ) ∂Z ∂Z ∂qt = − 1 Z X c˜ ∂Pk(˜c|qt) ∂qt = − 1 Z X c˜ Xk i=1 ∂Pk(˜c|qt) ∂q⊤ t ci ∂q ⊤ t ci ∂qt = − 1 Z X c˜ Xk i=1 (δ[ci = ˜c] − Pk(ci|qt))Pk(˜c|qt)ci = − X c˜ -P(˜c|qt) Xk i=1 (δ[ci = ˜c] − Pk(ci|qt))ci = − X c˜ P(˜c|qt) -(1 − Pk(˜c|qt))c˜ − X c∈Cqt 1:k ,c̸=˜c Pk(c|qt)c = − X c˜ P(˜c|qt)(1 − Pk(˜c|qt))c˜ + X c˜ -P(˜c|qt) X c∈Cqt 1:k ,c̸=˜c Pk(c|qt)c Then, we have: $$\begin{array}{c}{{g(\mathbf{q}_{t},\mathcal{C}_{1:k}^{q_{t}})=\mathbf{q}_{t}-\eta\frac{\partial\mathcal{L}_{\mathrm{hard}}(\mathbf{q}_{t},\mathcal{C}_{1:k}^{q_{t}})}{\partial\mathbf{q}_{t}}}}\\ {{=\mathbf{q}_{t}+\eta\sum_{\tilde{c}}P(\tilde{c}|q_{t})(1-P_{k}(\tilde{c}|q_{t}))\tilde{\mathbf{c}}}}\\ {{-\eta\sum_{\tilde{c}}\big[P(\tilde{c}|q_{t})\sum_{c\in\mathcal{C}_{1:k}^{q_{t}},c\neq\tilde{c}}P_{k}(c|q_{t})\mathbf{c}\big].}}\end{array}$$ ## B Derivation Of The Gradient For Tour**Soft** Proof. We compute the gradient of Lsoft(qt, C qt 1:k ) in Eq. (12) with respect to qt. 
Denoting P(ci = c∗|qt, ϕ) as Pi, we expand the loss term as: $\mathcal{L}_{\text{soft}}(\mathbf{q}_{t},\mathcal{C}_{1:k}^{q_{t}})=-\sum_{i=1}^{k}P_{i}\log\frac{P_{k}(c_{i}|q_{t})}{P_{i}}$ $=-\sum_{i=1}^{k}P_{i}(\mathbf{q}_{t}^{\top}\mathbf{c}_{i}-\log\sum_{j=1}^{k}\exp(\mathbf{q}_{t}^{\top}\mathbf{c}_{j})-\log P_{i})$. Then, the gradient is: $$\frac{\partial{\cal L}_{\rm soft}({\bf q}_{t},{\cal C}_{1:k}^{at})}{\partial{\bf q}_{t}}$$ $$=-\sum_{i=1}^{k}P_{i}\frac{\partial}{\partial{\bf q}_{t}}({\bf q}_{t}^{\top}{\bf c}_{i}-\log\sum_{j=1}^{k}\exp({\bf q}_{t}^{\top}{\bf c}_{j})-\log P_{i})$$ = − Xk i=1 Pi(ci −1 Pk j=1 exp(q⊤ t cj ) Xk j=1 cj exp(q ⊤ t cj )) = − Xk i=1 Pi(ci − Xk j=1 cjexp(q ⊤ t cj ) Pk l=1 exp(q⊤ t cl) ) = − Xk i=1 Pi(ci − Xk j=1 Pk(cj |qt)cj ) = − Xk i=1 Pici + Xk i=1 Pk(ci|qt)ci. Putting it all together: g(qt, C qt 1:k ) = qt − η ∂Lsoft(qt, C qt 1:k ) ∂qt = qt + η X k i=1 P(ci|qt, ϕ)ci − η X k i=1 Pk(ci|qt)ci. ## C Relation To The Rocchio Algorithm Proof. We derive Eq. (6) from Eq. (11). g(qt, C qt 1:k ) = qt + η X c˜ P(˜c|qt)(1 − Pk(˜c|qt))c˜ − η X c˜ -P(˜c|qt) X c∈Cqt 1:k ,c̸=˜c Pk(c|qt)c = qt + η k X′ i=1 1 k′ (1 − 1 k )ci − η k X′ i=1 - 1 k′ X k j=1,i̸=j 1 k cj k X′ = qt + η k − 1 k′k i=1 ci k X′ i=1 - k X′ j=1,i̸=j cj + X k − η 1 k′k j=k′+1 cj k X′ = qt + η k − 1 k′k i=1 ci − η 1 k′k -(k ′ − 1) k X′ i=1 ci + k ′X k i=k′+1 ci k X′ i=1 ci − η 1 k X k = qt + η k − k′ k′k j=k′+1 cj . (15) Then, the equality holds when α = 1, β = η k−k′ k, and γ = η k−k′ k. 5743 $$\square$$ | Phrase re-ranker | 45.4 | |---------------------------|--------| | Without prepending titles | 44.8 | | Rl ⇒ Rb | 43.2 | | 3 ⇒ 1 sentence | 43.6 | | 3 ⇒ Paragraph∗ | 45.6 | | RC ⇒ MNLI pre-training | 43.8 | | RC ⇒ No pre-training | 42.0 | | NQ | |------| ## D Implementation Details Phrase re-ranker To train a cross-encoder reranker for phrase retrieval (§3.4), we first annotate the top 100 retrieved results from DensePhrases. We use three sentences as our context, one that contains a retrieved phrase and the other two that surround it. This leads to faster inference than using the whole paragraph as input while preserving the performance. During the 20 epochs of training, we sample positive and negative contexts for every epoch while selecting the best re-reanker based on the validation accuracy of the re-ranker. We modified the code provided by the Transformers library3(Wolf et al., 2020) and used the same hyperparameters as specified in their documentation except for the number of training epochs. The ablation study in Table 6 shows that we can achieve stronger performance by prepending titles to inputs, using larger language models, using three sentences as our context, and pre-training over reading comprehension datasets. Using entire paragraphs as input contexts only slightly increases performance compared to using three sentences, but it doubles the query latencies of re-ranking. Table 6: Ablation study of our phrase re-ranker. Rl: RoBERTa-large. Rb: RoBERTa-base. RC: reading comprehension. *: using entire paragraphs as input doubles query latencies. Dense retriever We modified the official code of DensePhrases4(Lee et al., 2021a) and DPR5(Karpukhin et al., 2020) to implement TOUR on dense retrievers. While pre-trained models and indexes of DensePhrasesmulti and DPRNQ are publicly available, the indexes of DensePhrasesNQ and DPRmulti have not been released as of May 25th, 2022. 
When necessary, we reimplemented them to experiment with opendomain QA and passage retrieval in the query distribution shift setting. Hyperparamter When running TOUR , we use gradient descent with momentum set to 0.99 and use weight decay λdecay = 0.01. We also perform a linear learning rate scheduling per iteration. Both the threshold p and temperature τ for pseudo labels are set to 0.5. Table 7 lists the hyperparameters that are used differently for each task. All hyperparameters of TOUR were tuned using the in-domain development set. Table 7: Hyperparameters of TOURfor open-domain QA (ODQA) and passage retrieval. ## E Data Statistics | ODQA | Passage Retrieval | | | |-----------------|---------------------|--------------|-----| | Hyperparameter | DensePhrases | DensePhrases | DPR | | Learning rate η | 1.2 | 1.2 | 0.2 | | Max iterations | 3 | 1 | 1 | | Retrieval top-k | 10 | 100 | 100 | | Re-ranker top-k | 10 | 100 | 100 | | Re-ranker λ | 0.1 | 1 | 1 | | Dataset | Train | Dev | Test | |-------------------|---------|-------|--------| | Natural Questions | 79,168 | 8,757 | 3,610 | | TriviaQA | 78,785 | 8,837 | 11,313 | | WebQuestions | 3,417 | 361 | 2,032 | | CuratedTrec | 1,353 | 133 | 694 | | SQuAD | 78,713 | 8,886 | 10,570 | | EntityQuestions | - | - | 22,075 | Table 8: Statistics of open-domain QA datasets. Table 8 shows the statistics of the datasets used for end-to-end open-domain QA and passage retrieval tasks. For EntityQuestions, we only use its test set for the query distribution shift evaluation. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Yes, please see the limitations section. A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Yes, please see the abstract and introduction sections. ✓ A4. Have you used AI writing assistants when working on this paper? Grammarly for grammar checking. The contents are original work from the human authors. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Yes, Please See Section 4.1 Datasets. ✓ B1. Did you cite the creators of artifacts you used? Yes, please see section 4.1 Datasets. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Yes, please see section 4.1 Datasets. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Yes, please see section 4.1 Datasets. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Yes, please see section 4.1 Datasets. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? 
Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Yes, please see Appendix E Data Statistics. ## C ✓ **Did You Run Computational Experiments?** Yes, Please See Section 4 Experiments. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Yes, please see section 4 Experiments. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Yes, please see appendix F Implementation Details. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Yes, please see section 4 Experiments. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Yes, please see appendix F Implementation Details. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
chen-etal-2023-customized
A Customized Text Sanitization Mechanism with Differential Privacy
https://aclanthology.org/2023.findings-acl.355
As privacy issues are receiving increasing attention within the Natural Language Processing (NLP) community, numerous methods have been proposed to sanitize texts subject to differential privacy. However, the state-of-the-art text sanitization mechanisms based on a relaxed notion of metric local differential privacy (MLDP) do not apply to non-metric semantic similarity measures and cannot achieve good privacy-utility trade-offs. To address these limitations, we propose a novel Customized Text sanitization (CusText) mechanism based on the original $\epsilon$-differential privacy (DP) definition, which is compatible with any similarity measure.Moreover, CusText assigns each input token a customized output set to provide more advanced privacy protection at the token level.Extensive experiments on several benchmark datasets show that CusText achieves a better trade-off between privacy and utility than existing mechanisms.The code is available at \url{https://github.com/sai4july/CusText}.
# A Customized Text Sanitization Mechanism With Differential Privacy Huimin Chen1∗ , Fengran Mo2∗ , Yanhao Wang1**, Cen Chen**1† , Jian-Yun Nie2, Chengyu Wang3**, Jamie Cui**4 1East China Normal University 2Université de Montréal 3Alibaba Group 4Ant Group [email protected], [email protected] {yhwang,cenchen}@dase.ecnu.edu.cn, [email protected] [email protected], [email protected] ## Abstract As privacy issues are receiving increasing attention within the Natural Language Processing (NLP) community, numerous methods have been proposed to sanitize texts subject to differential privacy. However, the state-of-theart text sanitization mechanisms based on metric local differential privacy (MLDP) do not apply to non-metric semantic similarity measures and cannot achieve good trade-offs between privacy and utility. To address the above limitations, we propose a novel Customized Text (CusText) sanitization mechanism based on the original ϵ-differential privacy (DP) definition, which is compatible with any similarity measure. Furthermore, CusText assigns each input token a customized output set of tokens to provide more advanced privacy protection at the token level. Extensive experiments on several benchmark datasets show that CusText achieves a better trade-off between privacy and utility than existing mechanisms. The code is available at https://github. com/sai4july/CusText. ## 1 Introduction In many Natural Language Processing (NLP) applications, input texts often contain sensitive information that can infer the identity of specific persons (Jegorova et al., 2021), leading to potential privacy leakage that impedes privacy-conscious users from releasing data to service providers (Carlini et al., 2019, 2021; Song and Raghunathan, 2020). Moreover, legal restrictions such as CCPA1and GDPR2 may further limit the sharing of sensitive textual data. This makes NLP service providers difficult to collect training data unless the privacy concerns of data owners, including individuals and institutions, are well discoursed. ∗ Equal contribution. † Corresponding author. 1https://oag.ca.gov/privacy/ccpa 2https://data.europa.eu/eli/reg/2016/ 679/oj ![0_image_0.png](0_image_0.png) To address such privacy issues, great efforts (Lyu et al., 2020; Anil et al., 2022; Dupuy et al., 2022; Li et al., 2022; Mireshghallah et al., 2021) have been made to train language models (LMs) with *differential privacy* (Dwork et al., 2006) (DP), which has been regarded as the de facto standard for privacypreserving computation. These approaches mainly focus on adding calibrated noise to gradients or text representations during the training phase so that sensitive user data cannot be inferred from trained LMs. Nevertheless, they require service providers to collect the original data for LM training. As such, data owners may still have privacy concerns when service providers are not fully trusted. To solve the privacy problem from the root, a common paradigm is to let data owners sanitize their data *locally* before releasing them to the service provider, as shown in Figure 1. Generally, such privatization mechanisms (Feyisetan et al., 2019, 2020; Yue et al., 2021) generate a sanitized text document by replacing the original tokens (e.g., characters, words, or n-grams) in the original document sequentially with new tokens sampled from output token sets. 
Specifically, they adopt the Metric Local Differential Privacy (Chatzikokolakis et al., 2013) (MLDP, also known as dχ-privacy), a relaxation of the original DP definition, to provide the privacy and utility guarantees simultaneously. On the one hand, MLDP inherits the idea of DP to ensure that the outputs of any adjacent input tokens are indistinguishable to protect the original tokens from being inferred. On the other hand, MLDP also preserves the utility of sanitized texts by assigning higher sampling probabilities to tokens that are semantically closer to the original ones. In these mechanisms, any metric distance (e.g., Euclidean distance) can be used to measure the semantic similarities between tokens. However, the above text sanitization mechanisms suffer from two inherent limitations. First, since MLDP is specific for metric distances satisfying the triangle inequality, they do not apply to non-metric semantic similarity measures in NLP applications such as cosine similarity (Mrksic et al., 2016) and TF-IDF (Salton and Buckley, 1988). Second, they cannot achieve good privacy-utility trade-offs, i.e., either having high privacy costs with insufficient protections or resulting in low accuracy of models trained on sanitized data. We observe that the low accuracy arises as they treat each token in the text equally by assigning each input token with the same output set, which can be excessively large (e.g., the size of the output set is over 80,000). Such a huge output set leads to high costs for MLDP and thus impedes the model's utility when the privacy budget is tight. To this end, we propose a novel Customized Text (CusText) sanitization mechanism that provides more advanced privacy protection at the token level. Specifically, to generalize CusText to all similarity measures, we turn to a mechanism that satisfies the original ϵ-Differential Privacy (ϵ-DP), i.e., Exponential Mechanism (EM) (McSherry and Talwar, 2007), to sample the output for each input token. Meanwhile, we inherit the merit of MLDP by designing an appropriate *scoring function* for EM to take into account the semantic similarities between tokens for sampling. Then, to achieve a better trade-off between privacy and utility, we design a mapping scheme to assign each input token a customized output set of a much smaller size for token-level privacy protection. Here, we can adjust a customized parameter K that determines the size of the output set for each input token for different utility-privacy trade-offs. Using the mapping scheme, we exclude most of the tokens that are semantically irrelevant to the input token from consideration and reduce the privacy costs caused by excessive output set sizes. As the privacy risks of some tokens, e.g., stopwords, are low in practice, we further propose an improved CusText+ mechanism that skips the stopwords in the sampling process to achieve higher utility without incurring greater privacy losses. Finally, we conduct extensive experiments on three benchmark datasets to demonstrate that CusText achieves better privacy-utility trade-offs than the state-of-the-art text sanitization mechanisms in (Feyisetan et al., 2020; Yue et al., 2021). More particularly, with the same privacy parameter ϵ, the models trained on texts sanitized by CusText have significantly higher accuracy rates than those sanitized by SANTEXT (Yue et al., 2021). Furthermore, when the utilities of models are comparable, CusText provides better protection against two token inference attacks than SANTEXT. 
## 2 Related Work There have been numerous studies on the vulnerability of deep learning models (Carlini et al., 2019; Song and Raghunathan, 2020), including language models (Carlini et al., 2021; Zhao and Chen, 2022) (LMs), against privacy attacks. In particular, such attacks can recover sensitive user attributes or raw texts from trained models. Therefore, incorporating privacy mechanisms with rigorous guarantees is vital to protect LMs from privacy attacks. A few attempts at applying anonymization techniques for i.i.d. data (Li et al., 2007; Machanavajjhala et al., 2007) fail to provide strong privacy protection for textual data (Zhao and Chen, 2022). Then, many efforts (Lyu et al., 2020; Anil et al., 2022; Dupuy et al., 2022; Hessel and Schofield, 2021; Li et al., 2022; Mireshghallah et al., 2021) have been made to preserve the utility of LMs on textual data with provable differential privacy (DP) guarantees. Following the application of DP in deep learning (Abadi et al., 2016), they mainly focus on adding calibrated noise to gradients or text representations during the training phase for both utility and privacy. However, they need a trustworthy server to collect original texts from data owners for model training and thus cannot be applied to the scenario without trusted servers. To address privacy issues from the root, different (customized) local differential privacy (Duchi et al., 2013; Chatzikokolakis et al., 2013) (LDP) mechanisms have been proposed to allow data owners to sanitize their data locally before releasing them to the server. Due to the high dimensionality and complicated features of textual data, compared with statistical analytics on i.i.d. data with LDP (Murakami and Kawamoto, 2019; Nie et al., 2019), it is much more challenging to achieve good utilityprivacy trade-offs for LMs with LDP. To improve the model utility, existing methods (Feyisetan et al., 2020; Qu et al., 2021; Yue et al., 2021) rely on a relaxed notion of metric local differential privacy (Chatzikokolakis et al., 2013) (MLDP, also known as dχ-privacy) for text sanitization. However, they either achieve reasonable accuracy only at a very low privacy protection level (e.g., with a privacy parameter ϵ > 10) or become unusable (around 50% accuracy rate for the benchmark binary classification tasks) with appropriate privacy guarantees (e.g., ϵ = 2). Thus, there remains much room for improvement in terms of utility-privacy trade-off for differentially private text sanitization, which is the goal of this work. ## 3 Preliminaries Before introducing our CusText mechanism, we briefly review the key concepts, including ϵ-DP and exponential mechanism (EM). Definition 1 (ϵ-differential privacy (Dwork et al., 2006)). For a given privacy parameter ϵ ≥ 0, all pairs of adjacent inputs x, x′ ∈ X , and every possible output y ∈ Y*, a randomized mechanism* M is ϵ*-differentially private (DP) if it holds that* $${\frac{\operatorname*{Pr}[{\mathcal{M}}(x)=y]}{\operatorname*{Pr}[{\mathcal{M}}(x^{\prime})=y]}}\leq e^{\epsilon}.$$ ϵ. (1) By definition, a smaller value of ϵ corresponds to a higher level of privacy protection. Conceptually, the notion of ϵ-DP means that an unlimited adversary cannot distinguish the two probabilistic ensembles with sufficiently small ϵ because the probabilities of adjacent tokens producing the same output token y are similar. In the context of NLP, we consider any pair of input tokens that share the same output set Y to be adjacent to each other. 
In the rest of this paper, we follow the above definition of adjacent inputs for ϵ-DP. Next, we define the Exponential Mechanism (EM) commonly used for differentially private item selection from a discrete domain, which naturally fits NLP applications due to the discrete nature of textual data. Definition 2 (Exponential Mechanism (McSherry and Talwar, 2007)). *For a given scoring function* u : X × Y → R*, an exponential mechanism* (EM) M(X , u, Y) satisfies ϵ*-differential privacy* if it samples an output token y ∈ Y *to perturb the* input token x ∈ X *with probability proportional* to e ϵ·u(x,y) 2∆u , where u(x, y) denotes the score of output token y for input token x*. In addition, we use* ∆u := maxy∈Y maxx,x′∈X |u(x, y) − u(x′, y)| to denote the sensitivity of u *for EM.* From Definition 2, we can see that smaller sensitivity makes it harder for adversaries to distinguish the original token from its adjacent tokens. In practice, for simplicity, we can normalize the scoring function u to scale its sensitivity ∆u to a specific real number (e.g., 1). As such, the sampling probability of each output token y for input token x is only related to u(*x, y*), as ϵ and ∆u are known beforehand, and a larger u(*x, y*) indicates a higher sampling probability. In an NLP task, we suppose that each document D = ⟨Ri⟩ m i=1 contains m records and each record R = ⟨tj ⟩ n j=1 contains n tokens. We formulate our text sanitization task as follows: Given an input document D containing sensitive information, a set X of all possible input tokens, a set Y of all possible output tokens, and a differentially private mechanism M (e.g., EM in this work), it performs the mechanism M on each input token tj ∈ D to replace it with an output token t′j from Y if tj ∈ X . All the tokens after replacement form the sanitized document, i.e., D′ = ⟨R′i⟩ m i=1 and R′ = ⟨t′j⟩ n j=1. Following the prior work on text sanitization (Qu et al., 2021; Feyisetan et al., 2020; Yue et al., 2021), we consider a *semi-honest threat model* under the LDP setting where data owners (e.g., individuals or institutions) only submit their sanitized documents to the service provider. Malicious service providers may try to infer sensitive information from their received data. We assume that adversaries only have access to sanitized texts, and all algorithms and mechanisms are publicly known. Moreover, adversaries have unlimited computation resources. $$(1)$$ ## 4 The Custext Mechanism An overview of our customized text (CusText) sanitization mechanism is presented in Figure 2. In general, it replaces each token in the original text document with a new token to achieve the privacy guarantee. It consists of two components: (1) a mapping function fmap : X → {Y′ *⊆ Y}* that determines the output set Y′j for each input token xj ∈ X based on semantic relevance; (2) a sampling function3 fsample : X′ → Y′ based on the exponential mechanism to sample a new token from 3For any Y ′ ⊆ Y, X ′ = {x *∈ X |* fmap(x) = Y ′}. 5749 ![3_image_0.png](3_image_0.png) ![3_image_2.png](3_image_2.png) an output set to sanitize the input token. Specifically, our CusText mechanism first obtains the output set Y′j for each tj ∈ D according to fmap, i.e., Y′j = fmap(tj ), then samples an output token t′j from Y′j according to fsample, i.e., t′j = fsample(tj ). Finally, after applying CusText on each input token tj in D, the sanitized document D′is formed by all output tokens. 
## 4.1 Mapping Function In our CusText mechanism, the mapping function fmap : X → {Y′ *⊆ Y}* decides the output set for each input token. If a bunch of input tokens in X are mapped to the same output set Y′, we say that they belong to the same input set X′ ⊆ X and are adjacent to each other. For the SANTEXT mechanism (Yue et al., 2021), the function fmap : *X → Y* simply maps every input token x ∈ X to all tokens in the output set Y. Since the size of the output set is excessively large in SANTEXT, the chances that the output token is semantically irrelevant to the original token become higher if the privacy budget is tight, thus leading to poor model utility. To overcome the above problem, CusText customizes the output set of each input token. A comparison of the mapping schemes of CusText and SANTEXT is shown in Figure 3. Before introducing how to construct fmap, we first discuss the requirements for mapping generation. ![3_image_1.png](3_image_1.png) 9: Update *X ← X \ Y*′and *Y ← Y \ Y*′ 10: Perform Lines 2–9 for the remaining tokens in X and Y with customization parameter K′ = *|X |* 11: **return** fmap Mapping Strategy. According to the sizes of X′ and Y′as indicated by the mapping function fmap, we can categorize the token mappings into four types: 1-to-1, N-to-1, 1-to-N, and N-to-M, where 1, N, and M denote the size of the input/output token sets and *N, M >* 1. Theoretically, CusText can provide ϵ-differential privacy protection to all input tokens only if the mappings of all input tokens in the set X are N-to-M or N-to-1 mappings so that every input token in X has at least one adjacent token. This is because the goal of applying ϵ-DP is to make any two adjacent tokens indistinguishable so that the input token cannot be effectively inferred. Moreover, following prior work (Feyisetan et al., 2020; Yue et al., 2021), we consider that X is equal to Y (i.e., X = Y) in CusText, as they both correspond to the vocabulary of a specific language. Also, any input token x is always included in its output set because it must be the closest to itself. Next, we describe our mapping generation that can satisfy all the above requirements. Mapping Function Generation. The generation of the mapping function fmap : X → {Y′ *⊆ Y}* is to assign the customized output set for each input token based on semantic relevance. The semantic relevance can be defined by any similarity measure d : *X × Y →* R. In practice, we use the Euclidean distance or cosine similarity on the vector representations of tokens, such as Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), and Counter-Fitting (Mrksic et al., 2016) as the similarity measure. Then, we fix the sizes of all output sets to K. Specifically, we pick an arbitrary unmapped token x ∈ X , find the K tokens semantically closest to x, generate an K-to-K mapping from all the K tokens to themselves, and remove the mapped tokens from X and Y at each round until either all tokens are mapped or fewer than K tokens remain unmapped. In the latter case, the remaining tokens will constitute a K′-to-K′ mapping where K′ ∈ [1, K). The pseudocode of generating the mapping function fmap is presented in Algorithm 1. ## 4.2 Sampling Function Based on the mapping function fmap : X → {Y′ ⊆ Y}, a sampling function fsample : X′ → Y′is designed to sample the output token for each input token. CusText adopts the exponential mechanism (McSherry and Talwar, 2007) (EM) for sampling. 
We need to design an appropriate scoring function for EM to strike a good utility-privacy trade-off. We obey the following two rules when designing the scoring function u : X′ × Y′ → R. 1. The score of each pair of input and output tokens should be bounded, i.e., ∀x ∈ X ′, ∀y ∈ Y′, u(x, y) < B, so that the sensitivity ∆u of u is bounded for satisfying ϵ-DP. 2. The higher the semantic similarity between a pair of input and output tokens is, the higher the score is, i.e., ∀x ∈ X ′, ∀y, y′ ∈ Y′, if y is semantically closer to x than y′, u(*x, y*) > u(*x, y*′). This ensures the candidates semantically closer to x have higher probabilities of being sampled, which inherits the advantage of dχ-privacy (Chatzikokolakis et al., 2013). For the scoring function, we are based on the same similarity function as used in the mapping scheme, e.g., Euclidean distance or cosine similarity on the vector representations of tokens (Mikolov et al., 2013; Pennington et al., 2014; Mrksic et al., 2016). Generally, according to the correlation between scores and semantic closeness, all the similarity measures can be categorized into two types, i.e., *negative correlation* and *positive correlation*. For instance, Euclidean distance and cosine similarity are *negative* and *positive correlation* measures, respectively, as a smaller Euclidean distance and a larger cosine value between two vectors imply higher semantic closeness of their corresponding tokens. Next, we will design scoring functions for both types of similarity measures. Scoring Function for Negative Correlation Measures. We take Euclidean distance as an example to design the scoring function u : X′ × Y′ → R. For any input set X′and its corresponding output set Y′, we first compute the Euclidean distance d(*x, y*) Algorithm 2 Document Sanitization Input: Original document D = ⟨Ri⟩ m i=1, sampling function fsample, stopword list T Output: Sanitized document D ′ 1: Initialize the sanitized document D ′ = ∅ 2: **for all** record R ∈ D do 3: Initialize the sanitized record R ′ = ∅ 4: **for all** token x ∈ R do 5: if 'CusText+' is used and x ∈ T **then** 6: Append x to R ′ 7: **else** 8: x ′ ← fsample(x) and append x to R ′ 9: Add R ′to D ′ 10: **return** D ′ between each x ∈ X ′and y ∈ Y′. Specifically, ![4_image_0.png](4_image_0.png) ![4_image_1.png](4_image_1.png) we have d(*x, y*) = ∥Φ(x) − Φ(y)∥2, where Φ(x) and Φ(y) are the vector representations of x and y, respectively. Then, we normalize the distances of all pairs of tokens to the range [0, 1] as d′(*x, y*) = d(x,y)−dmin dmax−dmin , where dmin = minx∈X′,y∈Y′ d(*x, y*) and dmax = maxx∈X′,y∈Y′ d(*x, y*). Finally, we transform the normalized distance d′(*x, y*) into the score of output token y for input token x as u(*x, y*) = −d′(*x, y*). After the above transformation, a more similar pair *x, y* of input and output tokens has a higher score u(*x, y*). Finally, by repeating the above steps on all disjoint partitions of adjacent tokens with the same X′and Y′, we have obtained the scoring functions for all tokens. Scoring Function for Positive Correlation Measures. We take cosine similarity as another example to design the scoring function u. For any input set X′and its corresponding output set Y′, we also compute the cosine similarity cos(*x, y*) between each x ∈ X ′and y ∈ Y′, where cos(*x, y*) = ⟨Φ(x),Φ(y)⟩ ∥Φ(x)∥·∥Φ(y)∥ and Φ(x) and Φ(y) are the vector representations of x and y. 
Then, the normalization procedure is the same as that for Euclidean distance, but we use the normalized distance, instead of its additive inverse, in the scoring function, i.e., $u(x, y) = \frac{d(x,y) - d_{\min}}{d_{\max} - d_{\min}}$. Finally, we repeat the above steps on all disjoint partitions of adjacent tokens to obtain all scoring functions.

Sampling Procedure. After acquiring the scoring function u for each input token x, the sampling function fsample is used to generate the sanitized token x′ for x based on the exponential mechanism M({x}, u, Y′) with a privacy parameter ϵ > 0. The pseudocode of sanitizing a document based on fsample is provided in Algorithm 2. Theoretically, it guarantees that fsample satisfies ϵ-DP. For any input set X′ and its corresponding output set Y′, the sensitivity ∆u between any two adjacent input tokens x, x′ ∈ X′ is bounded by 1 according to the design of the scoring function u, i.e.,

$$\Delta u=\max_{y\in\mathcal{Y}^{\prime}}\max_{x,x^{\prime}\in\mathcal{X}^{\prime}}|u(x,y)-u(x^{\prime},y)|=1.$$

Given a privacy parameter ϵ > 0, the probability of obtaining an output token y ∈ Y′ for an input token x ∈ X′ is as follows:

$$\operatorname*{Pr}[f_{\mathrm{sample}}(x)=y]={\frac{\exp({\frac{\epsilon u(x,y)}{2\Delta u}})}{\sum_{y^{\prime}\in\mathcal{Y}^{\prime}}\exp({\frac{\epsilon u(x,y^{\prime})}{2\Delta u}})}}.$$

We can prove that the sampling function fsample satisfies ϵ-DP because, for any two input tokens x, x′ ∈ X′ and output token y ∈ Y′, it holds that

$$\frac{\Pr[f_{\mathrm{sample}}(x)=y]}{\Pr[f_{\mathrm{sample}}(x^{\prime})=y]}=\frac{\exp\big(\frac{\epsilon u(x,y)}{2\Delta u}\big)\big/\sum_{y^{\prime}\in\mathcal{Y}^{\prime}}\exp\big(\frac{\epsilon u(x,y^{\prime})}{2\Delta u}\big)}{\exp\big(\frac{\epsilon u(x^{\prime},y)}{2\Delta u}\big)\big/\sum_{y^{\prime}\in\mathcal{Y}^{\prime}}\exp\big(\frac{\epsilon u(x^{\prime},y^{\prime})}{2\Delta u}\big)}=e^{\frac{\epsilon\cdot(u(x,y)-u(x^{\prime},y))}{2\Delta u}}\cdot\frac{\sum_{y^{\prime}\in\mathcal{Y}^{\prime}}\exp\big(\frac{\epsilon u(x^{\prime},y^{\prime})}{2\Delta u}\big)}{\sum_{y^{\prime}\in\mathcal{Y}^{\prime}}\exp\big(\frac{\epsilon u(x,y^{\prime})}{2\Delta u}\big)}\leq e^{\frac{\epsilon}{2}}\cdot e^{\frac{\epsilon}{2}}\cdot\frac{\sum_{y^{\prime}\in\mathcal{Y}^{\prime}}\exp\big(\frac{\epsilon u(x,y^{\prime})}{2\Delta u}\big)}{\sum_{y^{\prime}\in\mathcal{Y}^{\prime}}\exp\big(\frac{\epsilon u(x,y^{\prime})}{2\Delta u}\big)}=e^{\epsilon}.$$

## 4.3 The CusText+ Mechanism

Since not all tokens contain sensitive information, our CusText mechanism that replaces all tokens might be over-protective. Therefore, we can retain non-sensitive original tokens with low privacy risk (e.g., stopwords) to improve the utility of the sanitized text. In practice, we have a predefined list of stopwords T (e.g., the collection of stopwords in the NLTK library), check whether each token x is included in T, and keep x in the sanitized document if x ∈ T or replace x with x′ = fsample(x) otherwise. The above procedure is called the CusText+ mechanism and is also described in Algorithm 2.

## 5 Experiments

## 5.1 Experimental Setup

Following (Feyisetan et al., 2020; Yue et al., 2021), we choose two datasets from the GLUE benchmark (Wang et al., 2019) and one medical dataset MedSTS (Wang et al., 2020), which all contain sensitive information, in our experiments. Detailed descriptions of the three datasets are as follows:

- **SST-2** is a popular movie reviews dataset with 67k training samples and 1.8k test samples for sentiment classification, where *accuracy* is used as the evaluation metric.
- **MedSTS** is a medical dataset with 1,642 training samples and 412 test samples for semantic similarity computation, where *Pearson correlation coefficient* is used for evaluation.
- **QNLI** is a sentence dataset with 105k training samples and 5.2k test samples for sentence-pair classification, where *accuracy* is used as the evaluation metric.

In the experiments, we compare CusText with two existing text sanitization mechanisms, i.e., FBDD (Feyisetan et al., 2020) and SANTEXT (Yue et al., 2021).
In the training phase, we apply each mechanism to sanitize the training data and then use the sanitized documents to fine-tune the pre-trained model. In the evaluation phase, we sanitize the test data with the same mechanism as used for training. When producing the sanitized documents, both the input set X and output set Y are set to the vocabulary of Counter-Fitting (Mrksic et al., 2016) (of size 65,713), and out-of-vocabulary (OOV) tokens except numbers are retained. For a fair comparison, we adopt the same vocabulary in GloVe (Pennington et al., 2014) as in Counter-Fitting. Euclidean distance and cosine similarity are used as the similarity measures for GloVe and Counter-Fitting, respectively. We use the stopword list in NLTK for CusText+. For each downstream task, we set the maximum sequence length to 128 and the number of training epochs to 3. On the SST-2 and QNLI datasets, we set the batch size to 64 and the learning rate to 2 × 10−5 using *bert-base-uncased*4 as the pre-trained model. On the MedSTS dataset, we set the batch size to 8 and the learning rate to 5 × 10−5 using *ClinicalBERT* (Alsentzer et al., 2019) as the pre-trained model. Other hyperparameters are the same as those used in the default Transformer model (Wolf et al., 2020). All experiments were conducted on a server with two Intel Xeon Silver 4210R 2.40GHz CPUs and one NVIDIA Tesla V100 SXM2 (32GB).

4https://huggingface.co/bert-base-uncased

## 5.2 Experimental Results

Comparison of Different Mechanisms for Text Sanitization. In this experiment, we fix the customization parameter K to 20 in CusText and CusText+ and vary the privacy parameter ϵ = 1, 2, 3 for DP. The evaluation of the effect of K on the performance of CusText will be presented later. Furthermore, we choose GloVe as the token embedding in CusText and CusText+ for a fair comparison, since FBDD, SANTEXT, and SANTEXT+ cannot use the Counter-Fitting embedding. This is because they only work with metric distances (e.g., Euclidean distance in GloVe) due to the inherent limitation of MLDP and thus cannot handle the non-metric cosine similarity in Counter-Fitting. Finally, because a mechanism is ϵ-DP if it is ϵ′-MLDP (Chatzikokolakis et al., 2013), where ϵ = ϵ′·dmax and dmax = maxx∈X,y∈Y d(x, y), we re-scale the privacy parameter ϵ in FBDD, SANTEXT, and SANTEXT+ with dmax to align their privacy levels with those of our mechanisms.

Table 1 presents the utilities of different text sanitization mechanisms with ϵ-DP (ϵ = 1, 2, 3) on the three datasets. The results demonstrate the clear advantages of CusText over the two existing mechanisms, FBDD and SANTEXT: it achieves over 20% improvements in accuracy on the SST-2 and QNLI datasets and more than 50% improvement in Pearson correlation coefficient on the MedSTS dataset. Compared with SANTEXT and CusText, their improved versions, SANTEXT+ and CusText+, exhibit significantly better performance because they keep some original tokens and thereby preserve more of the original semantics. Overall, the results indicate the superior performance of CusText and show that using a customized, smaller output set for each input token can lead to better utility at similar (theoretical) privacy levels.

Privacy-Utility Trade-off. Subsequently, we compare SANTEXT and CusText in terms of privacy-utility trade-offs.
As shown in (Yue et al., 2021) as well as in our previous results, FBDD performs worse than SANTEXT and CusText and is therefore excluded from the remaining experiments. To alleviate the effects of the different DP definitions in SANTEXT and CusText, we do not use the privacy parameter ϵ, which corresponds to the worst possible privacy leakage but may not reveal the privacy protection level in practice. Instead, we adopt two privacy attacks to evaluate the privacy protection levels: one is the *Mask Token Inference Attack* in (Yue et al., 2021), and the other is the *Query Attack* proposed in this work.

We first present the results for mask token inference attacks. To recover raw texts from sanitized texts, an adversary can use the pre-trained BERT model to help infer the original tokens since it is trained via masked language modeling. It sequentially replaces each token in the sanitized text with the special token "[MASK]", inputs the masked text to BERT, and takes the predicted output for "[MASK]" as the original token. We consider the attack successful if the output token is the same as the input token. Finally, we compute the success rate over all attacks, denoted as rmask, and define the privacy protection level as 1 − rmask. Figure 4 illustrates the privacy-utility trade-offs of CusText (based on GloVe and Counter-Fitting, respectively) and SANTEXT (based on GloVe) when varying the value of ϵ on the SST-2 dataset. The results confirm that CusText achieves better utility-privacy trade-offs than SANTEXT and retains relatively good utility (accuracy at around 0.7) when the privacy level approaches 1 (over 0.98). In comparison, SANTEXT degenerates to a random classifier (accuracy at around 0.5). Meanwhile, the results also imply that Counter-Fitting works better with CusText than GloVe. The higher performance of Counter-Fitting can be attributed to its better representations of synonyms.

We then describe the results for query attacks. Since the input token is contained in its corresponding output set and always has the highest score, the probability that it is sampled by fsample is also the highest among all output tokens. An adversary can determine the input token by querying the data owner for the sanitized document multiple times, as the input token will have the highest frequency among all output tokens after a sufficiently large number of queries. Thus, we use the smallest number N of queries an adversary needs to infer the input token at a confidence level of 95% as a new measure of the privacy protection level. Here, the larger the value of N, the higher the level of privacy protection. In the experiment, we obtain the value of N by using the Monte Carlo method (Gentle, 2009) to sample output tokens until the confidence level of determining the input token from the output distribution reaches 95%. Table 2 further confirms that CusText achieves better privacy-utility trade-offs than SANTEXT. Although SANTEXT achieves good utility when ϵ′ = 3 (i.e., with 3-MLDP), it provides almost no privacy protection, as input tokens can be inferred with only a few queries. CusText (with either GloVe or Counter-Fitting) maintains relatively good privacy protection levels when ϵ = 3 (i.e., with 3-DP) while achieving high utility. Generally, Counter-Fitting also outperforms GloVe for CusText.
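For reference, the mask token inference attack described above can be approximated with the Hugging Face fill-mask pipeline. The sketch below is illustrative rather than the authors' evaluation code; it assumes parallel lists of original and sanitized token sequences and, following the description above, counts an attack as successful when BERT's top prediction for a masked position matches the original token.

```python
from transformers import pipeline

def mask_inference_privacy(original_records, sanitized_records,
                           model_name="bert-base-uncased"):
    """Estimate the privacy protection level 1 - r_mask via the mask token inference attack.

    original_records / sanitized_records: parallel lists of token lists.
    """
    unmasker = pipeline("fill-mask", model=model_name)
    hits, total = 0, 0
    for orig, sane in zip(original_records, sanitized_records):
        for i, target in enumerate(orig):
            masked = sane[:i] + ["[MASK]"] + sane[i + 1:]
            guess = unmasker(" ".join(masked), top_k=1)[0]["token_str"]
            hits += int(guess.strip().lower() == target.lower())
            total += 1
    return 1.0 - hits / max(total, 1)
```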
But the privacy protections for different tokens vary very much for Counter-Fitting: "she" and "alice" are more vulnerable than "car" and "happy". This is because "she" and "alice" are ![7_image_1.png](7_image_1.png) mapped with semantically less relevant tokens than themselves in the mapping function generation. Effect of K **on CusText.** To test the effect of K on CusText in practice, we study the privacy-utility trade-offs with different customization parameters K = 5, 20, 50 on the SST-2 dataset. We choose the mask token inference attack as the privacy metric since its performance is more semantically related. Then, we use Counter-Fitting for its better performance than GloVe, as depicted previously. The results for different K's are presented in Figure 5. We observe that the performance of CusText is generally stable for different K's. But it achieves slightly better utilities when K is smaller at relatively higher privacy protection levels (> 0.9). This is because, on the one hand, the semantic similarity of output tokens to the input token will be higher when K is smaller. However, on the other hand, a smaller K will also make it easier to infer the input token, thus lowering the privacy protection levels (e.g., for K = 5, it does not exceed 0.96 even when ϵ has been decreased to 0.001). ## 6 Concluding Remarks In this work, we study the problem of differentially private text sanitization. We propose a novel CusText mechanism consisting of a mapping scheme to assign each input token a customized output set and sampling function generation methods based on the mapping scheme and exponential mechanism to reduce privacy costs while improving the utilities of sanitized texts. Extensive experiments demonstrate that CusText achieves better privacyutility trade-offs than state-of-the-art text sanitization mechanisms. In the future, we will explore how to improve our mechanism by adaptively allocating privacy costs across tokens and find a better way to decide whether a token is sensitive than based on a pre-defined stopword list. ## Acknowledgements This work was supported by the National Natural Science Foundation of China (under Grant numbers 62202170, 62202169) and Alibaba Group through the Alibaba Innovation Research Program. ## Limitations First, as indicated in Table 2, different tokens are not equally vulnerable to privacy attacks. As such, assigning every token with the same output size K and privacy parameter ϵ might not be an ideal choice. An improved method would be to adaptively allocate privacy costs across tokens so that all of them are adequately protected. Second, we adopt two simple strategies to decide whether a token is sensitive: assuming all tokens are sensitive or based on a pre-defined stopword list. However, the prior might be over-protective, but the latter can lead to privacy leakage since stopwords might help infer other sanitized tokens. Therefore, a more flexible and practical way to decide the sensitivity of tokens is required. ## References Martín Abadi, Andy Chu, Ian J. Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep learning with differential privacy. In CCS, pages 308–318. Emily Alsentzer, John Murphy, William Boag, WeiHung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72–78. Rohan Anil, Badih Ghazi, Vineet Gupta, Ravi Kumar, and Pasin Manurangsi. 2022. 
Large-scale differentially private BERT. In *EMNLP (Findings)*, pages 6481–6491. Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. 2019. The secret sharer: Evaluating and testing unintended memorization in neural networks. In *USENIX Security Symposium*, pages 267–284. Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel. 2021. Extracting training data from large language models. In *USENIX Security Symposium*, pages 2633–2650. Konstantinos Chatzikokolakis, Miguel E. Andrés, Nicolás Emilio Bordenabe, and Catuscia Palamidessi. 2013. Broadening the scope of differential privacy using metrics. In *Privacy Enhancing Technologies* (PETS), pages 82–102. John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. 2013. Local privacy and statistical minimax rates. In *FOCS*, pages 429–438. Christophe Dupuy, Radhika Arava, Rahul Gupta, and Anna Rumshisky. 2022. An efficient DP-SGD mechanism for large scale NLU models. In *ICASSP*, pages 4118–4122. Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam D. Smith. 2006. Calibrating noise to sensitivity in private data analysis. In *Theory of Cryptography (TCC)*, pages 265–284. Oluwaseyi Feyisetan, Borja Balle, Thomas Drake, and Tom Diethe. 2020. Privacy- and utility-preserving textual analysis via calibrated multivariate perturbations. In *WSDM*, pages 178–186. Oluwaseyi Feyisetan, Tom Diethe, and Thomas Drake. 2019. Leveraging hierarchical representations for preserving privacy and utility in text. In *ICDM*, pages 210–219. James E. Gentle. 2009. Monte Carlo methods for statistical inference. In *Computational Statistics*, pages 417–433. Springer. Jack Hessel and Alexandra Schofield. 2021. How effective is BERT without word ordering? Implications for language understanding and data privacy. In ACL/IJCNLP (Short Papers), pages 204–211. Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick MurraySmith, and Sotirios A. Tsaftaris. 2021. Survey: Leakage and privacy at inference time. *arXiv:2107.01614*. Ninghui Li, Tiancheng Li, and Suresh Venkatasubramanian. 2007. t-closeness: Privacy beyond k-anonymity and l-diversity. In *ICDE*, pages 106–115. Xuechen Li, Florian Tramèr, Percy Liang, and Tatsunori Hashimoto. 2022. Large language models can be strong differentially private learners. In *ICLR*. Lingjuan Lyu, Xuanli He, and Yitong Li. 2020. Differentially private representation for NLP: Formal guarantee and an empirical study on privacy and fairness. In *EMNLP (Findings)*, pages 2355–2365. Ashwin Machanavajjhala, Daniel Kifer, Johannes Gehrke, and Muthuramakrishnan Venkitasubramaniam. 2007. L-diversity: Privacy beyond kanonymity. *ACM Trans. Knowl. Discov. Data*, 1(1):3:1–3:52. Frank McSherry and Kunal Talwar. 2007. Mechanism design via differential privacy. In *FOCS*, pages 94– 103. Tomás Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. *arXiv:1301.3781*. Fatemehsadat Mireshghallah, Huseyin A. Inan, Marcello Hasegawa, Victor Rühle, Taylor BergKirkpatrick, and Robert Sim. 2021. Privacy regularization: Joint privacy-utility optimization in LanguageModels. In *NAACL-HLT*, pages 3799–3807. Nikola Mrksic, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gasic, Lina Maria Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve J. Young. 2016. 
Counter-fitting word vectors to linguistic constraints. In *NAACL-HLT*, pages 142–148. Takao Murakami and Yusuke Kawamoto. 2019. Utilityoptimized local differential privacy mechanisms for distribution estimation. In *USENIX Security Symposium*, pages 1877–1894. Yiwen Nie, Wei Yang, Liusheng Huang, Xike Xie, Zhenhua Zhao, and Shaowei Wang. 2019. A utilityoptimized framework for personalized private histogram estimation. *IEEE Trans. Knowl. Data Eng.*, 31(4):655–669. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In *EMNLP*, pages 1532–1543. Chen Qu, Weize Kong, Liu Yang, Mingyang Zhang, Michael Bendersky, and Marc Najork. 2021. Natural language understanding with privacy-preserving BERT. In *CIKM*, pages 1488–1497. Gerard Salton and Chris Buckley. 1988. Termweighting approaches in automatic text retrieval. Inf. Process. Manag., 24(5):513–523. Congzheng Song and Ananth Raghunathan. 2020. Information leakage in embedding models. In CCS, pages 377–390. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In *ICLR*. Yanshan Wang, Naveed Afzal, Sunyang Fu, Liwei Wang, Feichen Shen, Majid Rastegar-Mojarad, and Hongfang Liu. 2020. MedSTS: a resource for clinical semantic textual similarity. *Lang. Resour. Eval.*, 54(1):57–72. Thomas Wolf, Lysandre Debut, et al. 2020. Transformers: State-of-the-art natural language processing. In EMNLP (Demos), pages 38–45. Xiang Yue, Minxin Du, Tianhao Wang, Yaliang Li, Huan Sun, and Sherman S. M. Chow. 2021. Differential privacy for text analytics via natural text sanitization. In *ACL/IJCNLP (Findings)*, pages 3853– 3866. Ying Zhao and Jinjun Chen. 2022. A survey on differential privacy for unstructured data content. ACM Comput. Surv., 54(10s):207:1–207:28. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: A1. Did you describe the limitations of your work? Left blank. A2. Did you discuss any potential risks of your work? Left blank. A3. Do the abstract and introduction summarize the paper's main claims? Left blank. A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? 
Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Left blank. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
lu-etal-2023-labo
{LABO}: Towards Learning Optimal Label Regularization via Bi-level Optimization
https://aclanthology.org/2023.findings-acl.356
Regularization techniques are crucial to improving the generalization performance and training efficiency of deep neural networks. Many deep learning algorithms rely on weight decay, dropout, batch/layer normalization to converge faster and generalize. Label Smoothing (LS) is another simple, versatile and efficient regularization which can be applied to various supervised classification tasks. Conventional LS, however, regardless of the training instance assumes that each non-target class is equally likely. In this work, we present a general framework for training with label regularization, which includes conventional LS but can also model instance-specific variants. Based on this formulation, we propose an efficient way of learning LAbel regularization by devising a Bi-level Optimization (LABO) problem. We derive a deterministic and interpretable solution of the inner loop as the optimal label smoothing without the need to store the parameters or the output of a trained model. Finally, we conduct extensive experiments and demonstrate our LABO consistently yields improvement over conventional label regularization on various fields, including seven machine translation and three image classification tasks across various neural network architectures while maintaining training efficiency.
# Labo: Towards Learning Optimal Label Regularization Via Bi-Level Optimization Peng Lu1,2, Ahmad Rashid1,3**, Ivan Kobyzev**1 Mehdi Rezagholizadeh1**, Philippe Langlais**2 1Huawei Noah's Ark Lab, Canada 2 Department of Computer Science and Operations Research, Université de Montréal 3 Department of Statistics and Actuarial Science, University of Waterloo [email protected] ## Abstract Regularization techniques are crucial to improving the generalization performance and training efficiency of deep neural networks. Many deep learning algorithms rely on weight decay, dropout, batch/layer normalization to converge faster and generalize. Label Smoothing (LS) is another simple, versatile and efficient regularization which can be applied to various supervised classification tasks. Conventional LS, however, regardless of the training instance assumes that each non-target class is equally likely. In this work, we present a general framework for training with label regularization, which includes conventional LS but can also model instance-specific variants. Based on this formulation, we propose an efficient way of learning LAbel regularization by devising a Bi-level Optimization (LABO) problem. We derive a deterministic and interpretable solution of the inner loop as the optimal label smoothing without the need to store the parameters or the output of a trained model. Finally, we conduct extensive experiments and demonstrate our LABO consistently yields improvement over conventional label regularization on various fields, including seven machine translation and three image classification tasks across various neural network architectures while maintaining training efficiency. ## 1 Introduction Deep neural networks (DNNs) form the backbone of current state-of-the-art algorithms in various fields including natural language processing (Vaswani et al., 2017), computer vision (He et al., 2016; Dosovitskiy et al., 2021) and speech recognition (Schneider et al., 2019; Chen et al., 2021). However, heavily overparameterized models may incur overfitting and suffer from poor generalizations (Goodfellow et al., 2016). To address the issue, many regularization techniques have been developed in the literature: weight decay which constrains the optimization space (Krogh and Hertz, 1991), batch or layer normalization which speeds up the training of feed-forward NNs (Ioffe and Szegedy, 2015; Ba et al., 2016), and dropout which implicitly approximates the effect of averaging the predictions of all sparse subnetworks networks (Srivastava et al., 2014). Label smoothing (LS) is another simple regularization technique; it is widely applied to many applications including image classification (Szegedy et al., 2016) and token-level sequence generation (Pereyra et al., 2017) for enhancing the generalization, without suffering additional computational costs. It encourages a model to treat each non-target class as equally likely for classification by using a uniform distribution to smooth one-hot labels (He et al., 2019a; Vaswani et al., 2017). Although combining the uniform distribution with the original one-hot label is beneficial for regularization, conventional LS does not take into account the true relations between different label categories. More specifically, for token-level generation, uniformly allocating the probability mass on non-target words disregards the semantic relationship between non-target words and the context. On the other hand, the probability of the target is predefined and unchanged. 
However, the distribution of natural language exhibits remarkable variations in the per-token perplexity (Holtzman et al., 2020), which encourages us to adapt corresponding target probabilities for different contexts. One of the instance-dependent techniques of learning the relation between different target categories is Knowledge Distillation (KD) (Hinton et al., 2015; Bucilua et al. ˇ , 2006), which is a popular technique of transfer learning utilizing knowledge from related tasks or teacher models (Caruana, 1997; Pan and Yang, 2010; Lu et al., 2019). It is widely applied for model compression and ensembling across applications ranging from computer vision (He et al., 2019b; Xu et al., 2020; Lu et al., 2021) to natural language processing (Jiao et al., 2020; Sanh et al., 2019). However, KD requires 5759 training a separate model as a teacher for every new task. Besides, it either introduces an extra inference pass to get the teacher's prediction for each instance during the training or requires saving the teacher's prediction for all training samples to avoid the extra forward pass of teacher models. This greatly increases the time and space complexity in practice. Especially for token-level generation tasks, e.g. machine translation, to save all the output probabilities of teacher models costs O(NLV ) space, where N is the number of sequences, L is the averaged length of all sequences and V is the vocabulary size. Besides the empirical success of KD, it is unclear how student networks benefit from these smoothed labels. A series of investigations have looked at regularization and have demonstrated that the success of both KD and label smoothing is due to a similar regularization effect of smoothed targets (Yuan et al., 2020; Zhang and Sabuncu, 2020). Based on this finding and the low training overhead of LS, there is a significant interest, in the community, in algorithms that can enhance conventional LS. (Zhang and Sabuncu, 2020) demonstrates the importance of an instance specific LS regularization. They demonstrate better performance compared to LS, but use a trained model to infer prior knowledge on the label space and thereby sacrifice some of the efficiency of LS. In this work, we first revisit the conventional LS and generalize it to an instance-dependent label regularization framework with a constraint on overconfidence. Within this framework, we demonstrate that both LS and KD can be interpreted as instances of a smoothing distribution with a confidence penalty. Finally, we propose to learn the optimal smoothing function along with the model training, by devising a bi-level optimization problem. We solve the inner problem by giving a deterministic and interpretable solution and applying gradient-based optimization to solve the outer optimization. Our contributions can be summarized as follows: - We explicitly formulate a unified label regularization framework, in which the regularization distribution can be efficiently learnt along with model training by considering it as a bi-level optimization problem. - We derive a closed-form solution to solve the inner loop of the bi-level optimization, which not only improves efficiency but makes the algorithm interpretable. - We conducted extensive experiments on Machine Translation, IWSLT'14 (DE-EN, ENDE, EN-FR, FR-EN), WMT'14 (EN-DE, DEEN), IWLST'17 ({DE,FR}-EN), image classification (CIFAR10, CIFAR100 and ImageNet) and show that our method outperforms label smoothing consistently while maintaining the training efficiency. 
## 2 Background

We provide a brief overview of existing label regularization methods.

Label Smoothing. LS is a regularization technique that improves the generalization performance of neural networks by preventing the model from predicting the training examples overconfidently. It smoothes a one-hot target label with the uniform distribution U(·) = 1/K, where K is the number of classes. As a result, LS training is equivalent to training with a smoothed label PˆU , where:

$${\hat{P}}_{U}(j|x)={\begin{cases}1-\alpha+\alpha U(j),&j=k\\ \alpha U(j),&j\neq k\end{cases}},\quad(1)$$

and k is the ground-truth class. We note from Eq. 1 that a higher α leads to a smoother label distribution and a lower α to a *peakier* label.

Confidence Penalty. Another technique similar to label smoothing is the confidence penalty (CP) (Pereyra et al., 2017), in which a regularization term H(pθ(x)) is introduced into the objective function to directly penalize overconfident predictions:

$$\mathbb{H}(q(x),p_{\theta}(x))-\beta\mathbb{H}(p_{\theta}(x)),\qquad(2)$$

where q(x) is the one-hot label, pθ(x) is the model's output distribution, H(q(x), pθ(x)) is the cross-entropy loss between the labels and the student's output, H(pθ(x)) is the entropy of the output, and β controls the strength of the confidence penalty.

Knowledge Distillation. Given access to a trained teacher model, assume that we want to train a student. Denote by PT (x) and pθ(x) the teacher's and student's predictions, respectively. For a classification problem, the total KD loss is defined as:

$$(1-\alpha)\mathbb{H}(q(x),p_{\theta}(x))+\alpha\,\mathbb{KL}(P_{T}(x),p_{\theta}(x)),\qquad(3)$$

where KL is the Kullback–Leibler (KL) divergence and α is the scaling parameter. Note that we assume a temperature of 1 and omit it without loss of generality. KD training is equivalent to training with a smoothed label PˆT (x), where:

$$\hat{P}_{T}(j|x)=\begin{cases}1-\alpha+\alpha P_{T}(j|x),&j=k\\ \alpha P_{T}(j|x),&j\neq k\end{cases},\tag{4}$$

where k is the ground-truth class. Both LS and KD amount to training a model with a smoothed label. KD tends to perform better, as the teacher's predictions over non-target classes can capture the similarities between different classes (Müller et al., 2019; Shen et al., 2021). However, LS and CP are more efficient for training, since they do not require training a separate teacher network for every new task.

## 3 Methodology

In this section, we interpret our optimal label regularization from the bi-level optimization perspective. First, we take a close look at conventional uniform label regularization and show that the generalized framework bridges the objectives of conventional LS and KD. Then, we introduce a closed-form solution to our optimal instance-dependent label smoothing and describe an online implementation under this formulation.

## 3.1 Generalized Label Regularization: A Close Look

Suppose that we have a K-class classification task to be learned by a neural network Sθ(·). Given a training set of {xi, yi} N i=1 samples, where xi is a data sample and yi is the ground-truth label, the model Sθ(·) outputs the probability pθ(j|xi) of each class j ∈ {1, . . . , K}:

$$p_{\theta}(j|x_{i})=\frac{\exp(z_{i,j})}{\sum_{j^{\prime}=1}^{K}\exp(z_{i,j^{\prime}})},\tag{5}$$

where $z_{i,j}=[S_{\theta}(x_{i})]_{j}$ is the logit for the $j$-th class of input xi.
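To make the constructions reviewed in Section 2 concrete, the following sketch (written for PyTorch, with illustrative function names, not the authors' code) builds the smoothed labels of Eqs. 1 and 4, uniform smoothing when no prior is given and KD-style smoothing when a teacher distribution is passed, together with the confidence-penalty objective of Eq. 2.

```python
import torch
import torch.nn.functional as F

def smoothed_targets(onehot, alpha, prior=None):
    """Eqs. 1 and 4: mix a one-hot label with a smoothing distribution.

    onehot: (batch, K) one-hot labels.
    prior:  (batch, K) smoothing distribution; None means uniform (classic LS),
            while a teacher's softmax output gives the KD-style smoothed label.
    """
    K = onehot.size(-1)
    prior = torch.full_like(onehot, 1.0 / K) if prior is None else prior
    return (1.0 - alpha) * onehot + alpha * prior

def confidence_penalty_loss(logits, onehot, beta):
    """Eq. 2: cross-entropy minus beta times the entropy of the prediction."""
    log_p = F.log_softmax(logits, dim=-1)
    ce = -(onehot * log_p).sum(dim=-1)
    entropy = -(log_p.exp() * log_p).sum(dim=-1)
    return (ce - beta * entropy).mean()
```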
A general label smoothing can be formally written as: $$\hat{P}_{l s}(j|x_{i})=\begin{cases}1-\alpha+\alpha\cdot P_{l s}(j|x_{i}),&j=k\\ \alpha\cdot P_{l s}(j|x_{i}),&j\neq k\end{cases},\tag{6}$$ where Pˆls is a smoothed label, Pls is a smoothing distribution, k is the ground-truth class of xi, and α is a hyperparameter that determines the amount of smoothing. If α = 0, we obtain the original onehot encoded label. For the original label smoothing method, the probability Pls(·|x) is independent on the sample x and is taken to be the uniform distribution Pls(j) = U(j), However, the training can benefit from instance-dependent regularization. (Yuan et al., 2020; Zhang and Sabuncu, 2020). In this work, we consider the general form of LS Pˆls(·|xi) which is instance-dependent and not necessarily uniform. Let us consider the Cross Entropy (CE) loss with the smoothed labels: $$\mathcal{L}_{\theta}(P_{l s})=\frac{1}{N}\sum_{i=1}^{N}\left[-\sum_{j=1}^{K}\hat{P}_{l s}(j|x_{i})\log p_{\theta}(j|x_{i})\right]$$ $$=\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{\hat{P}_{l s}}\left[-\log p_{\theta}(j|x_{i})\right].\qquad\qquad(7)$$ Note that computing the weighted sum of the negative log-likelihood of each probability of label can be viewed as taking expectation of the negative log-likelihood over label space under a certain distribution Pˆls. We modify this loss by adding a KL divergence term KL(Pls(·|xi)∥U(·)) into Eq. 7 which encourages the sample-wise smoothing distribution Pls(·|xi) to be close to the uniform distribution to handle the over-confidence. $$\mathcal{R}_{\theta}(P_{l s})=\frac{1}{N}\sum_{i=1}^{N}\Big{[}\mathbb{E}_{\hat{P}_{l s}}\left[-\log p_{\theta}(j|x_{i})\right]+\tag{8}$$ $$\beta\mathbb{KL}(P_{l s}(\cdot|x_{i})\|U(\cdot))\Big{]},$$ where β is a hyper-parameter. The instancedependent smoothing Pls(·|xi) can be viewed as the prior knowledge over the label space for a sample xi. This KL divergence term can be understood as a measure of the 'smoothness' or 'overconfidence' of each training label. Specifically for token-level generation tasks, the over-confidence of model results in the output of repetitive or most frequent but irrelevant text, which is detrimental to the generalization of the model (Chorowski and Jaitly, 2017; Holtzman et al., 2020; Meister et al., 2020a). We choose a Uniform distribution to constrain Pls in the KL term because we would like Pls to contain more information on non-target classes. It plays a role similar to the temperature in KD ![3_image_0.png](3_image_0.png) and controls the sharpness of the distribution Pls. On the one hand side, in case of KD, it regulates the amount of prior knowledge we inject in the smoothed label. On the other hand side, it reduces the overconfidence of the trained network by increasing the entropy of smoothed labels (a phenomenon studied by Zhang and Sabuncu (2020)). Next, we show the post-hoc interpretability of this formulation. The following two Remarks discuss the relationship of this objective with conventional LS, confidence penalty and KD. Remark 1. When Pls(·|xi) is taken to be the uniform distribution U(·) for any xi, the objective in Eq. 8 reduces back to the one in Eq. 7 *since* KLU(·)∥U(·) = 0. Remark 2. This framework can include KD with temperature τ = 1 *as a special case. Suppose* PT (·|xi) *to be the output probability of a teacher* model T(·) for any xi, the objective of KD can be rewritten as an expectation of negative loglikelihood w.r.t. 
a transformed distribution PˆT plus a KL term between the teacher output and the uniform distribution:

$${\mathcal R}_{\theta}(P_{T})=\frac{1}{N}\sum_{i=1}^{N}\Big[\mathbb{E}_{\hat{P}_{T}}\left[-\log p_{\theta}(j|x_{i})\right]+\alpha\,\mathbb{KL}(P_{T}(\cdot|x_{i})\|U(\cdot))\Big]+\Theta(\log K),\tag{9}$$

where PˆT (j|xi) = 1 − α + α · PT (k|xi) for j = k the ground truth, and PˆT (j|xi) = α · PT (j|xi) for j ̸= k. The objective in Eq. 8 therefore bridges the objectives of label smoothing and knowledge distillation, since our objective can be converted to KD or LS by replacing Pls(·|xi) in Eq. 8 with PT (·|xi) or U(·), respectively. There is an inherent relation among these methods: KD and LS, in particular, can be interpreted as instances of our method with a fixed smoothing distribution, which is consistent with recent work (Yuan et al., 2020; Zhang and Sabuncu, 2020).

## 3.2 Learning Label Regularization Via Bi-Level Optimization

The choice of the smoothing distribution Pls determines the label regularization method. As described above, in KD the smoothing distribution is the output of a pre-trained teacher model, and in LS it is the uniform distribution. Generally speaking, the optimal smoothing distribution is unknown, and ideally we would like to learn the optimal Pls. In this regard, we set up the following two-stage optimization problem:

$$\min_{\theta}{\cal R}(P_{ls}^{*}(\theta),\theta),\quad\text{subject to}\quad P_{ls}^{*}(\theta)=\arg\min_{P_{ls}}{\cal R}_{\theta}(P_{ls}).\tag{10}$$

This optimization setting, also called bi-level optimization (Colson et al., 2007), is strongly NP-hard (Jeroslow, 1985), so obtaining an exact solution is difficult. To solve this problem in our case, we first show that the inner loop is a convex optimization problem by computing the Hessian matrix Hi of Rθ(Pls) w.r.t. Pls(·|xi) for each training instance xi:

$$\mathbf{H}_{i}=\mathrm{diag}({\frac{\beta}{P_{l s}(1)}},{\frac{\beta}{P_{l s}(2)}},\cdots,{\frac{\beta}{P_{l s}(K)}})\quad(11)$$

When β is greater than zero, the Hessian is positive definite. Therefore, for the inner optimization loop we can derive a closed-form solution by using a Lagrange multiplier. Theorem 1 below gives the explicit form of P∗ls and hence of Rθ(P∗ls). For the details of this derivation, please refer to Appendix A.

Theorem 1. The solution to the inner loop of the optimization in Eq. 10 is given by:

$$P_{ls}^{*}(j|x_{i})=\frac{p_{\theta}(j|x_{i})^{\frac{\alpha}{\beta}}}{\sum_{j^{\prime}=1}^{K}p_{\theta}(j^{\prime}|x_{i})^{\frac{\alpha}{\beta}}},\tag{12}$$

where pθ(j|xi) is the output probability of the j-th class of model Sθ(·), α is the smoothing coefficient defined in Eq. 6, and β is defined in Eq. 8.

As a result, we reduce the two-stage optimization problem in Eq. 10 to a regular single-stage minimization:

$$\operatorname*{min}_{\theta}\frac{1}{N}\sum_{i=1}^{N}{\mathcal{R}}_{\theta}(P_{ls}^{*}(\cdot|x_{i})),\tag{13}$$

where

$$\mathcal{R}_{\theta}(P^{*}_{ls}(\cdot|x_{i}))=\sum_{j=1}^{K}\Big[-\hat{P}^{*}_{ls}(j|x_{i})\log p_{\theta}(j|x_{i})+\beta P^{*}_{ls}(j|x_{i})\log\big(K\cdot P^{*}_{ls}(j|x_{i})\big)\Big],\tag{14}$$

P∗ls(j|xi) is given in Theorem 1, and Pˆ∗ls(j|xi) is defined in Eq. 6.
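A minimal PyTorch sketch of Theorem 1 and the resulting loss in Eq. 14 is given below. It assumes fixed scalars α and β (the adaptive choice of α is discussed in Section 3.3), detaches the smoothing distribution so that it acts as a fixed target for the current step, and uses illustrative names throughout; it is not the authors' released implementation.

```python
import math
import torch
import torch.nn.functional as F

def labo_loss(logits, target, alpha, beta):
    """Loss for one LABO step (Eqs. 12-14), for logits of shape (B, K)
    and integer class labels target of shape (B,)."""
    K = logits.size(-1)
    log_p = F.log_softmax(logits, dim=-1)

    # Inner loop, Theorem 1 / Eq. 12: P*_ls is proportional to p_theta^(alpha/beta).
    # Detached: the smoothing distribution is treated as a fixed target for this step.
    p_star = F.softmax((alpha / beta) * log_p.detach(), dim=-1)

    # Eq. 6: mix the one-hot label with the learned smoothing distribution.
    onehot = F.one_hot(target, K).float()
    p_hat = (1.0 - alpha) * onehot + alpha * p_star

    # Eq. 14: cross-entropy with the smoothed label plus beta * KL(P*_ls || U).
    ce = -(p_hat * log_p).sum(dim=-1)
    kl_to_uniform = (p_star * (p_star.clamp_min(1e-12).log() + math.log(K))).sum(dim=-1)
    return (ce + beta * kl_to_uniform).mean()
```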
Note that the solution P∗ ls is deterministic and interpretable. Moreover, the two remarks below demonstrate the relation of our LABO with LS and KD methods. Remark 3. When β *is extremely large,* P∗ ls *will be* close to the Uniform distribution, and our objective function will be equivalent to optimizing the CE loss with a Uniform LS regularizer. Remark 4. *There is an intrinsic connection between the* P∗ ls *distribution and generating softmax* outputs with a temperature factor. Specifically, when β = α · τ *, we could have* $$P_{l s}^{*}(j|x_{i})=\frac{\exp(\frac{z_{i,j}}{\tau})}{\sum_{j^{\prime}=1}^{K}\exp(\frac{z_{i,j^{\prime}}}{\tau})}.\qquad(15)$$ The smoothing distribution in this case becomes the temperature smoothed output of the model, which is similar as the smoothed targets used in KD methods1. 1The derivation can be found in Appendix B. To summarize, our method can be expressed as an alternating two-stage process. We generate optimal smoothed labels P∗ ls using Theorem 1 in the first stage. Then, in the second phase, we fix the P∗ ls to compute loss Rθ(P∗ ls) and update the model parameters. ## 3.3 Implementation Details Two-stage training. Our solution provides the closed-form answer to the inner loop optimization (1st-stage) and for the outer loop (2nd-stage) the model f(θ) is updated by gradient-descent. LABO conducts a one-step update for the 2nd stage, namely, for each training step, we compute the optimal smoothing P∗ ls and update the model, which eliminates the need for additional memory or storage for the parameters or outputs of a prior, trained model. The training process is shown in Algorithm 1. $$(13)$$ Adaptive α. The value of α determines the probability mass to smooth. To get rid of hyperparameter searching, we provide an instancespecific α as a function of the entropy of the output. $$\alpha={\frac{\mathbb{H}(U)-\rho\mathbb{H}(P_{\theta})}{\mathbb{H}(U)}},\qquad\qquad(16)$$ where ρ ∈ [0.5, 1], in our experiments, we use ρ = 0.5. In the experiments, we fix the ratio of βα as a hyper-parameter, so the value of β will change accordingly. Hypergradient In the outer loop, the derivative of loss R(P∗ ls(θ), θ) w.r.t. θ consists of two components: $$\nabla_{P^{*}_{ls}}{\cal R}\nabla_{\theta}P^{*}_{ls}+\nabla_{\theta}{\cal R}\tag{17}$$ because $P^{*}_{ls}$ is the global solution of objective R in the inner loop, ∇P∗ lsR equals zero. Therefore, the hypergradient equals zero, we neglect this component in computation for efficiency. ## 4 Experiments We evaluate the performance of our proposed LABO method on both Machine Translation and Computer Vision. 
For machine translation, we evaluate on the IWSLT'14 (Cettolo et al., 2014), IWSLT'17 (Cettolo et al., 2017) and WMT'14 (Bojar et al., 2014) datasets using transformer-base models, and for image classification on CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009) and ImageNet2012 (Deng et al., 2009) using ResNet-based models of various sizes (parameters). All experiments were performed on one or more NVIDIA Tesla V100 GPUs.

Algorithm 1 LABO: Two-stage training
Input: Training set Dtrain, batch size n, number of steps T, learning rate η, Pˆls warm-up steps Tw;
1: for i ← 1 to T do
2: Sample a mini-batch S = {(xi, yi)}ni=1 from Dtrain;
3: if i < Tw **then**
4: Compute Pˆls with the Uniform distribution for the mini-batch data;
5: **else**
6: Compute Pˆls according to the Pˆ∗ls solution for the mini-batch data;
7: **end if**
8: Update θt+1 = θt − η∇θRθ(Pˆ∗ls, S);
9: **end for**

## 4.1 Experiments On Machine Translation

We evaluate our method on six machine translation benchmarks: IWSLT'14 German to English (DE-EN), English to German (EN-DE), English to French (EN-FR), French to English (FR-EN), and WMT'14 English to German (EN-DE) and German to English (DE-EN). We use the 6-layer encoder-decoder transformer as our backbone model for all experiments. We follow the hyper-parameter settings for the architecture and training reported in (Gehring et al., 2017; Vaswani et al., 2017). Specifically, we train the model with a maximum of 4,096 tokens per mini-batch for 150 or 50 epochs on the IWSLT and WMT datasets, respectively. For optimization, we apply the Adam optimizer with β1 = 0.9, β2 = 0.98 and weight decay 1e-4. For LABO, we explore {1.15, 1.25} for the only hyper-parameter τ = β/α. We report the BLEU-4 (Papineni et al., 2002) metric to compare the different models2.

Baselines. We compare our method with the following baselines that address the over-confidence problem. LS uses the combination of a one-hot vector and a uniform distribution to construct the target distribution. CP (Pereyra et al., 2017) punishes over-confidence by regularizing the entropy of model predictions. FL (Lin et al., 2017) utilizes the focal loss to assign smaller weights to well-learned tokens in every training iteration. AFL (Raunak et al., 2020) is a generalized focal loss that establishes a trade-off for penalizing low-confidence predictions.

Table 1: Test BLEU scores of different regularization methods on the IWSLT'14 and WMT'14 translation tasks.

| Method | IWSLT'14 (DE-EN) | IWSLT'14 (EN-DE) | IWSLT'14 (EN-FR) | IWSLT'14 (FR-EN) | WMT'14 (EN-DE) | WMT'14 (DE-EN) |
|----------------|------------------|------------------|------------------|------------------|----------------|----------------|
| Transformer | 33.9± 0.09 | 27.8± 0.11 | 40.0± 0.15 | 39.0± 0.14 | 27.1± 0.03 | 29.8± 0.10 |
| w/ LS | 34.5± 0.14 | 28.3± 0.16 | 40.5± 0.16 | 39.8± 0.05 | 27.7± 0.09 | 31.9± 0.09 |
| w/ CP | 34.2± 0.15 | 27.9± 0.07 | 40.4± 0.20 | 39.2± 0.13 | 27.4± 0.13 | 30.4± 0.08 |
| w/ FL | 33.0± 0.13 | 26.8± 0.16 | 39.2± 0.16 | 38.3± 0.05 | 26.6± 0.09 | 28.9± 0.09 |
| w/ AFL | 34.2± 0.14 | 27.9± 0.13 | 40.5± 0.09 | 39.5± 0.16 | 27.5± 0.10 | 30.3± 0.08 |
| w/ LABO (ours) | **35.2**± 0.07 | **28.8**± 0.15 | **40.9**± 0.08 | **40.3**± 0.05 | **28.3**± 0.05 | **32.3**± 0.06 |

Bilingual Translation. Tab. 1 shows the results on the IWSLT'14 DE-EN, EN-DE, EN-FR, FR-EN and WMT'14 EN-DE, DE-EN translation tasks. The backbone transformers achieve BLEU scores close to or better than the numbers reported in (Vaswani et al., 2017). All confidence-penalizing techniques except FL improve the performance of the transformer models. The drop in performance of FL is consistent with the long-tail phenomena observed in neural machine translation systems (Raunak et al., 2020). Our method consistently outperforms the baseline Transformer (w/ LS), which demonstrates its effectiveness.
This is across different language pairs and dataset sizes. Multilingual Translation. We also evaluate our LABO method on IWLST'17 ({DE, FR}-EN) dataset by using multilingual transformers. We learn a joint BPE code for all three languages and use sacrebleu for evaluating the test set. Tab.2 shows LABO achieves consistent improvement over the original label smoothing on the multilingual translation dataset. 2Our experiments were conducted with Fairseq toolkit (github.com/pytorch/fairseq). Table 2: BLEU scores for method LS or LABO on multilingual Translation Tasks. | IWSLT'17 | BLEU | | |----------------|--------|-------| | Model | DE-EN | FR-EN | | Transformer | 26.9 | 35.4 | | w/ LS | 28.0 | 36.8 | | w/ LABO (ours) | 28.4 | 37.2 | ## 4.2 Experiments On Image Classification Setup for CIFAR experiments. We evaluated our method on different model architectures including MobileNetV2 (Sandler et al., 2018), and ResNet18 (He et al., 2016). We follow standard data augmentation schemes: random crop and horizontal flip to augment the original training images. We sampled 10% images from the training split as a validation set. The models are trained for 200 epochs with a batch size of 128. For optimization, we used stochastic gradient descent with a momentum of 0.9, and weight decay set to 5e-4. The learning rate starts at 0.1 and is then divided by 5 at epochs 60, 120 and 160. All experiments are repeated 5 times with random initialization. For the KD experiments, we used ResNeXt29 as the teacher. All teacher models are trained from scratch and picked based on their best accuracy. To explore the best hyper-parameters, we conduct a grid search over parameter pools. We explore {0.1, 0.2, 0.5, 0.9} for α and {5, 10, 20, 40} for KD temperature. Results. Next, we conduct a series of experiments on two models to compare our approach with other methods without requiring any pre-trained models on two CIFAR datasets. Note that KD is reported as a reference. All experiments are repeated 5 times with random initialization. The baseline methods include CS-KD which doesn't require training a teacher and constrains the smoothed output between different samples of the same class to be close (Yun et al., 2020), TF-reg which regularizes the training with a manually designed teacher (Yuan et al., 2020) and Beta-LS which leverages a similar capacity model to learn an instance-specific prior over the smoothing distribution (Zhang and Sabuncu, 2020). KR-LS utilizes a class-wise target table which captures the relation of classes (Ding et al., 2021). We follow their best hyper-parameter settings. The time for all baselines is measured based on the original implementations. 345 Tab. 3 shows the test accuracy and training time for 200 epochs of different methods on two models. It can be seen that our method consistently improves the classification performance on both lightweight and complex models, which indicates its general applicability. Besides, it shows the training time of our method is close to Base or LS methods, which show its stable efficiency over other baselines such as Beta-LS, which still requires a separate model to output a learned prior over the smoothing distribution. Our method achieves better performance than other strong baseline methods consistently. We have to mention that our computation for smoothing distribution is deterministic and excluded from the computation graph for the gradient calculation. 
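The CIFAR optimization schedule described above maps directly onto standard PyTorch components; a minimal sketch, assuming the model has already been constructed (helper name is illustrative):

```python
import torch

def build_cifar_optimizer(model):
    """CIFAR schedule from the setup above: SGD with momentum 0.9, weight decay 5e-4,
    initial learning rate 0.1, divided by 5 (gamma=0.2) at epochs 60, 120 and 160."""
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                                momentum=0.9, weight_decay=5e-4)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[60, 120, 160], gamma=0.2)
    return optimizer, scheduler
```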
Therefore, the implementation of our method requires less time and space during the training as we don't need to train a separate model. ## 5 Discussion One of important problems of neural language models is the large discrepancy of predicted probabilities between tokens with low and high frequency, In other words, the Long-Tailed Phenomena in the neural language models (Zhao and Marcus, 2012; Demeter et al., 2020). In this section, we study the impact of our method on the prediction of tokens with different frequencies. We first computed the averaged frequency of tokens in every source sentence x = [xi]| N i=1 for the validation and test sets of IWSLT 2014 (De-En). This Frequency Score (FS) is defined in (Raunak et al., 2020): $$F(x)={\frac{\sum_{i=1}^{N}f(x_{i})}{N}},\qquad\qquad{\mathrm{(18)}}$$ where f(xi) is the frequency of the token xiin the training corpus. Next, we divide each dataset into three parts of 2400 sentences in order of decreasing FS. Tab. 4 shows the results of LS and LABO on different splits. All models perform much better on split-most than split-medium and least. Our method first demonstrates consistent improvements on three splits for both validation and test datasets. Besides, our LABO provides at least the same magnitude of improvements on least and medium splits as the split-most. 3github.com/alinlab/cs-kd 4github.com/yuanli2333/Teacher-free-KnowledgeDistillation 5github.com/ZhiluZhang123/neurips_2020_distillation | CIFAR100 (Acc. & Time) | CIFAR 10 (Acc. & Time) | | | | | | | | |--------------------------|--------------------------|-------------|---------------|------|---------------|------|---------------|------| | MobileNetv2 | ResNet18 | MobileNetv2 | ResNet18 | | | | | | | Base | 67.98 | 1.0× | 76.92 | 1.0× | 90.55 | 1.0× | 94.82 | 1.0× | | KD | 70.99 (↑3.01) | 3.7× | 77.78 (↑0.86) | 3.8× | 91.52 (↑0.97) | 3.7× | 95.28 (↑0.46) | 3.9× | | LS | 68.69 (↑0.71) | 1.0× | 77.67 (↑0.75) | 1.0× | 90.82 (↑0.27) | 1.0× | 95.02 (↑0.20) | 1.0× | | CS-KD | 70.36 (↑2.38) | 1.1× | 77.95 (↑1.03) | 1.3× | 91.17 (↑0.62) | 1.1× | 94.90 (↑0.08) | 1.3× | | TF-reg | 70.08 (↑2.10) | 1.0× | 77.91 (↑0.99) | 1.2× | 90.97 (↑0.42) | 1.1× | 95.05 (↑0.23) | 1.2× | | Beta-LS | 70.45 (↑2.47) | 1.5× | 77.83 (↑0.91) | 1.4× | 90.89 (↑0.34) | 1.5× | 94.87 (↑0.05) | 1.6× | | KR-LS | 70.12 (↑2.14) | 1.0× | 77.82 (↑0.90) | 1.0× | 90.67 (↑0.12) | 1.0× | 94.76 (↓0.06) | 1.0× | | LABO | 71.05 (↑3.07) | 1.0× | 78.10 (↑1.18) | 1.0× | 91.53 (↑0.98) | 1.0× | 95.21 (↑0.39) | 1.0× | ## 5.1 Analysis On Predictions With Low-Frequency Tokens Next, We investigate the probability distribution of the selected prediction in each step of beam search, namely, the the probability of top hypothesis finally chosen during decoding. Figure 2 shows the histogram for LS and LABO on three splits. The probabilities of our LABO method concentrated on around 0.4 while the corresponding probabilities of LS concentrated on 0.9. Our method reduces the discrepancy between predicted probabilities of different tokens, which facilitates the inference process during beam search by avoiding creating extremely large probabilities. | different averaged token frequencies. 
IWSLT'14 (DE-EN) Validation Split Most Medium | Least | | | |---------------------------------------------------------------------------------------|---------|--------|-------| | LS | 40.3 | 35.2 | 33.1 | | LABO | 40.7 | 35.8 | 33.8 | | ∆ | +0.4 | +0.6 | +0.7 | | IWSLT'14 (DE-EN) | Test | | | | Split | Most | Medium | Least | | LS | 38.4 | 33.5 | 32.1 | | LABO | 39.0 | 34.3 | 32.7 | | ∆ | +0.6 | +0.8 | +0.6 | ## 6 Related Work There is a great body of works inspired by the KD technique. Some of them focus on boosting performance with explicitly regularized smoothed targets. Zhang and Sabuncu (2020) interpret studentteacher training as an amortized maximum aposteriori estimation and derive an equivalence between self-distillation and instance-dependent label ![7_image_0.png](7_image_0.png) smoothing. This analysis helped them to devise a regularization scheme, Beta Smoothing. However, they still use an extra model to infer a prior distribution on their smoothing technique during the training. Yun et al. (2020) introduce an additional regularization to penalize the predictive output between different samples of the same class. There are several works discussing the empirical impact of KD and giving different practical suggestions. Kobyzev et al. (2023) conducted extensive experiments to explore the effect of label regularization to the generalization ability of compact pre-trained language models. Other works propose to learn class-wise label smoothing or progressive refining of smoothed labels. Ding et al. (2021) propose to capture the relation of classes by introducing decoupled labels table which increases space complexity by O(K × K). The concurrent work (Kim et al., 2021) utilizes the model trained at i-th epoch as the teacher to regularize the training at (i + 1)-th epoch along with annealing techniques. However, contrary to our work, it either requires a separate model as teacher, or stores the labels generated at the i-th epoch for training the next epoch, whereby increasing space complexity by O(N × K). Meister et al. (2020b) investigate the relationship between generalized entropy regularization and label smoothing and provide empirical results on two text generation tasks. Chen et al. (2022) propose a masked Label Smoothing to remove the conflict by reallocating the smoothed probabilities based on the difference among languages. Lu et al. (2022) propose to learn confidence for NMT by jointly training a ConNet model to estimate the confidence of the prediction. We develop LABO motivated by a principled starting point: to generalize the smoothing distribution to a general form with neither time nor space complexity increase during the training and inference. ## 7 Conclusion Our aim in this work is to fill the accuracy gap between Label Smoothing and Knowledge Distillation techniques while maintaining the training efficiency of LS regularization. We proposed learning an instance-dependent label smoothing regularization simultaneously with training our model on the target. We began by generalizing the classical LS method and introduced our objective function by substituting the uniform distribution with a general, instance-dependent, discrete distribution. Within this formulation, we explained the relationship between the LS and KD. Then, using a bi-level optimization approach, we obtained an approximation for the optimal smoothing function. 
We conducted extensive experiments to compare our model with conventional LS, KD and various state-of-the-art label regularization methods on popular MT and V benchmarks and showed the effectiveness and efficiency of our technique. Finally, we analyze the impact of our methods on the prediction of neural machine translation systems under different averaged token frequency settings and show our methods can greatly reduce the discrepancy between predicted probabilities of different tokens. In practice, apart from general regularization techniques like dropout and weight decay, many advanced techniques are designed for specific tasks like FixNorm (Nguyen and Chiang, 2018), CutMix (Yun et al., 2019) and SwitchOut (Wang et al., 2018). We leave the future work to combine these methods with our technique. Moreover, we plan to explore the practical applications of our method for large-scale model training. One specific application is to improve the pre-training of large language models and vision transformers. ## Acknowledgments We thank Mindspore,6 which is a new deep learning computing framework, for partial support of this work. ## 8 Limitations In the current work, we adapt one-step gradient descent training for the outer loop based on our bi-level optimization framework. Since this outer loop optimization doesn't have a closed-form solution, determining how many steps to perform for the outer loop for better outer optimization is still important to explore. ## References Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. *CoRR*, abs/1607.06450. Ondˇrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve SaintAmand, Radu Soricut, Lucia Specia, and Aleš Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12–58, Baltimore, Maryland, USA. Association for Computational Linguistics. Cristian Bucilua, Rich Caruana, and Alexandru ˇ Niculescu-Mizil. 2006. Model compression. In *Proceedings of the 12th ACM SIGKDD international* conference on Knowledge discovery and data mining, pages 535–541. Rich Caruana. 1997. Multitask learning. *Machine* learning, 28(1):41–75. Mauro Cettolo, Marcello Federico, Luisa Bentivogli, Jan Niehues, Sebastian Stüker, Katsuhito Sudoh, Koichiro Yoshino, and Christian Federmann. 2017. Overview of the IWSLT 2017 evaluation campaign. 6www.mindspore.cn/ In *Proceedings of the 14th International Conference* on Spoken Language Translation, pages 2–14, Tokyo, Japan. International Workshop on Spoken Language Translation. Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th iwslt evaluation campaign. In Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign. Liang Chen, Runxin Xu, and Baobao Chang. 2022. Focus on the target's vocabulary: Masked label smoothing for machine translation. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 665– 671. Association for Computational Linguistics. Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, and Furu Wei. 2021. 
Wavlm: Large-scale self-supervised pre-training for full stack speech processing. *CoRR*, abs/2110.13900. Jan Chorowski and Navdeep Jaitly. 2017. Towards better decoding and language model integration in sequence to sequence models. In *Interspeech 2017,* 18th Annual Conference of the International Speech Communication Association, Stockholm, Sweden, August 20-24, 2017, pages 523–527. ISCA. Benoît Colson, Patrice Marcotte, and Gilles Savard. 2007. An overview of bilevel optimization. Annals of Operations Research, 153(1):235–256. David Demeter, Gregory Kimmel, and Doug Downey. 2020. Stolen probability: A structural weakness of neural language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 2191–2197. Association for Computational Linguistics. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pages 248–255. IEEE Computer Society. Qianggang Ding, Sifan Wu, Tao Dai, Hao Sun, Jiadong Guo, Zhang-Hua Fu, and Shutao Xia. 2021. Knowledge refinery: Learning from decoupled label. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 7228–7235. AAAI Press. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In *9th International Conference* on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of *Proceedings of Machine Learning Research*, pages 1243–1252. PMLR. Ian J. Goodfellow, Yoshua Bengio, and Aaron C. Courville. 2016. *Deep Learning*. Adaptive computation and machine learning. MIT Press. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In *2016 IEEE Conference on Computer Vision* and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. IEEE Computer Society. Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, and Mu Li. 2019a. Bag of tricks for image classification with convolutional neural networks. In *IEEE Conference on Computer Vision and* Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 558–567. Computer Vision Foundation / IEEE. Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, and Mu Li. 2019b. Bag of tricks for image classification with convolutional neural networks. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 558–567. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. *arXiv* preprint arXiv:1503.02531. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. 
The curious case of neural text degeneration. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In *Proceedings* of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of *JMLR Workshop and Conference Proceedings*, pages 448–456. JMLR.org. R. G. Jeroslow. 1985. The polynomial hierarchy and a simple model for competitive analysis. *Mathematical* Programming, 32(2):146–164. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. Tinybert: Distilling bert for natural language understanding. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing: Findings, pages 4163–4174. Kyungyul Kim, ByeongMoon Ji, Doyoung Yoon, and Sangheum Hwang. 2021. Self-knowledge distillation with progressive refinement of targets. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 6567–6576. Ivan Kobyzev, Aref Jafari, Mehdi Rezagholizadeh, Tianda Li, Alan Do-omri, Peng Lu, Pascal Poupart, and Ali Ghodsi. 2023. Do we need label regularization to fine-tune pre-trained language models? In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 166–177, Dubrovnik, Croatia. Association for Computational Linguistics. Alex Krizhevsky, Geoffrey Hinton, et al. 2009. Learning multiple layers of features from tiny images. Anders Krogh and John A. Hertz. 1991. A simple weight decay can improve generalization. In *Advances in Neural Information Processing Systems 4,* [NIPS Conference, Denver, Colorado, USA, December 2-5, 1991], pages 950–957. Morgan Kaufmann. Tsung-Yi Lin, Priya Goyal, Ross B. Girshick, Kaiming He, and Piotr Dollár. 2017. Focal loss for dense object detection. In *IEEE International Conference* on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 2999–3007. IEEE Computer Society. Peng Lu, Ting Bai, and Philippe Langlais. 2019. SC-LSTM: learning task-specific representations in multi-task learning for sequence labeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACLHLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2396–2406. Association for Computational Linguistics. Peng Lu, Abbas Ghaddar, Ahmad Rashid, Mehdi Rezagholizadeh, Ali Ghodsi, and Philippe Langlais. 2021. RW-KD: sample-wise loss terms re-weighting for knowledge distillation. In Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 1620 November, 2021, pages 3145–3152. Association for Computational Linguistics. Yu Lu, Jiali Zeng, Jiajun Zhang, Shuangzhi Wu, and Mu Li. 2022. Learning confidence for transformerbased neural machine translation. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 2353–2364. Association for Computational Linguistics. Clara Meister, Elizabeth Salesky, and Ryan Cotterell. 2020a. Generalized entropy regularization or: There's nothing special about label smoothing. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 6870–6886. Association for Computational Linguistics. Clara Meister, Elizabeth Salesky, and Ryan Cotterell. 2020b. Generalized entropy regularization or: There's nothing special about label smoothing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 6870–6886. Association for Computational Linguistics. Rafael Müller, Simon Kornblith, and Geoffrey E. Hinton. 2019. When does label smoothing help? In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 4696–4705. Toan Nguyen and David Chiang. 2018. Improving lexical choice in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 334–343, New Orleans, Louisiana. Association for Computational Linguistics. Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. *IEEE Trans. Knowl. Data Eng.*, 22(10):1345–1359. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318. Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, and Geoffrey E. Hinton. 2017. Regularizing neural networks by penalizing confident output distributions. In *5th International Conference on Learning* Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. OpenReview.net. Vikas Raunak, Siddharth Dalmia, Vivek Gupta, and Florian Metze. 2020. On long-tailed phenomena in neural machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of *Findings of ACL*, pages 3088–3095. Association for Computational Linguistics. Mark Sandler, Andrew G. Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. 2018. Mobilenetv2: Inverted residuals and linear bottlenecks. In *2018 IEEE Conference on Computer Vision and* Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 4510–4520. Computer Vision Foundation / IEEE Computer Society. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. *arXiv* preprint arXiv:1910.01108. Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. 2019. wav2vec: Unsupervised pre-training for speech recognition. In *Interspeech* 2019, 20th Annual Conference of the International Speech Communication Association, Graz, Austria, 15-19 September 2019, pages 3465–3469. ISCA. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics. Zhiqiang Shen, Zechun Liu, Dejia Xu, Zitian Chen, Kwang-Ting Cheng, and Marios Savvides. 2021. Is label smoothing truly incompatible with knowledge distillation: An empirical study. 
In *9th International Conference on Learning Representations,* ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. *J. Mach. Learn. Res.*, 15(1):1929– 1958. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Xinyi Wang, Hieu Pham, Zihang Dai, and Graham Neubig. 2018. Switchout: an efficient data augmentation algorithm for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 856–861. Association for Computational Linguistics. Kunran Xu, Lai Rui, Yishi Li, and Lin Gu. 2020. Feature normalized knowledge distillation for image classification. In *Computer Vision - ECCV 2020*, pages 664–680. Springer International Publishing. Li Yuan, Francis E. H. Tay, Guilin Li, Tao Wang, and Jiashi Feng. 2020. Revisiting knowledge distillation via label smoothing regularization. In *2020* IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 3902–3910. Computer Vision Foundation / IEEE. Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Seong Joon Oh, Youngjoon Yoo, and Junsuk Choe. 2019. Cutmix: Regularization strategy to train strong classifiers with localizable features. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 6022–6031. IEEE. Sukmin Yun, Jongjin Park, Kimin Lee, and Jinwoo Shin. 2020. Regularizing class-wise predictions via selfknowledge distillation. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 13873–13882. IEEE. Zhilu Zhang and Mert R. Sabuncu. 2020. Selfdistillation as instance-specific label smoothing. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 612, 2020, virtual. Qiuye Zhao and Mitch Marcus. 2012. Long-tail distributions and unsupervised learning of morphology. In *COLING 2012, 24th International Conference* on Computational Linguistics, Proceedings of the Conference: Technical Papers, 8-15 December 2012, Mumbai, India, pages 3121–3136. Indian Institute of Technology Bombay. ## A Derivation Of The Proof For Theorem 1 Proof. First, observe that the function Rθ defined in Eq. 8 is a convex combination of N nonnegative functions Ri = EPˆls [− log pθ(j|xi)] + βKL(Pls(·|xi)∥U(·)), for i = 1*, . . . , N*. We will show that each of Riis a convex function of the components of simplex Pls(·|xi) by computing the Hessian matrix of Ri with respect to Pls(·|xi): H(Ri(Pls(1), · · · , Pls(K)) = ∂ 2Ri ∂Pls(1)2 ,∂ 2Ri ∂Pls(1)∂Pls(2) , · · · ,∂ 2Ri ∂Pls(1)∂Pls(K) ∂ 2Ri ∂Pls(2)∂Pls(1) ,∂ 2Ri ∂Pls(2)2 , · · · ,∂ 2Ri ∂Pls(2)∂Pls(K) ...,...,... ,... 
$$\frac{\partial^{2} R_{i}}{\partial P_{ls}(K)\,\partial P_{ls}(1)},\;\frac{\partial^{2} R_{i}}{\partial P_{ls}(K)\,\partial P_{ls}(2)},\;\cdots,\;\frac{\partial^{2} R_{i}}{\partial P_{ls}(K)^{2}}$$

$$=\operatorname{diag}\!\Big(\frac{\beta}{P_{ls}(1)},\,\frac{\beta}{P_{ls}(2)},\,\cdots,\,\frac{\beta}{P_{ls}(K)}\Big)\tag{19}$$

When β is greater than zero, the Hessian is positive definite. Therefore, each $R_i$ is a convex function of the components of the simplex $P_{ls}(\cdot|x_i)$. As a result, $R_\theta$ is a convex function of the collection of components of every simplex $P_{ls}(\cdot|x_i)$. For simplicity, we derive the global optimum of each $R_i$ with a Lagrangian multiplier:

$$L_{i}(P_{ls},\lambda_{L})=\sum_{j'=1}^{K}\left[-\hat{P}_{ls}(j')\log p(j')+\beta\, P_{ls}(j')\log\frac{P_{ls}(j')}{1/K}\right]+\lambda_{L}\Big(\sum_{j'=1}^{K}P_{ls}(j')-1\Big),\tag{20}$$

where we omit the dependency on $x_i$ to simplify the notation. We set the corresponding gradients equal to 0 to obtain the global optimum for $j = 1, \ldots, K$:

$$\frac{\partial L_{i}}{\partial P_{ls}(j)}=-\alpha\log p(j)+\beta\log P_{ls}(j)+\beta+\beta\log K+\lambda_{L}=0\tag{21}$$

$$P_{ls}^{*}(j)=\exp\!\Big(\frac{\alpha}{\beta}\log p(j)\Big)\cdot\exp\!\Big(\frac{-\beta-\beta\log K-\lambda_{L}}{\beta}\Big)=\exp\!\Big(\frac{\alpha}{\beta}\log p(j)\Big)\cdot C_{ls}\tag{22}$$

Since $\sum_{j'=1}^K P_{ls}(j')=1$, we have:

$$\sum_{j'=1}^{K}P_{ls}(j')=\sum_{j'=1}^{K}\exp\!\Big(\frac{\alpha}{\beta}\log p(j')\Big)\cdot\exp\!\Big(\frac{-\beta-\beta\log K-\lambda_{L}}{\beta}\Big)=1\tag{23}$$

$$C_{ls}=\exp\!\Big(\frac{-\beta-\beta\log K-\lambda_{L}}{\beta}\Big)=\frac{1}{\sum_{j'=1}^{K}\exp(\frac{\alpha}{\beta}\log p(j'))}\tag{24}$$

So the optimal $P_{ls}^{*}(j)$ is given by the formula:

$$P_{ls}^{*}(j)=\frac{\exp(\frac{\alpha}{\beta}\log p(j))}{\sum_{j'=1}^{K}\exp(\frac{\alpha}{\beta}\log p(j'))}=\frac{p(j)^{\alpha/\beta}}{\sum_{j'=1}^{K}p(j')^{\alpha/\beta}}. \qquad \square$$

## B Derivation From Optimal Smoothing To Softmax Output With Temperature

When $\tau = \frac{\beta}{\alpha}$, we have:

$$P_{ls}^{*}(c\,|\,x_i)=\frac{p_{\theta}(c\,|\,x_i)^{\frac{1}{\tau}}}{\sum_j p_{\theta}(j\,|\,x_i)^{\frac{1}{\tau}}}=\frac{\big(\frac{e^{z_{ic}}}{\sum_m e^{z_{im}}}\big)^{\frac{1}{\tau}}}{\sum_j\big(\frac{e^{z_{ij}}}{\sum_m e^{z_{im}}}\big)^{\frac{1}{\tau}}}\tag{25}$$

$$=\frac{\big(e^{z_{ic}}\big)^{\frac{1}{\tau}}}{\sum_j\big(e^{z_{ij}}\big)^{\frac{1}{\tau}}}=\frac{e^{\frac{z_{ic}}{\tau}}}{\sum_j e^{\frac{z_{ij}}{\tau}}}\tag{26}$$
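To make the derivation above concrete, the following is a minimal PyTorch-style sketch (an illustration of the idea, not the authors' released implementation) of training with the instance-dependent smoothing target. The smoothing distribution is the model's own temperature-scaled prediction with $\tau = \beta/\alpha$, detached from the computation graph (see Appendix C); `alpha` and `tau` are hyperparameters, and the convex mixture with the one-hot target follows the standard label-smoothing convention.

```python
import torch
import torch.nn.functional as F

def labo_loss(logits, targets, alpha=0.1, tau=2.0):
    """Cross-entropy against an instance-dependent smoothed target.

    The smoothing distribution is softmax(logits / tau), the closed-form
    optimum from Eqs. 25-26 with tau = beta / alpha.  It is detached so
    gradients only flow through the usual log-softmax term.
    """
    num_classes = logits.size(-1)
    one_hot = F.one_hot(targets, num_classes).float()
    with torch.no_grad():                      # deterministic, excluded from the graph
        p_ls = F.softmax(logits / tau, dim=-1)
    smoothed_target = (1.0 - alpha) * one_hot + alpha * p_ls
    log_probs = F.log_softmax(logits, dim=-1)
    return -(smoothed_target * log_probs).sum(dim=-1).mean()

# usage: loss = labo_loss(model(inputs), labels)
```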
## C Additional Results On ImageNet

Setup for ImageNet experiments. We evaluate our method on two model architectures, ResNet50 and ResNet152, with standard data augmentation (random resized cropping and random horizontal flipping). The models are trained for 90 epochs with a batch size of 256. We use SGD for optimization with a momentum of 0.9 and a weight decay of 1e-4. The learning rate starts at 0.1 and is then divided by 10 at epochs 30, 60, and 80.

Table 5: Comparison between different smoothed-label methods. Validation accuracy and training time are reported. The training time is measured on 4 NVIDIA V100 GPUs.

| ImageNet (Acc. & Time) | ResNet50 Acc.  | Time | ResNet152 Acc. | Time |
|------------------------|----------------|------|----------------|------|
| Base                   | 75.81          | 1.0× | 77.92          | 1.0× |
| LS                     | 76.17 (↑0.36)  | 1.0× | 78.33 (↑0.41)  | 1.0× |
| TF-reg                 | 76.21 (↑0.40)  | 1.1× | 78.12 (↑0.20)  | 1.1× |
| Beta-LS                | 76.13 (↑0.32)  | 1.5× | 78.56 (↑0.64)  | 1.6× |
| KR-LS                  | 76.32 (↑0.51)  | 1.3× | 78.48 (↑0.56)  | 1.3× |
| LABO                   | 76.55 (↑0.74)  | 1.1× | 78.62 (↑0.70)  | 1.1× |

Results. Table 5 shows the accuracy and the training time for one epoch of the different methods on the two models. First, our method consistently improves classification performance, which indicates its robustness on this large-scale dataset. Moreover, our method achieves better performance than the other smoothing functions with only a moderate training-time increase over Base. The computational overhead of our smoothing distribution comes from computing Eq. 1, which is deterministic and excluded from the computation graph for the gradient calculation; hence, our method is more efficient than the other recent advanced LS techniques.

## D Data Statistics

| Dataset          | Train     | Validation | Test    |
|------------------|-----------|------------|---------|
| IWSLT'14 (DE-EN) | 160,239   | 7,283      | 6,750   |
| IWSLT'14 (FR-EN) | 168,151   | 7,643      | 4,493   |
| WMT'14 (EN-DE)   | 3,900,502 | 39,414     | 3,003   |
| IWSLT'17 (DE-EN) | 209,522   | 7,887      | 5,670   |
| IWSLT'17 (FR-EN) | 236,652   | 8,277      | 7,275   |
| CIFAR10          | 45,000    | 5,000      | 10,000  |
| CIFAR100         | 45,000    | 5,000      | 10,000  |
| ImageNet         | 1,281,167 | 50,000     | 100,000 |

## E Experimental Details For MT

We conduct experiments using the same hyperparameters for fair comparison. Before training, we first apply BPE (Sennrich et al., 2016) to tokenize the corpus for each language pair. During training, we set the label smoothing parameter to 0.1. Following previous work, we use the Adam optimizer with betas of (0.9, 0.98); the learning rate is 7e-4 for WMT and 5e-4 for the remaining tasks. The initial learning rate during warm-up is 1e-7, with 1,000 warm-up steps. For the warm-up of our smoothing, we use 10,000 steps for WMT and 5,000 for the other tasks. The dropout rate is set to 0.3 and weight decay to 0.0001 for all experiments. We pick the checkpoint with the best performance on the validation set before running inference on the test set with beam size 5.

## ACL 2023 Responsible NLP Checklist

## A For Every Submission:

✓ A1. Did you describe the limitations of your work? Section 7

A2. Did you discuss any potential risks of your work? Not applicable. Left blank.

✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1

✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B **Did You Use Or Create Scientific Artifacts?**

Not applicable. Left blank.

B1. Did you cite the creators of artifacts you used? Not applicable. Left blank.

B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank.

B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank.

B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank.

B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank.

B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created?
Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? section 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? section 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? section 4 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
chen-etal-2023-frustratingly
Frustratingly Easy Label Projection for Cross-lingual Transfer
https://aclanthology.org/2023.findings-acl.357
Translating training data into many languages has emerged as a practical solution for improving cross-lingual transfer. For tasks that involve span-level annotations, such as information extraction or question answering, an additional label projection step is required to map annotated spans onto the translated texts. Recently, a few efforts have utilized a simple mark-then-translate method to jointly perform translation and projection by inserting special markers around the labeled spans in the original sentence. However, as far as we are aware, no empirical analysis has been conducted on how this approach compares to traditional annotation projection based on word alignment. In this paper, we present an extensive empirical study across 57 languages and three tasks (QA, NER, and Event Extraction) to evaluate the effectiveness and limitations of both methods, filling an important gap in the literature. Experimental results show that our optimized version of mark-then-translate, which we call EasyProject, is easily applied to many languages and works surprisingly well, outperforming the more complex word alignment-based methods. We analyze several key factors that affect the end-task performance, and show EasyProject works well because it can accurately preserve label span boundaries after translation. We will publicly release all our code and data.
# Frustratingly Easy Label Projection For Cross-Lingual Transfer Yang Chen, Chao Jiang, Alan Ritter, Wei Xu Georgia Institute of Technology {yang.chen, chao.jiang, alan.ritter, wei.xu}@cc.gatech.edu ## Abstract Translating training data into many languages has emerged as a practical solution for improving cross-lingual transfer. For tasks that involve span-level annotations, such as information extraction or question answering, an additional label projection step is required to map annotated spans onto the translated texts. Recently, a few efforts have utilized a simple mark-then-translate method to jointly perform translation and projection by inserting special markers around the labeled spans in the original sentence (Lewis et al., 2020; Hu et al., 2020). However, as far as we are aware, no empirical analysis has been conducted on how this approach compares to traditional annotation projection based on word alignment. In this paper, we present an extensive empirical study across 57 languages and three tasks (QA, NER, and Event Extraction) to evaluate the effectiveness and limitations of both methods, filling an important gap in the literature. Experimental results show that our optimized version of mark-then-translate, which we call EasyProject, is easily applied to many languages and works surprisingly well, outperforming the more complex word alignment-based methods. We analyze several key factors that affect the end-task performance, and show EasyProject works well because it can accurately preserve label span boundaries after translation. 1 ## 1 Introduction Zero-shot cross-lingual transfer, where models trained on a source language (e.g., English) are directly applied to other target languages, has the potential to extend NLP systems to many languages (Nooralahzadeh et al., 2020; Keung et al., 2020; Chen and Ritter, 2021; Niu et al., 2022; Huang et al., 2022a). Yet, its performance still lags behind models that are directly fine-tuned on labeled data (if available) from the target language. Recent work has shown that combining training data in a source language together with its automatic translation to the target language leads to consistent performance improvements (Xue et al., 2021; Hu et al., 2020). However, for NLP tasks that involve span-level annotations, an additional label projection step is needed to map the span annotations onto the translated texts (see Figure 1). Traditionally, this annotation projection step is performed based on word alignment after machine translation (Akbik et al., 2015a; Aminian et al., 2019). To avoid the use of complex word alignment models, several recent efforts (Lewis et al., 2020; Hu et al., 2020) directly translated sentences with span annotations wrapped between special markers (e.g., <a> and </a>). However, due to limited analysis presented in prior work, it is unclear (1) how well this approach works across different language families, (2) how robust MT systems are in handling special markers, as inserting markers inevitably degrades the translation quality, and (3) how well marker-based projection works in comparison to traditional alignment-based methods. 
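To make the mark-then-translate idea concrete, here is a minimal sketch (illustrative only: `mt_system.translate` stands in for whatever MT system is used, and the numbered XML-style tags are just one possible marker format).

```python
import re

def insert_markers(text, spans):
    """Wrap each (start, end, label) span with an indexed XML-style marker."""
    out, prev = [], 0
    for i, (start, end, _label) in enumerate(sorted(spans)):
        out += [text[prev:start], f"<{i}>", text[start:end], f"</{i}>"]
        prev = end
    out.append(text[prev:])
    return "".join(out)

def extract_spans(translation, labels):
    """Recover projected spans from the translation, if the markers survived."""
    projected = [(labels[int(m.group(1))], m.group(2))
                 for m in re.finditer(r"<(\d+)>(.*?)</\1>", translation)]
    clean_text = re.sub(r"</?\d+>", "", translation)
    return clean_text, projected

spans = [(0, 9, "PER"), (22, 29, "LOC")]
marked = insert_markers("Churchill was born in England.", spans)
# marked == "<0>Churchill</0> was born in <1>England</1>."
# translated = mt_system.translate(marked)        # placeholder MT call
# text, ents = extract_spans(translated, [lab for *_, lab in sorted(spans)])
```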
In this paper, we present the first systematic study of the mark-then-translate annotation projection technique, which includes careful evaluation of the choice of markers, projection accuracy, impact on translation quality, robustness to different MT systems, as well as a comparison to traditional alignment-based method across 57 languages (including 18 from Africa) on 5 datasets and 3 NLP tasks. We also propose an improved variant of marker-based projection, EASYPRO-JECT, that consistently outperforms the alignmentbased approach, while being incredibly easy to use to project a variety of annotations (QA, entities, ![1_image_0.png](1_image_0.png) relations, events) across many languages. The key is to use language-agnostic square bracket markers, combined with an efficient fine-tuning strategy to encourage the multilingual MT system to better preserve the special markers during translation. Our main findings include (1) the marker-based method is surprisingly robust across different translation systems and languages, but the choice of markers matters (§3.2); (2) EasyProject can project annotated spans more accurately and is better at preserving span boundaries than the alignmentbased methods, which is key to its success (§5.1); (3) fine-tuning an MT system for only 200 steps is sufficient to improve its robustness in handling special markers during translation (§4); (4) the margin of improved cross-lingual transfer is related to the language/script family and amount of pre-training data included in the multilingual model (§5.2). We hope our work will inspire more research on robust models that better handle text markup for the purpose of generating span annotations. ## 2 Background And Related Work Alignment-based Projection. Projecting annotations via word alignment typically consists of the following steps: machine translate the available training data into the target language; run word alignment tools on the original and translated sentences; and finally, apply heuristics to map the span-level annotations from the original to translated texts. Statistical word alignment tools such as GIZA++ (Och and Ney, 2003) and fastalign (Dyer et al., 2013) have been widely adopted for projecting part-of-speech tags (Yarowsky et al., 2001; Eskander et al., 2020), semantic roles (Akbik et al., 2015b; Aminian et al., 2017; Daza and Frank, 2020; Fei et al., 2020), slot filling (Xu et al., 2020), semantic parser (Moradshahi et al., 2020; Nicosia et al., 2021), and NER labels (Ni et al., 2017; Stengel-Eskin et al., 2019). Recent progress on supervised neural aligners (Jalili Sabet et al., 2020; Nagata et al., 2020; Dou and Neubig, 2021; Lan et al., 2021) and multilingual contextualized embeddings (Devlin, 2018; Conneau et al., 2020) has further improved alignment accuracy. However, this pipeline-based method suffers from error propagation, translation shift (Akbik et al., 2015b), and non-contiguous alignments (Zenkel et al., 2020). Our analysis in §5.1 shows that the alignmentbased methods are more error-prone when projecting span-level annotations, compared to the marker-based approaches. Marker-based Projection. A few efforts have used mark-then-translate label projection method to translate question answering datasets into other languages (Lee et al., 2018; Lewis et al., 2020; Hu et al., 2020; Bornea et al., 2021). However, the focus of these papers was not the label projection task itself and there was no in-depth analysis on the effectiveness of the approach. For instance, Lewis et al. 
(2020) used quotation marks to translate the SQuAD training set into other languages but did not present an empirical comparison to any other label projection method. Similarly, Hu et al. (2020) used XML tags for the same purpose when creating the XTREME dataset, but this was only briefly mentioned in a few sentences in the appendix. Besides QA, MulDA (Liu et al., 2021; Zhou et al., 2022) is a labeled sequence translation method that replaces entities with variable names for cross-lingual NER. However, no comparison with existing projection methods was presented, as the main focus is generating synthetic labeled data using language models.

## 3 Analysis Of Marker-Based Projection

The idea of marker-based label projection is straightforward: wrap labeled spans with special marker tokens, then translate the modified sentence (see the example in Figure 1b). The projected spans can be directly decoded from the translation if the markers are retained. However, inserting markers inevitably degrades the translation quality. In this section, we analyze several questions left open by prior work (Lewis et al., 2020; Hu et al., 2020), including (1) how well the special markers are preserved in translation, and (2) the impact of different marker choices on translation quality and on the performance of cross-lingual transfer.

## 3.1 Experimental Setup

We conduct experiments on three NLP tasks and 57 languages with five multilingual datasets to comprehensively evaluate the marker-based method. Most multilingual datasets are created by either (1) directly collecting and annotating data in the target language, or (2) translating English data with human or machine translation and then projecting labels manually or automatically. Four of our selected datasets were created with the first method, as evaluation on translated datasets may overestimate performance on a target language, when in fact a model might only perform well on *translationese* (Riley et al., 2020).

Datasets. Our experiments include NER via the WikiANN (Pan et al., 2017; Rahimi et al., 2019) and MasakhaNER 2.0 (Adelani et al., 2022) datasets (§5.1), in addition to the CoNLL-2002/2003 multilingual NER datasets (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) for comparison with Liu et al. (2021) (§F.1). For event extraction, we use the ACE05 corpus (Walker et al., 2006), which consists of six sub-tasks: entity and relation extraction, and event trigger/argument identification and classification. For QA, we use TyDiQA-GoldP (Clark et al., 2020), which contains challenging questions written in eight languages. Data statistics are shown in Table 1 and Table 10 in the Appendix.

|                  | WikiANN     | MasakhaNER       | ACE05       | TyDiQA      |
|------------------|-------------|------------------|-------------|-------------|
| # of Lang.       | 39          | 20 (from Africa) | 2           | 8           |
| # of Docs        | –           | –                | 526/31/40   | 3,696/440/– |
| # of Sent.       | 20k/10k/10k | 4.4k/638/1.2k    | 19k/901/676 | 17k/2,122/– |
| Avg. Length      | –/8.0       | –/23.9           | 519.3/14.9  | 96.8/21.0   |
| Avg. # of Spans  | 1.4         | 1.8              | 2.9         | 1.0         |

Table 1: The detailed statistics of the train/dev/test sets for each dataset. **Avg. Length** represents the average number of tokens in each article/sentence, and **Avg. # of Spans** denotes the average number of annotated spans in each sentence (in each article for TyDiQA).

IE and QA Models.
We use XLM-RoBERTalarge (Conneau et al., 2020) as the backbone model, except where noted.2 For NER and QA, we fine-tune XLM-R with standard tagging and SQuAD-style span prediction layers. For event extraction, we use the OneIE framework (Lin et al., 2020), a joint neural model for information extraction with global features. We report average F1 scores over three runs with different random seeds. More implementation details can be found in Appendix D. MT Systems. We experiment with two MT systems: (1) the Google Translation (GMT) API,3and (2) an open-sourced multilingual translation model NLLB (No Language Left Behind) (Costa-jussà et al., 2022) with 3.3 billion parameters, supporting the translation between any pair of 200 languages.4 ## 3.2 Choice Of Markers Ideally, a good span marker should minimize the impact on translation quality while having a high chance of being preserved during translation. However, prior works used quotation marks (" ") (Lee et al., 2018; Lewis et al., 2020) and XML/HTML tags (e.g., <a> or <PER>) (Hu et al., 2020; Ahmad et al., 2021) without much justification, which we address below. Preserved in Translation. In a pilot study, we experimented with several markers, including XML tags, [], " ", (), <>, and {}, etc. We found that both MT systems work reasonably well to retain square brackets ([]) and XML markers during the translation across many languages, while other markers that have language-specific formats are easily lost in translation. For example, quotation marks (" ") are often translated in a languagespecific way, e.g., «» in Russian, and are sometimes lost entirely in Arabic and Finnish, leading to low projection rates: 53% for Russian, 76% for Arabic, and 79% for Finnish based on TyDiQA dataset. The *projection rate* is measured by the percentage of data in which the numbers and type of special markers in the translations match with the source sentences. To improve the robustness of MT system in handling markers, we found further finetuning the MT system on synthetic data, where the special markers are inserted around name entities in parallel sentences, for only 200 steps is sufficient to boost the projection rate while maintaining translation quality (more details in §4). Impact on Translation Quality. After narrowing down the choices to XML tags and square brackets, we further measure the impact of adding markers on the translation quality by adopting the evaluation setup used by Fan et al. (2021). We compare translation quality, with and without markers inserted, from English to various target languages using BLEU score. Table 2 presents the experimental results with Google Translation. Examples of errors are shown in Table 3. We find that inserting special markers indeed degrades translation quality, but overall, square brackets have less negative impact compared to XML tags. We hypothesize this is because using [] introduces less number of extra subword tokens in the encoding and decoding of the text during translation, compared to XML tags. More results on 55 languages using the en → Lang. Corpus # **sent** GMT - **BLEU** Orig. 
XML [] Arabic (ar) TED18 1,997 **20.7** 14.0 15.1 German (de) TED18 1,997 **44.5** 33.9 41.9 Spanish (es) TED18 1,997 **45.9** 34.2 35.4 French (fr) TED18 1,997 **37.6** 31.0 31.9 Hindi (hi) TED18 1,070 **14.5** 12.8 13.0 Russian (ru) WMT 1,997 **36.4** 28.5 35.2 Vietnamese (vi) TED18 1,997 **32.8** 28.5 27.0 Chinese (zh) WMT 1,997 **40.6** 33.4 37.1 AVG 1,881 **34.1** 27.0 29.6 Table 2: Comparsion of translation quality with different span markers, where the **best** and second best are marked. Overall, square brackets ([]) have less negative impact compared to XML tags. "Orig." denotes the translation when no marker is inserted. English #1:The divorce settlement called for Giuliani to pay Hanover more than $6.8 million, according to the reporter . Orig.:据记者称,离婚协议要求朱利安尼向汉诺威 支付超过680万美元。 [ ] :据[记者]报道,[离婚]协议要求[朱利安尼][支付] [汉诺威]超过680万美元。 XML:据<e>记者<,<a>离婚</a>和解协议要求<b>朱利安尼 </b><c>支付</c><d>汉诺威</d>超过680万美元。/e> 。 English #2:The WTO is headquartered in Geneva . Orig.: . J Jk. ú ¯ éJ ÖÏAªË@ èPAj. JË@ éÒ ¢ JÖÏ ú æJ KQË@ Q®ÖÏ@ ©®K [ ] : . [ J Jk. ] éJ ÖÏAªË@ èPAj. JË@ éÒ ¢ JÖÏ ú ¯ ú æJ KQË@ Q®ÖÏ@ ©®K XML: . </b> J Jk. <b> ú ¯ </a> WTO <a> È ú æJ KQË@ Q®ÖÏ@ ©®K Table 3: Example errors and correctly projected markers with GMT. In #1, a necessary Chinese verb "报道 (report)" is lost in the XML-marked translation, while tags (<e>, /e>) are also mismatched due to the word reordering of "记者 (reporter)". In #2, []-marked translation fails to preserve the square brackets ([]) around the Arabic translation of "WTO" (marked by underline). Table 4: Comparison of different markers on TyDiQAGoldP by training on the translated *projected data only*. Overall, square brackets ([]) have the best transfer learning performance. | en → Lang. | Hu et al. | GMT - TyDiQA F1 XML [] " " | | | |-----------------|-------------|------------------------------|------|------| | Arabic (ar) | 68.8 | 68.4 | 71.7 | 66.5 | | Bengali (bn) | 58.6 | 64.8 | 64.1 | 69.3 | | Finnish (fi) | 69.4 | 69.6 | 70.8 | 68.0 | | Indonesian (id) | 75.5 | 76.0 | 78.6 | 77.3 | | Korean (ko) | 56.8 | 55.6 | 59.0 | 59.6 | | Russian (ru) | 49.5 | 65.7 | 66.1 | 52.3 | | Swahili (sw) | 69.1 | 70.4 | 70.1 | 70.1 | | Telugu (te) | 70.2 | 69.0 | 67.3 | 67.9 | | AVG | 64.7 | 67.4 | 68.5 | 66.4 | NLLB translation system and more details about the evaluation setup can be found in Appendix G.1. Impact on Transfer Learning. We next evaluate the impact of different marker choices on the performance of cross-lingual transfer. The results on the TyDiQA dataset are presented in Table 4. On average, square brackets ([]) have the best transfer learning performance. We also directly compare with the projection data released by Hu et al., which utilizes XML tags and a Google internal translation system in the year 2020 to translate QA datasets. More results on NER and event extraction tasks, and comparison with the alignment-based projection methods are presented in Table 8. ![4_image_0.png](4_image_0.png) ## 4 Easyp**Roject** Based on our analysis, we develop an optimized version of the mark-then-translate method, which we call EASYPROJECT. 5 Our improvements target the two weaknesses of the marker-based approach: (1) special markers may get lost during translation; and (2) although square brackets ([]) show strong performance, they don't carry the correspondence between original spans and the ones in the translation (e.g., [ Churchill ] was born in [ England ] in [ 1874 ].), as the XML tags (e.g., <a> Churchill </a> was born in <b> England </b> in <c> 1874 </c>.). 
If multiple annotated spans with different labels exist in one sentence, it is challenging to assign labels to the projected entities in the translation, as word order can change between languages.

## 4.1 Fine-Tuning NLLB

To improve the robustness of the MT system in handling special markers, we further fine-tune the NLLB model on synthetic data. We use parameter-efficient fine-tuning by only updating the last layer of the encoder and decoder, which accounts for 4.2% of all parameters. We found that fine-tuning for 200 steps is sufficient to improve the projection rate on the TyDiQA dataset from 70% to 96.4% while maintaining translation quality.

Creating Synthetic Data. We first construct a parallel corpus in which special markers are inserted around the corresponding named entities in the source and target sentences, with the following steps.

5 This name was inspired by Daumé III (2007).

![4_image_1.png](4_image_1.png)

1. Detect named entities on the English side of the parallel corpus, using the SpaCy NER system,6 which covers 18 types of NER labels.

2. Translate the English entity names into the target language, and use string matching to find the corresponding entities in the target sentence. Given a pair of entities in the source and target sentences, we add square brackets ([]) around both of them.

3. Select all sentence pairs that contain more than one []-marked entity. We also sort the rest of the data by length and include the top-k sentence pairs.

In total, we use 5,000 sentence pairs for each language pair. We use the training data of the NLLB model7 as the source of the parallel sentences, and use sentence pairs from high-resource language pairs (en to {*de, es, nl, zh, ar*}), which are selected based on the CoNLL-2002/2003 ({*de, es, nl*}) and ACE ({*zh, ar*}) datasets.

Parameter-efficient Fine-tuning. To save compute and preserve the translation ability of the model, we only update the weights in the last layer of the encoder and decoder for 200 steps, with a learning rate of 5e-5 and a batch size of 8, which takes around 2 minutes on an A40 GPU. The changes in projection rate and translation quality during the fine-tuning process are shown in Figure 3. The fine-tuned NLLB model improves the projection rate on TyDiQA from 70% to 96.4%. TyDiQA is particularly challenging due to its relatively long sentences, and the translation model may sometimes ignore the inserted markers. By fine-tuning on high-resource languages, we found that the model generalizes well to the other language pairs. In a pilot study, we noticed that fine-tuning on low-resource languages, such as the African languages in the MasakhaNER corpus, does not generalize well and leads to lower translation quality. We will release all the fine-tuned models.

Fuzzy String Matching. To identify the corresponding labels when more than one projected entity exists in the translation, we design a fuzzy string-matching method. We first translate each annotated span in the original sentence independently, resulting in a set of labeled mentions. To identify labels for the unlabeled spans in the []-marked translation, we compare each unlabeled span with the labeled mentions using the ratio() function in the difflib library.8 Two strings are considered matched if they have >50% matched subsequences, and the associated label is assigned to the bracketed span. We also experiment with matching span labels left-to-right based on their relative position in the text. The results are shown in Table 5.
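A minimal sketch of this fuzzy label-matching step is shown below (illustrative only; the exact thresholding and tie-breaking in the released code may differ). Here `bracketed_spans` are the unlabeled spans recovered from the []-marked translation and `translated_mentions` are the independently translated, labeled mentions.

```python
from difflib import SequenceMatcher

def assign_labels(bracketed_spans, translated_mentions):
    """Assign a label to each [ ]-span via fuzzy string similarity (> 0.5)."""
    labeled = []
    for span in bracketed_spans:
        best_label, best_score = None, 0.0
        for mention, label in translated_mentions:
            score = SequenceMatcher(None, span, mention).ratio()
            if score > max(0.5, best_score):   # keep the best match above 50%
                best_label, best_score = label, score
        labeled.append((span, best_label))
    return labeled

# e.g. assign_labels(["布朗克斯", "纽约市"],
#                    [("布朗克斯", "LOC"), ("纽约市", "LOC")])
```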
Using fuzzy string-matching leads to overall better performance since it can assign the span labels more accurately. Putting all the improvements together, we call this improved variant of marker-based method EASYP**ROJECT** for easil**y project**ing labels. ## 5 Experiments In this section, we comprehensively evaluate the effectiveness of EASYPROJECT and analyze the 8https://docs.python.org/3/library/difflib. html\#difflib.SequenceMatcher.ratio Original NLLB Fine-tuned NLLB Proj. Rate F1 Proj. Rate F1 [ ] +Fuzzy String Match 87.0% 62.6 93.7% **62.9** [ ] +Match by Sequence 87.7% 62.6 **94.4%** 62.3 Table 5: A comparison of varied methods to translate sentences and assign labels on the devset of MasakhaNER 2.0 corpus. "Proj.Rate" denotes the projection rate, which is defined in §3.2. key factors that impact the performance of crosslingual transfer learning. ## 5.1 Comparison To Alignment-Based Method We first compare EasyProject with the traditional pipeline approach based on state-of-the-art bilingual word alignment models. For both methods, we apply a simple filtering rule that removes sentences with different numbers of annotations before and after projection. Bilingual Word Alignment. We experiment with two state-of-the-art neural word aligners: (1) the unsupervised version of **Awesome-align** (Dou and Neubig, 2021) and its supervised version, which we extended from 5 to 39 languages for this paper, and (2) **QA-align** (Nagata et al., 2020) which formulates the word alignment problem as SQuAD-style QA task. More details on the word alignment models can be found in Appendix C. Transfer Learning Results. As summarized in Figure 2, EasyProject outperforms alignmentbased projection for most languages, even though Table 6: Average results over 18 African languages on the MasaKhaNER 2.0 corpus, as two languages are not supported by NLLB model. The mDeBERTa model is used here sinces it has the strongest performance on African languages in the original paper (Adelani et al., 2022), where the "Ref" column is from. Full results on each languages are in Table 11 in Appendix B . Table 7: Average results for mT5 models over 39 target languages on WikiANN. The XLM-R model is listed for comparison. "Ref." column is the performance from prior work (He et al.; Xue et al.). | Fine-tuneen | NLLB+Aligner | NLLB+Markers | | | |-------------------------------------------------------|----------------|----------------|-------------|--------------| | Ref. | mDeBERTa | Awesome-align | XML | EasyProject. | | 56.9 | 55.0 | 63.2 (+8.2) | 63.8 (+8.8) | 64.3 (+9.3) | | Table 6: Average results over 18 African languages on | | | | | | Model | Ref. | Fine-tuneen | GMT+EasyProject | |------------|--------|---------------|-------------------| | mT5large | 58.2 | 61.2 | 68.5 (+7.3) | | mT5XL | 65.1 | 62.9 | 68.6 (+5.7) | | XLM-Rlarge | 63.3 | 64.3 | 68.9 (+4.6) | | en → Lang. | Fine-tuneen | NLLB+Word Align | NLLB+Markers | GMT+Word Align | GMT+Markers | | | | | | | | | |------------------|---------------|-------------------|----------------|------------------|-----------------|-----------------|--------------|--------------|-----------------|------|-----------------|--------------|-------------| | Ref. | XLMR | QAali. | Awes. | Awesft | XML | EProj. (∆XLMR ) | QAali. | Awes. | Awesft | XML | EProj. 
(∆XLMR ) | | | | yo | 41.3 | 37.1 | - | 73.2 | 78.0 | 68.7 | 77.7 (+40.6) | - | 72.1 | 66.1 | 71.8 | 73.8 (+36.7) | | | ja | 18.3 | 18.0 | 19.3 | 23.4 | 22.4 | 17.3 | 45.5 (+27.5) | 19.3 | 23.0 | 22.6 | 42.0 | 43.5 (+25.5) | | | zh | 25.8 | 27.1 | 47.6 | 36.0 | 34.0 | 46.2 | 46.6 (+19.5) | 45.2 | 43.3 | 39.6 | 43.8 | 45.9 (+18.8) | | | th | 1.5 | 0.7 | - | 2.6 | 2.5 | 8.8 | 14.0 (+13.3) | - | 1.2 | 1.3 | 14.7 | 15.1 (+14.4) | | | ur | 54.2 | 63.6 | - | 71.6 | 71.8 | 74.4 | 74.7 (+11.1) | - | 70.2 | 72.3 | 76.3 | 74.7 (+11.1) | | | he | 54.1 | 56.0 | - | 58.3 | 58.1 | 61.1 | 63.4 (+7.4) | - | 59.6 | 60.2 | 63.7 | 67.1 (+11.1) | | | ms | 69.8 | 64.1 | - | 69.4 | 72.7 | 74.6 | 73.9 (+9.8) | - | 73.0 | 73.8 | 73.2 | 74.1 (+10.0) | | | my | 51.3 | 53.5 | - | 61.6 | 62.9 | 56.2 | 60.1 (+6.6) | - | 60.2 | 60.1 | 57.0 | 62.0 (+8.5) | | | ar | 43.7 | 48.5 | 49.3 | 48.7 | 47.6 | 45.9 | 50.5 (+2.0) | 50.7 | 50.9 | 51.2 | 51.3 | 56.3 (+7.8) | | | jv | 58.4 | 62.3 | - | 64.8 | 61.6 | 67.7 | 67.0 (+4.7) | - | 64.6 | 68.8 | 69.2 | 69.8 (+7.5) | | | tl | 72.2 | 73.0 | - | 80.1 | 78.8 | 79.5 | 79.3 (+6.3) | - | 80.4 | 80.4 | 79.9 | 80.0 (+7.0) | | | hi | 71.0 | 69.5 | - | 73.9 | 73.4 | 73.8 | 74.4 (+4.9) | - | 75.6 | 76.0 | 75.9 | 75.7 (+6.2) | | | ka | 68.9 | 68.8 | - | 74.5 | 75.0 | 70.4 | 74.2 (+5.4) | - | 73.5 | 73.2 | 72.7 | 74.7 (+5.9) | | | bn | 76.3 | 75.1 | - | 80.5 | 80.2 | 80.1 | 80.7 (+5.6) | - | 82.0 | 81.7 | 80.6 | 80.9 (+5.8) | | | ta | 56.9 | 58.8 | - | 63.1 | 63.7 | 53.8 | 63.5 (+4.7) | - | 62.4 | 63.2 | 63.9 | 64.3 (+5.5) | | | eu | 62.1 | 63.6 | - | 70.3 | 70.0 | 64.7 | 68.7 (+5.1) | - | 69.8 | 66.5 | 67.5 | 69.0 (+5.4) | | | ko | 58.0 | 57.9 | - | 61.1 | 60.6 | 59.4 | 58.0 (+0.1) | - | 62.9 | 62.4 | 61.7 | 61.9 (+4.0) | | | mr | 64.1 | 63.9 | - | 63.6 | 64.0 | 62.9 | 64.9 (+1.0) | - | 62.6 | 61.2 | 64.0 | 67.1 (+3.2) | | | sw | 70.0 | 68.5 | - | 70.6 | 71.5 | 70.1 | 69.7 (+1.2) | - | 70.2 | 71.5 | 72.2 | 70.7 (+2.2) | | | vi | 77.2 | 74.2 | - | 70.4 | 65.8 | 77.8 | 77.5 (+3.3) | - | 70.4 | 67.2 | 77.5 | 76.0 (+1.8) | | | te | 52.3 | 55.6 | - | 57.7 | 56.3 | 51.8 | 55.9 (+0.3) | - | 57.4 | 56.8 | 57.6 | 57.4 (+1.8) | | | id | 52.3 | 52.4 | - | 53.5 | 55.3 | 52.7 | 53.1 (+0.7) | - | 52.7 | 55.0 | 57.3 | 53.9 (+1.5) | | | ml | 65.8 | 63.5 | - | 63.2 | 64.8 | 56.5 | 61.3 (-2.2) | - | 61.9 | 63.0 | 68.1 | 64.3 (+0.8) | | | es | 68.8 | 74.8 | - | 72.2 | 70.2 | 73.3 | 71.7 (-3.1) | - | 71.3 | 72.6 | 73.5 | 75.6 (+0.8) | | | de | 77.9 | 79.4 | 79.7 | 79.5 | 79.6 | 81.5 | 80.0 (+0.6) | 79.5 | 80.0 | 79.4 | 79.8 | 80.2 (+0.8) | | | kk | 49.8 | 53.5 | - | 53.5 | 53.9 | 40.4 | 54.0 (+0.5) | - | 53.2 | 55.1 | 51.3 | 54.2 (+0.7) | | | fr | 79.0 | 80.1 | 80.7 | 79.8 | 80.9 | 80.9 | 81.5 (+1.4) | 79.6 | 80.7 | 79.4 | 81.5 | 80.8 (+0.7) | | | af | 77.6 | 78.6 | - | 79.3 | 78.4 | 79.1 | 79.4 (+0.8) | - | 79.1 | 78.9 | 79.0 | 79.2 (+0.6) | | | et | 78.0 | 79.6 | - | 80.7 | 79.2 | 80.2 | 79.9 (+0.3) | - | 80.2 | 79.6 | 78.6 | 80.1 (+0.5) | | | hu | 79.3 | 81.0 | - | 80.3 | 79.8 | 79.7 | 80.4 (-0.6) | - | 79.9 | 79.7 | 80.6 | 80.7 (-0.3) | | | fi | 78.6 | 80.6 | - | 81.0 | 80.9 | 80.4 | 79.8 (-0.8) | - | 80.7 | 79.7 | 78.8 | 80.3 (-0.3) | | | it | 81.1 | 81.3 | - | 80.5 | 80.5 | 81.9 | 81.2 (-0.1) | - | 80.3 | 80.4 | 81.1 | 80.9 (-0.4) | | | tr | 78.9 | 80.3 | - | 80.6 | 81.0 | 80.1 | 79.5 (-0.8) | - | 80.1 | 80.2 | 81.5 | 79.6 (-0.7) | | | nl | 84.3 | 84.1 | - | 83.4 | 83.3 | 83.0 | 83.4 (-0.7) | - | 83.5 | 82.9 | 83.0 | 83.1 (-1.0) | | | bg | 81.2 | 82.1 | - | 80.2 | 78.8 | 81.9 | 82.5 (+0.4) | - | 80.9 | 
79.7 | 82.5 | 80.6 (-1.5) | | | pt | 79.6 | 82.0 | - | 80.9 | 80.4 | 82.6 | 81.9 (-0.1) | - | 79.0 | 80.2 | 80.6 | 80.1 (-1.9) | | | ru | 71.5 | 71.1 | - | 68.9 | 68.1 | 70.0 | 70.3 (-0.8) | - | 67.4 | 66.8 | 67.4 | 68.2 (-2.9) | | | el | 77.2 | 79.3 | - | 76.3 | 75.7 | 77.7 | 74.1 (-5.2) | - | 73.1 | 75.2 | 76.2 | 75.0 (-4.3) | | | fa | 61.1 | 64.3 | - | 41.5 | 47.3 | 51.3 | 52.1 (-12.2) | - | 52.9 | 52.4 | 45.5 | 52.0 (-12.3) | | | AVG | 63.3 | 64.3 | - | 66.4 | 66.4 | 66.1 | 68.4 (+4.1) | - | 66.7 | 66.6 | 68.3 | 68.9 (+4.6) | | | NER | ko | 31.9 | 56.1 | - | 36.9 | 36.4 | 64.8 | 67.7 (+11.6) | - | 37.6 | 37.1 | 60.9 | 65.0 (+8.9) | | bn | 64.0 | 66.0 | - | 71.1 | 72.6 | 63.7 | 69.6 (+3.6) | - | 73.6 | 69.3 | 74.4 | 71.0 (+5.0) | | | fi | 70.5 | 69.7 | - | 74.9 | 74.0 | 73.0 | 73.3 (+3.6) | - | 74.9 | 74.9 | 73.1 | 74.0 (+4.3) | | | te | 70.1 | 72.9 | - | 74.9 | 74.6 | 69.9 | 78.3 (+5.4) | - | 75.9 | 69.9 | 77.0 | 77.0 (+4.1) | | | ar | 67.6 | 72.4 | 74.2 | 76.8 | 76.4 | 72.7 | 75.9 (+3.5) | 74.0 | 76.3 | 76.6 | 75.8 | 76.4 (+4.0) | | | sw | 66.1 | 69.9 | - | 73.0 | 74.7 | 72.4 | 73.4 (+3.5) | - | 72.3 | 73.4 | 73.6 | 73.5 (+3.6) | | | ru | 67.0 | 66.5 | - | 70.9 | 71.5 | 69.1 | 70.4 (+3.9) | - | 71.6 | 69.7 | 70.2 | 69.8 (+3.3) | | | id | 77.4 | 78.0 | - | 81.6 | 81.1 | 79.6 | 80.3 (+2.3) | - | 80.4 | 81.3 | 78.9 | 79.7 (+1.7) | | | AVG | 64.3 | 68.9 | - | 70.0 | 70.2 | 70.7 | 73.6 (+4.7) | - | 70.3 | 69.0 | 73.0 | 73.3 (+4.4) | | | QA | Fine-tuneen | NLLB+Word Align | NLLB+Markers | GMT+Word Align | GMT+Markers | | | | | | | | | | Event Extraction | XLMR | QAali. Awes. | Awesft | XML | EProj. (∆XLMR ) | QAali. Awes. | Awesft | XML | EProj. (∆XLMR ) | | | | | | Entity | 69.2 | 74.1 | 74.2 | 74.2 | 73.6 | 73.8 (+4.6) | 74.4 | 74.3 | 74.0 | 73.7 | 74.0 (+4.8) | | | | Relation | 28.1 | 34.7 | 35.2 | 30.8 | 30.8 | 30.7 (+2.6) | 34.8 | 33.1 | 34.2 | 31.8 | 33.7 (+5.6) | | | | Trig-I | 42.7 | 43.5 | 43.0 | 44.7 | 43.3 | 43.7 (+1.0) | 43.6 | 44.2 | 43.7 | 43.8 | 44.0 (+1.3) | | | | Arabic Trig-C | 40.0 | 41.4 | 41.3 | 42.9 | 41.1 | 41.8 (+1.8) | 41.8 | 42.6 | 42.0 | 41.5 | 42.0 (+2.0) | | | | Arg-I | 33.5 | 37.1 | 38.1 | 37.6 | 37.1 | 37.6 (+4.1) | 37.7 | 37.9 | 37.6 | 36.9 | 37.8 (+4.3) | | | | Arg-C | 30.8 | 34.3 | 35.4 | 34.7 | 34.9 | 34.8 (+4.0) | 34.6 | 34.5 | 34.5 | 34.1 | 35.2 (+4.4) | | | | AVG | 40.7 | 44.2 | 44.5 | 44.1 | 43.5 | 43.7 (+3.0) | 44.5 | 44.4 | 44.3 | 43.6 | 44.5 (+3.8) | | | | Entity | 59.1 | 67.8 | 70.7 | 70.7 | 73.5 | 73.5 (+14.4) | 67.1 | 68.8 | 70.6 | 70.2 | 71.0 (+11.9) | | | | Relation | 20.4 | 31.2 | 34.7 | 35.9 | 37.3 | 37.8 (+17.4) | 30.7 | 28.2 | 30.1 | 35.6 | 28.4 (+8.0) | | | | Trig-I | 25.0 | 48.6 | 55.3 | 56.2 | 49.3 | 52.5 (+27.5) | 43.7 | 53.5 | 50.0 | 50.7 | 52.6 (+27.6) | | | | Chinese Trig-C | 23.9 | 45.6 | 52.1 | 52.0 | 46.1 | 49.0 (+25.1) | 40.8 | 50.0 | 46.6 | 47.4 | 49.3 (+25.4) | | | | Arg-I | 28.6 | 42.6 | 42.8 | 40.9 | 43.6 | 42.3 (+13.7) | 38.7 | 39.6 | 39.4 | 39.8 | 40.1 (+11.5) | | | | Arg-C | 28.1 | 40.3 | 41.2 | 39.4 | 42.1 | 40.8 (+12.7) | 37.3 | 38.4 | 38.2 | 38.2 | 38.2 (+10.1) | | | | AVG | 30.9 | 46.0 | 49.5 | 49.2 | 48.7 | 49.3 (+18.4) | 43.1 | 46.4 | 45.8 | 47.0 | 46.6 (+15.7) | | | English:He was buried in Woodlawn Cemetery in Bronx , New York City . Alignment-based:他被埋葬在纽约市 布朗 克斯 的伍德劳恩公墓。 EasyProject:他被埋葬在纽约市 布朗 克斯 的伍德劳恩公墓。 span markers degrade translation quality. 
In Table 6 and 8, we show that EasyProject almost always outperforms alignment-based projection on NER, QA, and the more challenging event extraction tasks, when training on a combination of English data and the translated projected data in target languages. In addition, we find that EasyProject generally performs better than using XML tags, as the former has less impact on the translation quality. We also notice the relatively low zero-shot performance in ja, zh, and th on WikiAnn dataset, which is consistent with scores reported in prior literature (He et al., 2021b). We suspect this is due to their distinct script systems, and adding EasyProject data brings significant improvements to all of them - Japanese (+25.5 F1), Chinese (+18.8 F1), and Thai (+14.4 F1). EasyProject (GMT) also improves the performance of mT5large and mT5XL by 7.3 and 5.7 F1 on average across all target languages, and mT5XXL by 2.2 F1 on a subset of 8 languages, as shown in Table 7. Full results of the mT5 model are provided in Table 22 in Appendix. Accuracy of Projected Annotations. To answer why EasyProject can outperform alignment-based method even though it degrades translation quality, we manually inspect 400 sentences sampled from the WikiANN training set. EasyProject correctly projects 100% and 97.5% of the label spans, when using Google Translation and NLLB, respectively. Whereas the traditional method based on Awesomealign only achieves 97.5% and 93.4% accuracy. We found EasyProject can more accurately preserve the boundaries of the label span. For the alignmentbased method, most errors are caused by partial or missed alignments, as demonstrated in Table 9. More analyses are provided in the Appendix G.2. ![7_image_0.png](7_image_0.png) ## 5.2 Size Of Pre-Training Data Vs. Improvement In Performance Figure 4 shows improvements in NER F1 using EasyProject vs. size of data for each language in XLM-RoBERTa's pre-training corpus. EasyProject provides larger improvements on low-resource languages and languages without whitespaces. For high-resource languages in the Indo-European (e.g., Germanic and Romance) or Uralic families, using projected data struggles to significantly improve over a strong fine-tuning baseline. ## 5.3 Transfer From Non-English Languages Recent work has suggested that English may not always be the best language to transfer from (Turc et al., 2021). We demonstrate that the markerbased method is not limited to English-centric transfer learning; rather, it can be used for transfer learning from any language to any language provided with the availability of multilingual MT systems. In Figure 5, we show the relative F1 improvements of using EasyProject over fine-tuning on source language only for 9 different languages (81 directions in total), leveraging the multilingual capabilities of NLLB. Fine-tuning models only on source-language data does not work well when transferring to or from Chinese, consistent with observations from Hu et al. (2020). The markerbased method addresses this problem by providing substantial improvements in F1 on the WikiANN dataset for Chinese. Transferring to Arabic and from Russian are also challenging, but again, the marker-based method greatly boosts performance. ![8_image_0.png](8_image_0.png) ![8_image_1.png](8_image_1.png) ## 5.4 More Experiments And Analyses In The Appendix We also compare EasyProject against MulDA (Liu et al., 2021) (§F.1) and bitext projection (F.2), as well as evaluating it on low-resource languages: Maori and Turkmen ( ¯ §F.3). 
In addition, we analyze the two branches of label projection methods from other aspects, including projection rate (§G.2) and translation speed (§G.3). Due to space limits, we present all of them in the appendix. On the data side, we fix a sentence splitting issue for 12 extremely long sentences in the ACE05 Arabic test set (§E). This issue has also been noticed by other researchers (Huang et al., 2022b). We will release the improved ACE Arabic dataset to the community.

## 6 Conclusion

In this paper, we present a thorough empirical assessment of various approaches for cross-lingual label projection. We also design an improved variant of the mark-and-translate method, which we call EASYPROJECT. Experiments on 57 target languages and three well-studied NLP tasks show that EasyProject consistently outperforms the alignment-based methods and effectively improves the performance of cross-lingual transfer.

## Limitations

While our study shows that EasyProject can effectively translate source sentences with special markers inserted into the target languages using Google Translation and the NLLB model, it is unclear whether all translation models can work well when special markers are inserted. To generalize this approach to future MT systems, we design a simple and computationally efficient approach to improve the robustness of MT systems in handling special markers. However, the translation quality for the marker-inserted text still falls behind that of the original text. We leave further optimizing the translation quality as future work.

## Acknowledgements

This material is based upon work supported by the NSF (IIS-2052498) and IARPA via the BETTER and HIATUS programs (2019-19051600004, 2022-22072200004). The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.

## References

David Ifeoluwa Adelani, Graham Neubig, Sebastian Ruder, Shruti Rijhwani, Michael Beukman, Chester Palen-Michel, Constantine Lignos, Jesujoba O Alabi, Shamsuddeen H Muhammad, Peter Nabende, et al. 2022. MasakhaNER 2.0: Africa-centric transfer learning for named entity recognition. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.

Wasi Uddin Ahmad, Nanyun Peng, and Kai-Wei Chang. 2021. GATE: Graph attention transformer encoder for cross-lingual relation and event extraction. In Proceedings of the AAAI Conference on Artificial Intelligence.

Alan Akbik, Laura Chiticariu, Marina Danilevsky, Yunyao Li, Shivakumar Vaithyanathan, and Huaiyu Zhu. 2015a. Generating high quality proposition Banks for multilingual semantic role labeling. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing.

Alan Akbik, Laura Chiticariu, Marina Danilevsky, Yunyao Li, Shivakumar Vaithyanathan, and Huaiyu Zhu. 2015b. Generating high quality proposition Banks for multilingual semantic role labeling. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing.

Maryam Aminian, Mohammad Sadegh Rasooli, and Mona Diab. 2017.
Transferring semantic roles using translation and syntactic information. In Proceedings of the Eighth International Joint Conference on Natural Language Processing. Maryam Aminian, Mohammad Sadegh Rasooli, and Mona Diab. 2019. Cross-lingual transfer of semantic roles: From raw text to semantic roles. In *Proceedings of the 13th International Conference on* Computational Semantics. Mihaela Bornea, Lin Pan, Sara Rosenthal, Radu Florian, and Avirup Sil. 2021. Multilingual transfer learning for QA using translation as data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence. Yang Chen and Alan Ritter. 2021. Model selection for cross-lingual transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. *Transactions of the* Association for Computational Linguistics. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Marta R Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022. No language left behind: Scaling human-centered machine translation. *arXiv preprint* arXiv:2207.04672. Hal Daumé III. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Angel Daza and Anette Frank. 2020. X-SRL: A parallel cross-lingual semantic role labeling dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Jacob Devlin. 2018. Multilingual BERT readme document. Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics. Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzmán, and Philipp Koehn. 2020. CCAligned: A massive collection of cross-lingual web-document pairs. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Ramy Eskander, Smaranda Muresan, and Michael Collins. 2020. Unsupervised cross-lingual part-ofspeech tagging for truly low-resource scenarios. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021. Beyond English-centric multilingual machine translation. Hao Fei, Meishan Zhang, and Donghong Ji. 2020. Cross-lingual semantic role labeling with highquality translated training corpus. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics. Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021a. 
DeBERTaV3: Improving deberta using ELECTRAstyle pre-training with gradient-disentangled embedding sharing. *ArXiv preprint*. Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jiawei Low, Lidong Bing, and Luo Si. 2021b. On the effectiveness of adapter-based tuning for pretrained language model adaptation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation. Proceedings of Machine Learning Research. Kuan-Hao Huang, I-Hung Hsu, Prem Natarajan, KaiWei Chang, and Nanyun Peng. 2022a. Multilingual generative language models for zero-shot crosslingual event argument extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Kuan-Hao Huang, I-Hung Hsu, Prem Natarajan, KaiWei Chang, and Nanyun Peng. 2022b. Multilingual generative language models for zero-shot crosslingual event argument extraction. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics. Masoud Jalili Sabet, Philipp Dufter, François Yvon, and Hinrich Schütze. 2020. SimAlign: High quality word alignments without parallel training data using static and contextualized embeddings. In *Findings* of the Association for Computational Linguistics: EMNLP 2020. David Kamholz, Jonathan Pool, and Susan Colowick. 2014. PanLex: Building a resource for panlingual lexical translation. In *Proceedings of the Ninth International Conference on Language Resources and* Evaluation. Phillip Keung, Yichao Lu, Julian Salazar, and Vikas Bhardwaj. 2020. Don't use English dev: On the zero-shot cross-lingual evaluation of contextual embeddings. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing. Wuwei Lan, Yang Chen, Wei Xu, and Alan Ritter. 2020. An empirical study of pre-trained transformers for Arabic information extraction. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing. Wuwei Lan, Chao Jiang, and Wei Xu. 2021. Neural semi-Markov CRF for monolingual word alignment. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. Kyungjae Lee, Kyoungho Yoon, Sunghyun Park, and Seung-won Hwang. 2018. Semi-supervised training data generation for multilingual question answering. In *Proceedings of the Eleventh International Conference on Language Resources and Evaluation*. Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2020. MLQA: Evaluating cross-lingual extractive question answering. In *Proceedings of the 58th Annual Meeting of the* Association for Computational Linguistics. Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020. A joint neural model for information extraction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Linlin Liu, Bosheng Ding, Lidong Bing, Shafiq Joty, Luo Si, and Chunyan Miao. 2021. MulDA: A multilingual data augmentation framework for lowresource cross-lingual NER. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. 
Mehrad Moradshahi, Giovanni Campagna, Sina Semnani, Silei Xu, and Monica Lam. 2020. Localizing open-ontology QA semantic parsers in a day using machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Masaaki Nagata, Katsuki Chousa, and Masaaki Nishino. 2020. A supervised word alignment method based on cross-language span prediction using multilingual BERT. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing. Jian Ni, Georgiana Dinu, and Radu Florian. 2017. Weakly supervised cross-lingual named entity recognition via effective annotation and representation projection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Massimo Nicosia, Zhongdi Qu, and Yasemin Altun. 2021. Translate & Fill: Improving zero-shot multilingual semantic parsing with synthetic data. In Findings of the Association for Computational Linguistics: EMNLP 2021. Tong Niu, Kazuma Hashimoto, Yingbo Zhou, and Caiming Xiong. 2022. OneAligner: Zero-shot crosslingual transfer with one rich-resource language pair for low-resource sentence retrieval. In Findings of the Association for Computational Linguistics: ACL 2022. Farhad Nooralahzadeh, Giannis Bekoulis, Johannes Bjerva, and Isabelle Augenstein. 2020. Zero-shot cross-lingual transfer with meta learning. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing*. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics. Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Crosslingual name tagging and linking for 282 languages. In *Proceedings of the 55th Annual Meeting of the* Association for Computational Linguistics. Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Se- ´ bastian Ruder. 2020. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Afshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Massively multilingual transfer for NER. In *Proceedings* of the 57th Annual Meeting of the Association for Computational Linguistics. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. 2020. ZeRo: Memory optimizations toward training trillion parameter models. In *International Conference for High Performance Computing, Networking, Storage and Analysis*. IEEE. Parker Riley, Isaac Caswell, Markus Freitag, and David Grangier. 2020. Translationese as a language in "multilingual" NMT. In *Proceedings of the 58th Annual Meeting of the Association for Computational* Linguistics. Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzmán. 2021a. WikiMatrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics. Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, Armand Joulin, and Angela Fan. 2021b. CCMatrix: Mining billions of high-quality parallel sentences on the web. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. Elias Stengel-Eskin, Tzu-ray Su, Matt Post, and Benjamin Van Durme. 2019. A discriminative neural model for cross-lingual word alignment. 
In *Proceedings of the 2019 Conference on Empirical Methods* in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142– 147. Iulia Turc, Kenton Lee, Jacob Eisenstein, Ming-Wei Chang, and Kristina Toutanova. 2021. Revisiting the primacy of English in zero-shot cross-lingual transfer. ArXiv preprint. Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. ACE 2005 multilingual training corpus. LDC2006T06. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-ofthe-art natural language processing. *ArXiv preprint*. Weijia Xu, Batool Haider, and Saab Mansour. 2020. End-to-end slot alignment and recognition for crosslingual NLU. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In *Proceedings of the 2021 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies. Mahsa Yarmohammadi, Shijie Wu, Marc Marone, Haoran Xu, Seth Ebner, Guanghui Qin, Yunmo Chen, Jialiang Guo, Craig Harman, Kenton Murray, Aaron Steven White, Mark Dredze, and Benjamin Van Durme. 2021. Everything is all it takes: A multipronged strategy for zero-shot cross-lingual information extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the First International Conference on Human Language Technology Research. Thomas Zenkel, Joern Wuebker, and John DeNero. 2020. End-to-end neural word alignment outperforms GIZA++. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics. Ran Zhou, Xin Li, Lidong Bing, Erik Cambria, Luo Si, and Chunyan Miao. 2022. Conner: Consistency training for cross-lingual named entity recognition. EMNLP. Imed Zitouni, Jeffrey Sorensen, Xiaoqiang Luo, and Radu Florian. 2005. The impact of morphological stemming on Arabic mention detection and coreference resolution. In *Proceedings of the ACL Workshop on Computational Approaches to Semitic Languages*. ## A Data Statistics For Conll-2002/2003 | CoNLL 2002/2003 | | |-------------------|---------------| | # of Lang. | 3 | | # of Docs | - | | # of Sent. | 14k/3.2k/3.4k | | Avg. Length | –/14.5 | | Avg. # of Spans | 1.7 | The statistics of the CoNLL-2002/2003 multilingual NER dataset are provided in Table 10. Table 10: The detailed statistics of train/dev/test sets for CoNLL-2002/2003 dataset. **Avg. Length** represents the average number of tokens in each article/sentence, and **Avg. \# of Spans** denotes the average number of annotated spans in each sentence. 
## B Full Results On Masakhaner2.0 MasahkaNER2.0 is a NER dataset in the news domain, including the annotations on 20 African languages. Following the setting in the original paper (Adelani et al., 2022), we use CoNLL-03 dataset (Tjong Kim Sang and De Meulder, 2003) as the source corpus, and train the mDeBERTv3 (He et al., 2021a) model on it. Then the trained model is evaluated on the test set of MasahkaNER2.0, with a focus on the PER, ORG, and LOC types. | Language | Ref. | Fine-tuneen | +Awes. | +XML | +EasyProj. | |---------------------|--------|---------------|----------|--------|--------------| | Bambara(bam) | 38.4 | 37.1 | 45.0 | 44.3 | 45.8 | | Ghomala(bbj) | 45.8 | 43.3 | - | - | - | | Ewe(ewe) | 76.4 | 75.3 | 78.3 | 77.8 | 78.5 | | Fon(fon) | 50.6 | 49.6 | 59.3 | 60.2 | 61.4 | | Hausa(hau) | 72.4 | 71.7 | 72.7 | 71.6 | 72.2 | | Igbo(ibo) | 61.4 | 59.3 | 63.5 | 59.6 | 65.6 | | Kinyarwanda(kin) | 67.4 | 66.4 | 63.2 | 70.8 | 71.0 | | Luganda(lug) | 76.5 | 75.3 | 77.7 | 77.9 | 76.7 | | Luo(luo) | 53.4 | 35.8 | 46.5 | 50.0 | 50.2 | | Mossi(mos) | 45.4 | 45.0 | 52.2 | 53.6 | 53.1 | | Chichewa(nya) | 80.1 | 79.5 | 75.1 | 73.5 | 75.3 | | Naija(pcm) | 75.5 | 75.2 | - | - | - | | chiShona(sna) | 37.1 | 35.2 | 69.5 | 56.3 | 55.9 | | Kiswahili(swa) | 87.9 | 87.7 | 82.4 | 81.7 | 83.6 | | Setswana(tsn) | 65.8 | 64.8 | 73.8 | 72.9 | 74.0 | | Akan/Twi(twi) | 49.5 | 50.1 | 62.7 | 64.7 | 65.3 | | Wolof(wol) | 44.8 | 44.2 | 54.5 | 58.9 | 58.9 | | isiXhosa(xho) | 24.5 | 24.0 | 61.7 | 71.9 | 71.1 | | Yoruba(yor) | 40.4 | 36.0 | 38.1 | 36.8 | 36.8 | | isiZulu(zul) | 44.7 | 43.9 | 68.9 | 74.8 | 73.0 | | Averaged Perf. | 56.9 | 55.0 | 63.2 | 63.8 | 64.3 | | Averaged Proj. Rate | - | - | 86.9% | 77.5% | 93.7% | Table 11: F1 scores on MasakhaNER2.0 using NLLB translation model. We skip Ghomala and Naija as they are not supported by NLLB. ## C Details Of Word Alignment Models Awesome-align. This aligner (Dou and Neubig, 2021), when used in the unsupervised setting, primarily relies on the normalized similarity scores of all word pairs between the two sentences, which are calculated based on pre-trained multilingual word embeddings taken from specific Transformer layers. In the supervised setting, with access to parallel text, Awesome-align can be further improved by fine-tuning towards a set of self-training and language model objectives. We include experiments of both the unsupervised (*Awesome*) and supervised (Awesomef t) versions of Awesome-align based on multilingual BERT, which has shown to achieve better word alignment results than XLMRoBERTabase. For the supervised version, we finetune an individual Awesome-align model for each of the 39 target languages in WikiANN using parallel sentences sampled from the M2M model's (Fan et al., 2021) training datasets: CCAligned (ElKishky et al., 2020) and CCMatrix (Schwenk et al., 2021b). Specifically, we randomly sample 200k parallel sentences from the CCAligned corpus for language pairs from English to {te, ka, kk, my, th, yo}, and the rest from the CCMatrix. We use the codebase9from Dou and Neubig (2021) with the default softmax configuration to extract alignment. We do not apply the consistency optimization objective when fine-tuning the models because it may trade precision for recall, as suggested in the official instruction written by the authors. QA-align. This is a state-of-the-art supervised approach (Nagata et al., 2020) that formulates the word alignment problem as a SQuAD-style question answering task by fine-tuning multilingual BERT. 
Specifically, given a word in the source sentence, the model predicts the aligned span in the target sentence and reconciles both source-totarget and target-to-source directions by averaging and thresholding probabilities. We trained the QAalign model for English to Arabic, German, French, Chinese, and Japanese, where gold annotated word alignment data is available. We use the codebase from Nagata et al. (2020).10 For the training data of word alignment between 9https://github.com/neulab/awesome-align 10https://github.com/nttcslab-nlp/word_align | Lang. | Train | Test | |---------|---------|--------| | en-ar | 40,288 | 9,280 | | en-de | 300 | 208 | | en-fr | 300 | 147 | | en-ja | 653 | 357 | | en-zh | 4,879 | 610 | en and {de, zh, ja, fr}, we use the same data as in Nagata et al. (2020). For en-ar, we use the GALE English-Arabic word alignment data from LDC11, and use 80% of the sentence pairs for training. The data statistics can be found in Table 12. ## D Implementation Details Of Ie Models We follow the same learning rates and number of epochs reported in prior work: Hu et al. (2020) for QA, He et al. (2021b) and Pfeiffer et al. (2020) for NER (the latter for mi and tk) and Yarmohammadi et al. (2021) for ACE. For WikiANN NER (Pan et al., 2017), CoNLL-2002/2003 NER (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003), MasakhaNER2.0 (Adelani et al., 2022), and TyDiQA-GoldP (Clark et al., 2020), we use the codebase from the XTREME benchmark (Hu et al., 2020),12 and MasakhaNER2.0 13, which is based on the Huggingface transformers library (Wolf et al., 2019). The hyperparameters of mDeBERTaV3 (276M) for MasakhaNER2.0 and XLMRoBERTalarge(550M) and for other datasets are presented in Table 13 following (Hu et al., 2020; He et al., 2021b; Liu et al., 2021; Adelani et al., 2022). We report the average result of three random seeds and select models based on the English development set. mT5 NER Model. Training mT5XXL (Xue et al., 2021) models, which have over 11 billion parameters, for the NER task is computationally challenging. We formulate the WikiANN NER task as generating a sequence of tokens with special entity tags (e.g. <per>, </per>) inserted around the entity spans. To fit the model into GPU memory | WikiANN CoNLL Masakha TyDiQA-GoldP | | | | | |--------------------------------------|------|------|------|--------| | Task | NER | NER | NER | QA | | Epochs | 5 | 10 | 5 | 3 | | Batch size | 32 | 32 | 32 | 8 | | Learning rate | 2e-5 | 2e-5 | 5e-5 | 3e-5 | | Warmup steps | 0 | 0 | 0 | 500 | | Weight decay | 0 | 0 | 0 | 0.0001 | Table 13: Hyperparameters for fine-tuning the NER and QA models. for training, we freeze the embedding layer and the bottom 12 layers of both encoder and decoder during fine-tuning. We also use the DeepSpeed (Rajbhandari et al., 2020) ZeRo3 with 32-bits configurations. We first fine-tune the model on English data for 20 epochs with a learning rate of 1e-4. To speed up the training process, we initialize the model from the English fine-tuned checkpoint and further fine-tune it on the combination of English and EasyProject + GMT data with a learning rate of 5e-5 for 5 epochs. We report results of mT5large by averaging over three random seeds. We use one random seed for the XL and XXL models due to the heavy computing cost. Experiment results of average performance across languages are shown in Table 7, and results of each language are reported in Table 22 in Appendix. ACE05. 
For ACE05 event extraction (Walker et al., 2006), we use the OneIE joint model v0.4.8 codebase14 with the same hyperparameters as Yarmohammadi et al. (2021). For evaluation, we use the OneIE scoring tool to report F1 scores for entities, relations, event triggers identification (Trig-I) and classification (Trig-C), argument role identification (Arg-I) and classification (Arg-C). We train models on the combination of English and projected Chinese data from scratch in the Chinese experiment and select the model based on the English development set. In the Arabic experiment, we initialize the model from the English fine-tuned checkpoint. We fine-tune the argument role classifier for event extraction tasks (Entity, Trig-I, Trig-C, Arg-I, Arg-C) and relation classifier in relation task for 5 epochs. We set the learning rate of task-specific classifiers at 1e-6 and the encoder at 5e-4. During the decoding process of relation classification, we only consider the joint model's relation and entity prediction scores. 14http://blender.cs.illinois.edu/software/ oneie/ | en | en† | zh | ar | ar† | | |----------|--------|--------|-------|-------|-------| | Sent. | 19,216 | 19,216 | 547 | 321 | 321 | | Entity | 47,554 | 28,996 | 2,388 | 2,897 | 1,971 | | Relation | 7,159 | 4,925 | 672 | 469 | 411 | | Trig. | 4,419 | 3,125 | 190 | 232 | 232 | | Arg. | 6,607 | 5,128 | 332 | 447 | 348 | | Tok/Sent | 14.2 | 14.2 | 37.4 | 32.4 | 32.4 | On the Arabic data annotation side, the ACE Arabic data contains language-specific annotations on pronoun entities due to morphological stemming (Zitouni et al., 2005), where we observe individual Arabic letter (prefix or suffix) is annotated as a pronoun entity. Because such annotations don't exist in English data, the label projection process may cause inconsistency in translated-Arabic data. Thus, we remove the pronoun entities in both Arabic test data and English training data for the Arabic experiment. The complete statistics of the Arabic test set is in Table 14. We report the average results of three random seeds. ## E Fixing Issues In The Arabic Ace Data The ACE data are pre-processed using the code from Lin et al. (2020). We use the same document splits as Lin et al. (2020) for English (ACE05- E +) and Chinese (ACE05-CN). For Arabic, we use the document splits from Lan et al. (2020) following Yarmohammadi et al. (2021). In the processed Arabic test set from Yarmohammadi et al. (2021), we observed 12 extremely long sentences with an average length of 381 tokens, which are significantly longer than the rest of the sentences with an average length of 28. This issue was also independently noticed by Huang et al. (2022b). A closer look reveals that these 12 sentences are 12 full articles in the original LDC release, which appear to be missing punctuation. We hire a native Arabic speaker to manually split them into sentences, resulting in 106 additional sentences. The data statistics are shown in Table 14. Because the ACE data is licensed, we will release the processing script instead. ## F More Experiments In this section, we present more experiments to compare EasyProject with other approaches. ## F.1 Comparison To Mulda Table 15 shows a direct comparison of EasyProject with MulDA (Liu et al., 2021), another translationbased label projection approach that has been recently proposed for NER. MulDA replaces named entities with placeholders one by one, such as 'PER0' and 'LOC1', then invokes the MT system separately for each entity to translate and project the data. 
Thus, MulDA is more time-consuming and costly than the EasyProject, which only requires one invocation of the MT system per sentence. We find that EasyProject outperforms MulDA in German, Spanish, and Dutch at much less time cost. In this experiment, we follow MulDA's experimental setup, which uses the CoNLL NER dataset and trains only on the projected data. In terms of translation speed, we calculate the relative time cost of EasyProject compared to translating the original sentences in CoNLL English data using the NLLB model on one A40 GPU. In Table 15, we observe marker-based (XML and []) translation takes 1.2× and 1.3× longer of time to translate, due to the additional markers in both the input and output. More analysis of the translation speed is provided in Appendix G.3. Method MT *de es nl time* MulDA GMT 73.9 75.5 79.6 - MulDA NLLB 74.5 73.5 77.5 2.4× XML GMT 74.3 **77.1** 79.8 - XML NLLB **75.3** 74.9 78.3 1.2× EasyProject GMT 74.9 70.3 **79.9** - EasyProject NLLB 75.2 73.0 77.5 1.3× Table 15: Comparison of MulDA (Liu et al., 2021) and EasyProject on CoNLL NER (F1), using *projected data* only. "*time*": relative time cost compared to translating the original sentence. ## F.2 Comparison To Bitext Projection Besides translation-projection, another alternative is bitext-projection, in which bilingual parallel corpora are used in place of a machine translation system. For example, we can apply a trained English IE model to the English side of the bilingual parallel corpus, then use word alignment to project the | Lang. FTen | EasyProject | Bitext100k | EasyProject +Bitext100k | | |--------------|---------------|--------------|---------------------------|--------------| | ar | 48.5 | 56.3 (+7.6) | 52.6 (+4.1) | 51.3 (+2.8) | | de | 79.4 | 80.2 (+0.8) | 81.0 (+1.6) | 81.4 (+2.0) | | es | 74.8 | 75.6 (+0.8) | 79.2 (+4.4) | 77.7 (+2.9) | | fr | 80.1 | 80.8 (+0.7) | 80.5 (+0.4) | 82.4 (+2.3) | | hi | 69.5 | 75.7 (+6.2) | 68.6 (-0.9) | 74.6 (+5.1) | | ru | 71.1 | 68.2 (-2.9) | 72.2 (+1.1) | 68.6 (-2.5) | | vi | 74.2 | 76.0 (+1.8) | 60.3 (-13.9) | 77.5 (+3.3) | | zh | 27.1 | 45.9 (+18.8) | 31.7 (+4.6) | 44.5 (+17.4) | | AVG 65.6 | 69.8 (+4.2) | 65.8 (+0.2) | 69.8 (+4.2) | | automatically predicted labels to the corresponding sentences in target languages. In Table 16, we show that bitext-projection improves F1 of WikiANN NER on 6 out of 8 languages used in (Yarmohammadi et al., 2021) over the fine-tuning baseline (Fint-tuneen), but is outperformed by EasyProject. For this experiment, we randomly sample 100,000 parallel sentences for each of the eight languages from WikiMatrix (Schwenk et al., 2021a), an automatically mined bitext corpus from Wikipedia that matches the domain of WikiANN. We use an XLMRoBERTalarge NER model trained on WikiANN English data with 83.9 F1 to generate named entity labels, and then apply Awesome-align to project labels to the target language. Finally, we train the XLM-RoBERTalarge model on English and bitextprojected data together for 2 epochs (Bitext100k). Bitext100k loses 13.9 F1 score for Vietnamese (vi), most likely due to Awesome-align projection errors being magnified by fine-tuning on 100,000 projected sentences. One surprising finding is that the Bitext100k improves by an absolute 4.4 F1 score on Spanish and 1.1 F1 on Russian. Translationprojection approaches struggle on these two languages as shown in Table 8. 
## F.3 **Experiments On Low-Resource Languages** To investigate the effectiveness of label projection on very low-resource languages (Pfeiffer et al., 2020), we conduct experiments on Maori ( ¯ mi) and Turkmen (tk), which are not covered by the pretrained language models (i.e., XLM-RoBERTa and mBERT) and have a small number of Wikipedia articles (∼1.2k for Maori and ¯ ∼0.5k for Turkman). As shown in Table 17, EasyProject improves F1 | Method | MT | Maori( ¯ mi) Turkmen(tk) | | |---------------------------|------|----------------------------|------| | mBERT† | - | 21.8 | 47.2 | | XLM-RoBERTa† base | - | 15.9 | 43.4 | | XLM-RoBERTalarge | - | 30.3 | 52.2 | | + word-translation PanLex | 42.5 | 53.8 | | | + Awesome-align | GMT | 46.1 | 60.7 | | + EasyProject | GMT | 53.0 | 58.1 | score by an absolute 22.7 F1 score on Maori and ¯ 5.9 F1 on Turkmen compared to fine-tuning on English data only. We also include a lexicon-based baseline, replacing English words with their wordto-word translations based on PanLex (Kamholz et al., 2014), a commonly used multilingual dictionary. Both EasyProject and Awesome-align significantly outperform the word-level translations, likely because word-level translations still follow the English word orders and fail to capture the variation of word orders in Maori and Turkmen. For ¯ example, Maori has a verb-subject-object word ¯ order, while Turkmen uses a subject-object-verb. The improvement is less significant on Turkmen than Maori, potentially because Turkmen is close ¯ to Turkish, which is covered by both mBERT and XLM-RoBERTa. This is also a plausible reason why Awesome-align that uses mBERT did better on Turkmen. ## G More Analysis On Easyproject Here, we present more analysis of the EasyProject method in comparison with the traditional pipeline approach based on word alignment. ## G.1 Translation Quality To further measure the impact of adding special markers on the translation quality for the NLLB model, we adopt the evaluation setup used by NLLB (Costa-jussà et al., 2022) which utilizes the professional human-translated FLORES-200 parallel corpora (1000 sentences per language). For the marker-based approaches ("XML" and "[]"), special markers are removed from the outputs before calculating the BLEU scores. 
Table 18 presents the BLEU scores for the original NLLB model (3.3B) and the NLLB model further fine-tuned with three | Language | NLLB | NLLBfinetune | Language | NLLB | NLLBfinetune | | | | | | | | | |--------------------------------------------------------------------------------------------------------------|--------|----------------|------------|--------|----------------|------|-----------------|------|------|------|------|------|------| | Orig | XML | [] | Orig | XML | [] | Orig | XML | [] | Orig | XML | [] | | | | Afrikaans(af) | 44.0 | 44.3 | 43.6 | 45.9 | 45.4 | 45.5 | Luo(luo) | 15.6 | 15.6 | 15.3 | 16.5 | 16.0 | 15.9 | | Arabic(ar) | 39.8 | 38.5 | 38.2 | 39.0 | 36.7 | 37.6 | Malayalam(ml) | 34.1 | 33.2 | 33.4 | 39.1 | 36.6 | 37.8 | | Bulgarian(bg) | 47.8 | 47.5 | 46.7 | 48.4 | 45.0 | 46.7 | Mossi(mos) | 6.4 | 6.0 | 6.4 | 6.5 | 6.3 | 6.4 | | Bambara(bm) | 10.5 | 10.5 | 10.3 | 10.4 | 10.0 | 10.2 | Marathi(mr) | 29.4 | 27.9 | 27.8 | 29.8 | 27.1 | 28.1 | | Bengali(bn) | 34.6 | 32.6 | 33.2 | 35.3 | 31.1 | 33.0 | Malay(ms) | 45.0 | 43.9 | 43.5 | 45.1 | 43.6 | 43.5 | | German(de) | 45.3 | 44.2 | 44.1 | 45.7 | 43.3 | 44.6 | Burmese(my) | 16.2 | 16.1 | 14.0 | 20.4 | 16.0 | 17.7 | | Ewe(ee) | 16.3 | 16.0 | 16.4 | 16.7 | 15.9 | 16.2 | Dutch(nl) | 35.2 | 34.9 | 34.4 | 35.0 | 33.0 | 33.8 | | Greek(el) | 37.6 | 36.7 | 36.2 | 37.1 | 34.6 | 35.5 | Chichewa(ny) | 17.4 | 17.6 | 17.0 | 20.5 | 19.5 | 19.9 | | Spanish(es) | 32.7 | 32.3 | 32.0 | 32.1 | 31.0 | 31.3 | Portuguese(pt) | 53.7 | 53.0 | 52.5 | 54.6 | 52.2 | 53.1 | | Estonian(et) | 33.5 | 32.9 | 32.3 | 33.0 | 30.5 | 31.3 | Russian(ru) | 40.2 | 38.8 | 39.5 | 40.1 | 36.9 | 38.9 | | Basque(eu) | 26.1 | 26.3 | 24.9 | 30.8 | 28.8 | 29.2 | Kinyarwanda(rw) | 24.6 | 24.2 | 22.7 | 26.9 | 24.6 | 25.7 | | Persian(fa) | 32.4 | 31.8 | 31.4 | 32.3 | 30.3 | 30.8 | Shona(sn) | 19.4 | 18.2 | 18.2 | 19.6 | 16.6 | 17.5 | | Finnish(fi) | 32.4 | 31.9 | 32.0 | 32.9 | 29.5 | 31.2 | Swahili(sw) | 37.6 | 37.1 | 37.4 | 40.1 | 38.2 | 39.1 | | Benin(fon) | 5.3 | 5.0 | 5.7 | 5.3 | 4.0 | 5.6 | Tamil(ta) | 36.4 | 34.6 | 34.2 | 37.7 | 34.6 | 36.3 | | French(fr) | 55.0 | 54.7 | 53.6 | 55.3 | 52.9 | 53.6 | Telugu(te) | 38.3 | 37.3 | 37.5 | 39.1 | 37.4 | 38.2 | | Hausa(ha) | 29.2 | 28.6 | 28.0 | 29.4 | 28.0 | 28.5 | Thai(th) | 32.2 | 30.7 | 28.3 | 32.9 | 28.8 | 29.9 | | Hebrew(he) | 41.2 | 39.6 | 38.9 | 39.4 | 35.6 | 37.1 | Tagalog(tl) | 36.9 | 35.1 | 35.4 | 34.8 | 32.8 | 33.4 | | Hindi(hi) | 40.9 | 39.1 | 40.0 | 41.3 | 38.3 | 40.6 | Tswana(tn) | 25.8 | 25.8 | 25.2 | 26.5 | 24.5 | 26.5 | | Hungarian(hu) | 35.5 | 34.7 | 34.1 | 35.5 | 33.0 | 33.8 | Turkish(tr) | 40.4 | 39.2 | 38.7 | 41.0 | 37.7 | 38.9 | | Indonesian(id) | 46.8 | 45.7 | 45.7 | 46.5 | 43.9 | 45.3 | Twi(tw) | 16.1 | 16.1 | 15.8 | 16.7 | 16.5 | 16.2 | | Igbo(ig) | 19.7 | 19.8 | 19.3 | 20.2 | 19.5 | 20.0 | Urdu(ur) | 30.8 | 29.5 | 29.8 | 30.6 | 29.0 | 29.8 | | Italian(it) | 37.1 | 36.2 | 36.0 | 36.0 | 33.9 | 34.6 | Vietnamese(vi) | 42.1 | 41.5 | 41.3 | 41.9 | 39.8 | 40.8 | | Japanese(ja) | 17.8 | 17.0 | 13.7 | 19.9 | 18.0 | 18.8 | Wolof(wo) | 9.2 | 9.2 | 9.4 | 9.1 | 9.7 | 9.2 | | Javanese(jv) | 28.9 | 28.1 | 28.3 | 29.4 | 27.9 | 28.7 | Xhosa(xh) | 24.2 | 23.8 | 22.9 | 26.5 | 23.8 | 24.9 | | Georgian(ka) | 32.2 | 31.2 | 31.0 | 32.6 | 28.7 | 31.3 | Yoruba(yo) | 9.0 | 10.4 | 8.5 | 6.8 | 6.4 | 8.0 | | Kazakh(kk) | 30.6 | 30.1 | 29.9 | 33.2 | 30.1 | 31.4 | Chinese(zh) | 23.9 | 24.2 | 22.9 | 28.0 | 26.2 | 26.8 | | Korean(ko) | 24.9 | 23.9 | 23.6 | 25.2 | 22.2 | 24.4 | Zulu(zu) | 30.3 | 29.1 | 29.2 | 30.8 | 27.8 | 28.5 | | 
Ganda(lg) | 12.5 | 13.0 | 12.4 | 13.0 | 13.2 | 13.1 | | | | | | | | | Average | 30.2 | 29.5 | 29.1 | 30.9 | 28.8 | 29.7 | | | | | | | | | Table 18: BLEU score of NLLB on FLORES-200 (Costa-jussà et al., 2022) dev set (1000 sentences per language). | | | | | | | | | | | | | | Table 18: BLEU score of NLLB on FLORES-200 (Costa-jussà et al., 2022) dev set (1000 sentences per language). We compare three types of translations: original translation (Orig), inserted with XML and [] special markers. We also fine-tuned NLLB with different markers using the method described in Appendix 4.1. We found that the fine-tuned NLLB model using square brackets has the least negative impact on translation quality types of parallel sentences (original, inserted with XML and [] markers). As there is no gold NER annotation on the parallel corpus, we first train a NER model based on XLM-RoBERTlarge on the WikiAnn dataset, achieving an F1 score of 83.9. We then apply the trained model to the English side of the parallel corpus and apply the EasyProject method to translate sentences into the target language. After removing the special markers from the translation outputs, we use the sacreBLEU15 to calculate the BLEU scores by comparing the translations against gold references. We follow the NLLB evaluation setting and use multilingual tokenizations (flores200).16 ## G.2 Projection Rate We then compare the projection rate for all label projection methods, for which we divide the number of annotations after projection by the number of annotations occurring in the original training data. We also include the average number of successfully projected sentences after filtering out the incorrect ones, which have a different number of annotations compared to the source sentence. For example, the source sentence has a LOC and a PER entity, but the projected sentence has two LOC entities. Such sentences will be filtered. For QA-align in WikiANN NER, we show the average statistics for 5 languages {*ar, de, fr, ja, zh*} that have supervised training data. As shown in Table 21, Google Translation (GMT) is very robust in handling special markers, and EasyProject has a nearly perfect 100% projec- | English #1:Dean of Wolverhampton ( 1373 - 1394 ) | |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Alignment-based:沃尔弗 汉普顿 伯爵, 1373 - 1394 年 EasyProject: 沃尔弗 汉普顿 伯爵 1373 - 1394 English #2:Pino Daniele ( 1955 - 2015 ) Alignment-based:皮诺· 丹尼埃尔 (Pino Daniele , 1955 - 2015 年) EasyProject: 皮诺· 丹尼尔 (1955 - 2015 年) | Table 19: Examples from WikiANN dataset using NLLB translation. The outputs from two projection methods and correct answers are marked. In \#1, the alignment-based method incorrectly misses the "沃尔 弗", which is a part of the translation for "Wolverhampton". In \#2, for the alignment-based method, the Chinese translation ("皮诺· 丹尼埃尔") and the original English span ("Pino Daniele") occur together in the translation. Alignment-based method incorrectly misses the correct projection "皮诺· 丹尼埃尔" and project to "Pino Daniele". 
| #Inputs | #Outputs | Time (sec) | | |-----------|------------|--------------|-------| | Original | 279,678 | 335,963 | 4,452 | | XML | 460,002 | 468,815 | 5,486 | | [] | 326,294 | 379,796 | 4,107 | | Entity | 64,293 | 72,309 | 1,553 | Table 20: Number of tokens in the three types of input sentences: original CoNLL NER English training data, adding XML and [] special markers; and their corresponding translations in German. Time is the total translation clockwise time in seconds. tion rate, higher than any word alignment-based method. Our manual inspection of 100 sentences, randomly sampled from the WikiANN training set for English to Chinese projection, also reveals that GMT+EasyProject successfully projects all the sentences without mistakes on any target named entities, whereas Awesome-align only projected 94 sentences and caused 4 entity projection errors. According to our manual analysis, EasyProject is less likely to introduce errors than the word alignmentbased method because the use of special markers encourages full-span projection. We found that most errors are caused by partial or missed alignments, which often occur when a span contains multiple words, a sentence contains many spans, or when both a Chinese transliteration and the original English name occur together in the translated sentence, which is a correct way to translate but poses challenges for label projection. More examples of alignment errors can be found in Table 19 in Appendix. ## G.3 Translation Speed Additional special markers added to the source sentence will affect the translation speed. In Table 20, we show the number of tokens in the input and translation output. We use the CoNLL-2002/2003 NER English training set as the source sentences, and translate them into German. All sentences are tokenized by the NLLB tokenizer. We also show the translation time per sentence and per entity on an A40 GPU with a batch size of 32. We estimate using XML tags takes 1.2× time compared to translating the original sentences, and EasyProject takes 1.3× time as it requires the additional translation of each entity span, for identifying the label correspondence. | NLLB+Word Aligner | NLLB+Markers | GMT+Word Aligner | GMT+Markers | | | | | | | | | |--------------------------------------------------------------------------------------------------------------------|----------------|--------------------|---------------|--------|---------|----------|------------------|--------|--------|--------|-------| | QAalign | Awesome. | Awesomeft | XML | EProj. | QAalign | Awesome. | Awesomeft | XML | EProj. | | | | NER | # Sents | 18,486(5) | 18,274 | 18,587 | 13,959 | 19,470 | 19,187(5) 19,003 | 19,408 | 20,000 | 20,000 | | | Proj. Rate | 92.4(5) | 91.4 | 92.9 | 69.8 | 97.4 | 96.0(5) | 94.8 | 98.2 | 100 | 100 | | | Event # Sents | 15,491 | 14,840 | 15,857 | 7,308 | 16,846 | 16,264 | 16,631 | 16,903 | 19,185 | 19,185 | | | Proj. Rate | 80.6 | 77.2 | 82.5 | 38.0 | 87.7 | 90.4 | 92.6 | 93.6 | 99.9 | 99.9 | | | QA | # Sents | - | 3,613 | 3,654 | 1,573 | 3,564 | - | 3,623 | 3,649 | 3,695 | 3,695 | | Proj. Rate | - | 97.8 | 98.9 | 42.6 | 96.4 | - | 97.8 | 99.1 | 100 | 100 | | | Table 21: Diagnosis analysis of projected data based on two metrics: number of sentences and the percentage of the | | | | | | | | | | | | Table 21: Diagnosis analysis of projected data based on two metrics: number of sentences and the percentage of the projected annotations (Proj. Rate). For QA-align in NER, we show 5 languages {*ar,de,fr,ja,zh*}. Lang. 
XLM-Rlarge +EasyProject mT5large +EasyProject mT5XL +EasyProject mT5XXL +EasyProject af 78.6 79.2 (+0.6) 79.2 81.0 (+1.8) 77.2 79.6 (+2.4) - - ar 48.5 56.3 (+7.8) 53.1 66.1 (+13.0) 57.4 68.0 (+10.6) 62.2 66.1 (+3.9) bg 82.1 80.6 (-1.5) 58.5 77.0 (+18.5) 61.5 76.1 (+14.6) - - bn 75.1 80.9 (+5.8) 57.3 76.2 (+18.9) 65.7 75.8 (+10.1) - - de 79.4 80.2 (+0.8) 75.6 77.9 (+2.3) 75.9 77.6 (+1.7) 76.5 77.3 (+0.8) el 79.3 75.0 (-4.3) 61.6 81.6 (+20.0) 79.4 77.2 (-2.2) - - es 74.8 75.6 (+0.8) 85.7 87.0 (+1.2) 86.3 85.3 (-1.0) 85.6 86.4 (+0.8) et 79.6 80.1 (+0.5) 71.8 72.8 (+1.0) 71.7 73.2 (+1.4) - - eu 63.6 69.0 (+5.4) 64.0 68.0 (+4.0) 64.0 74.1 (+10.1) - - fa 64.3 52.0 (-12.3) 47.0 67.5 (+20.5) 46.1 64.9 (+18.8) - - fi 80.6 80.3 (-0.3) 74.6 78.0 (+3.4) 73.5 79.2 (+5.7) - - fr 80.1 80.8 (+0.7) 84.6 84.9 (+0.3) 83.8 84.2 (+0.4) 83.4 84.2 (+0.8) he 56.0 67.1 (+11.1) 53.3 63.3 (+10.1) 57.9 66.1 (+8.2) - - hi 69.5 75.7 (+6.2) 70.1 76.0 (+5.9) 74.8 77.1 (+2.2) 76.0 76.4 (+0.4) hu 81.0 80.7 (-0.3) 76.0 82.0 (+6.0) 76.5 80.0 (+3.5) - - id 52.4 53.9 (+1.5) 77.6 77.9 (+0.3) 82.2 82.3 (+0.1) - - it 81.3 80.9 (-0.4) 86.2 86.4 (+0.1) 86.4 85.5 (-1.0) - - ja 18.0 43.5 (+25.5) 28.3 38.3 (+10.0) 29.8 38.0 (+8.3) - - jv 62.3 69.8 (+7.5) 72.4 75.7 (+3.2) 72.9 72.3 (-0.6) - - ka 68.8 74.7 (+5.9) 60.6 72.2 (+11.6) 67.1 72.5 (+5.4) - - kk 53.5 54.2 (+0.7) 32.7 53.1 (+20.4) 26.1 51.7 (+25.5) - - ko 57.9 61.9 (+4.0) 33.7 39.1 (+5.4) 30.6 44.7 (+14.1) - - ml 63.5 64.3 (+0.8) 42.1 65.1 (+23.0) 42.5 63.9 (+21.3) - - mr 63.9 67.1 (+3.2) 49.6 57.4 (+7.9) 53.9 55.6 (+1.8) - - ms 64.1 74.1 (+10.0) 79.3 79.6 (+0.3) 80.5 79.4 (-1.1) - - my 53.5 62.0 (+8.5) 35.0 38.7 (+3.7) 31.9 33.0 (+1.1) - - nl 84.1 83.1 (-1.0) 84.2 85.5 (+1.3) 83.5 84.1 (+0.5) - - pt 82.0 80.1 (-1.9) 83.0 82.9 (+0.0) 83.5 82.7 (-0.8) - - ru 71.1 68.2 (-2.9) 55.3 70.8 (+15.6) 59.8 70.1 (+10.3) 65.6 72.8 (+7.2) sw 68.5 70.7 (+2.2) 65.9 66.4 (+0.5) 66.8 73.9 (+7.1) - - ta 58.8 64.3 (+5.5) 49.5 61.7 (+12.1) 52.6 63.3 (+10.7) - - te 55.6 57.4 (+1.8) 47.4 57.5 (+10.1) 51.3 57.8 (+6.5) - - th 0.7 15.1 (+14.4) 2.0 3.8 (+1.8) 2.0 7.4 (+5.4) - - tl 73.0 80.0 (+7.0) 80.6 81.6 (+1.0) 81.9 83.2 (+1.3) - - tr 80.3 79.6 (-0.7) 68.8 69.7 (+1.0) 71.4 68.8 (-2.6) - - ur 63.6 74.7 (+11.1) 51.4 65.4 (+14.0) 56.9 67.0 (+10.1) - - vi 74.2 76.0 (+1.8) 81.4 83.0 (+1.6) 81.7 82.0 (+0.4) 82.4 79.6 (-2.8) yo 37.1 73.8 (+36.7) 75.7 82.3 (+6.6) 75.5 78.4 (+3.0) - - zh 27.1 45.9 (+18.8) 31.1 39.7 (+8.7) 31.6 39.8 (+8.2) 36.3 43.2 (+6.9) AVG 64.3 68.9 (+4.6) 61.2 68.5 (+7.4) 62.9 68.6 (+5.7) 71.0 73.3 (+2.3) Table 22: Cross-lingual NER F1 on WikiANN for mT5 and XLM-RoBERTalarge. Due to the computing limit, we run the largest mT5XXL model on 8 languages which were chosen following Yarmohammadi et al. (2021). The performance is averaged over 3 runs for XLM-Rlarge and mT5large models, and 1 run for mT5XL and mT5XXL models. Models are fine-tuned on a combination of English and EasyProject data with Google Translation. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitation section ✓ A2. Did you discuss any potential risks of your work? Limitation section ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Appendix A-F ✓ B1. Did you cite the creators of artifacts you used? Appendix A-F ✓ B2. 
Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix F ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix E ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? we use existing published datasets. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Table 14 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Table 1 & 10 ## C ✓ **Did You Run Computational Experiments?** Section 3 & 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix E and H.3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix E ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Appendix E ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix C and Appendix H.1 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Appendix F D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Appendix F D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Appendix F
liu-etal-2023-enhancing
Enhancing Hierarchical Text Classification through Knowledge Graph Integration
https://aclanthology.org/2023.findings-acl.358
Hierarchical Text Classification (HTC) is an essential and challenging subtask of multi-label text classification with a taxonomic hierarchy. Recent advances in deep learning and pre-trained language models have led to significant breakthroughs in the HTC problem. However, despite their effectiveness, these methods are often restricted by a lack of domain knowledge, which leads them to make mistakes in a variety of situations. Generally, when manually classifying a specific document to the taxonomic hierarchy, experts make inference based on their prior knowledge and experience. For machines to achieve this capability, we propose a novel Knowledge-enabled Hierarchical Text Classification model (K-HTC), which incorporates knowledge graphs into HTC. Specifically, K-HTC innovatively integrates knowledge into both the text representation and hierarchical label learning process, addressing the knowledge limitations of traditional methods. Additionally, a novel knowledge-aware contrastive learning strategy is proposed to further exploit the information inherent in the data. Extensive experiments on two publicly available HTC datasets show the efficacy of our proposed method, and indicate the necessity of incorporating knowledge graphs in HTC tasks.
# Enhancing Hierarchical Text Classification Through Knowledge Graph Integration Ye Liu1,3, Kai Zhang1,2,3,∗, Zhenya Huang1,2,3**, Kehang Wang**1,3, Yanghai Zhang2,3, Qi Liu1,2,3, **Enhong Chen**1,2,3,∗ 1 School of Data Science, University of Science and Technology of China 2 School of Computer Science and Technology, University of Science and Technology of China 3 State Key Laboratory of Cognitive Intelligence {liuyer,kkzhang0808,wangkehang,apocalypseh}@mail.ustc.edu.cn {huangzhy,qiliuql,cheneh}@ustc.edu.cn ## Abstract Hierarchical Text Classification (HTC) is an essential and challenging subtask of multi-label text classification with a taxonomic hierarchy. Recent advances in deep learning and pretrained language models have led to significant breakthroughs in the HTC problem. However, despite their effectiveness, these methods are often restricted by a lack of domain knowledge, which leads them to make mistakes in a variety of situations. Generally, when manually classifying a specific document to the taxonomic hierarchy, experts make inference based on their prior knowledge and experience. For machines to achieve this capability, we propose a novel Knowledge-enabled Hierarchical Text Classification model (K-HTC), which incorporates knowledge graphs into HTC. Specifically, KHTC innovatively integrates knowledge into both the text representation and hierarchical label learning process, addressing the knowledge limitations of traditional methods. Additionally, a novel knowledge-aware contrastive learning strategy is proposed to further exploit the information inherent in the data. Extensive experiments on two publicly available HTC datasets show the efficacy of our proposed method, and indicate the necessity of incorporating knowledge graphs in HTC tasks. ## 1 Introduction Hierarchical Text Classification (HTC), as a particular multi-label text classification problem, has been extensively applied in many real-world applications, such as book categorization (Remus et al., 2019) and scientific paper classification (Kowsari et al., 2017). In HTC, documents are tagged with multiple categories that can be structured as a tree or an acyclic graph (e.g., the taxonomic hierarchy illustrated in the bottom left of Figure 1), which poses a higher challenge than the ordinary text classification problems (Sun and Lim, 2001). ![0_image_0.png](0_image_0.png) The existing state-of-the-art approaches for HTC (Zhou et al., 2020; Deng et al., 2021; Chen et al., 2021; Wang et al., 2022b,c) mainly focus on the representation learning from the input text and hierarchical label structure, most of which rely on the pre-trained language models (e.g., BERT (Devlin et al., 2018)). Specifically, Chen et al. (2021) adopted BERT as the encoder and proposed a matching network to mine the relative distance between texts and labels. Wang et al. (2022b) proposed a novel contrastive learning method to embed the hierarchy into BERT encoder. Despite the success of this paradigm, approaches without domain knowledge have significant limitations and may lead to mistakes in many cases. An example of this can be observed in Figure 1, where machines may classify a document as belonging to the category *Travel: USA & Canada* simply based on the presence of the phrase The USA in the document. However, if machines are equipped with a relevant knowledge graph, they can mine more information from other concepts, such as *Sahara* and *Algiers*. 
Specifically, *Sahara* is part of *Africa* and *Algiers* is the capital of Algeria in *Africa*. Further, *Sahara* and *Algiers* are both *Tourist Attractions*. With the above relevant knowledge, machines are better able to make the correct inference, i.e., *Travel* and *Travel: Africa* in the taxonomic hierarchy.

∗Corresponding author.

Nevertheless, to the best of our knowledge, few works have focused on incorporating knowledge graphs into HTC. Indeed, many technical challenges are inherent in designing effective solutions to incorporate knowledge graphs (KGs) into HTC. First, text and KGs are organized quite differently: text is organized as a sequence of tokens, whereas a KG is organized as a graph. How to effectively integrate KGs into popular text representation models (e.g., BERT) is an open issue. Second, compared with ordinary text classification, HTC has a more complex label structure, which provides additional prior knowledge but also poses a significant challenge for label learning and the interaction between labels and documents. Third, documents within the same category may share more common concepts in the knowledge graph because they describe similar entities or topics, while documents in different categories do not. This provides a new entry point for further leveraging KGs in HTC.

In this paper, we propose a Knowledge-enabled Hierarchical Text Classification model (K-HTC) to incorporate knowledge graphs into the HTC process. Specifically, we first design a Knowledge-aware Text Encoder (KTE), which fuses the text representation and its corresponding concept representation learned from KGs at the word granularity, thereby obtaining a more comprehensive and effective representation. Subsequently, to perform label learning more effectively, we create a Knowledge-aware Hierarchical Label Attention (KHLA) module. It employs external knowledge from KGs for label representation and optimizes it based on the hierarchical structure, which further enhances the document representation via a label attention mechanism. After that, we propose a Knowledge-aware Contrastive Learning (KCL) strategy. It employs the shared knowledge concepts and hierarchical labels to learn the relationships between different documents, which can further exploit the information inherent in the data. Finally, extensive experiments on two publicly available datasets demonstrate the effectiveness of our proposed method, and further indicate the necessity of incorporating knowledge graphs, especially for classification at deeper and more difficult levels.

## 2 Related Work

## 2.1 Hierarchical Text Classification

Hierarchical text classification is a particular multi-label text classification problem, where documents are assigned to one or more nodes of a taxonomic hierarchy (Wehrmann et al., 2018). Existing works for HTC can be categorized into local and global approaches according to their exploration strategies. Local approaches train multiple classifiers, each responsible for a corresponding local region (e.g., each label or level). For instance, Banerjee et al. (2019) trained a classifier for each label and proposed a strategy to transfer parameters of parent models to their child models. Shimura et al. (2018) designed a CNN-based method that uses data in the upper levels to contribute to the categorization in the lower levels. Global methods, in contrast, build a single classifier for all classes, taking the class hierarchy into account as a whole.
For example, Cai and Hofmann (2004) proposed a hierarchical Support Vector Machine (SVM) algorithm based on discriminant functions. In recent years, with the rapid development of deep neural networks, many deep learning algorithms, such as Attention and Pre-trained Language Models, have been employed in HTC. Huang et al. (2019) designed an attention-based recurrent network to mine the textclass associations. Zhou et al. (2020) adopted a typical structure encoder for modeling label dependencies in both top-down and bottom-up manners. Chen et al. (2021) adopted BERT as encoder and proposed a matching network to mine the relative distance between texts and labels. Wang et al. (2022b) suggested a contrastive learning method to embed the hierarchy into BERT encoder. Wang et al. (2022c) introduced prompt learning into HTC and proposed a novel multi-label MLM perspective. Nevertheless, most of these methods ignore the relevant knowledge in the modeling process and have significant limitations in many cases. ## 2.2 Knowledge Graph Knowledge Graph (KG) has millions of entries that describe real-world concepts (entities) like people, places and organizations. In a KG, concepts (entities) are represented as nodes, while the relations between concepts are described as edges. Recently, many knowledge graphs have been established in both academia and industry, such as ConceptNet (Speer et al., 2017), DBpedia (Lehmann et al., 2015) and Freebase (Bollacker et al., 2008). On the basis of KGs, researchers attempt to incorporate them into many downstream application tasks and obtain significant improvements. For in- (a) KTE (b) KHLA **(c) KCL** ![2_image_0.png](2_image_0.png) ![2_image_1.png](2_image_1.png) stance, Wang et al. (2017) proposed a CNN-based text classification method, which combined internal representation and external knowledge representation from KGs. Jang et al. (2021) presented a novel knowledge-infused attention mechanism to incorporate high-level concepts into Neural Network models, achieving accurate and interpretable text classification. Lin et al. (2019) proposed a textual inference framework for question answering, which effectively utilized external structured knowledge graphs to perform explainable inferences. As far as we know, there are very few works that have attempted to incorporate knowledge graphs into HTC, making our K-HTC model a pioneering approach in this field. ## 3 Preliminaries In this section, we first give the problem statement of incorporating KGs into HTC, and then introduce the knowledge preparation for K-HTC model. ## 3.1 Problem Statement Given the input document D and an external knowledge graph G1 = (*E, R, T*), HTC aims to predict a subset y of label set Y . The size of label set Y is K. In the knowledge graph G1, E is the set of concepts, R is the set of relations, and T = E ×R×E is the set of triples. It is notable that the label set Y is organized as an acyclic graph: G2 = (*Y, A*), where A is the adjacency matrix of Y . Besides, each label yi ∈ Y corresponds to a label name Li, which can be seen as a short text description. ## 3.2 Knowledge Preparation In this subsection, we first identify the concepts mentioned in the input documents and label names (Concept Recognition), and then pre-train the concept embedding (Concept Pre-training). Concept Recognition. 
Given the text x = {x_1, x_2, ..., x_N}, we are expected to match its tokens to concepts from the given knowledge graph G1 (in this paper, we adopt the advanced KG named ConceptNet (Speer et al., 2017)). Following the strategy proposed by Lin et al. (2019), we set rules like soft matching with lemmatization and filtering of stop words to enhance the n-gram matching performance. After that, we can obtain two sequences:

$$x=\{x_{1},x_{2},...,x_{N}\},\quad c=\{c_{1},c_{2},...,c_{N}\},\qquad(1)$$

where x is the original text sequence and c is the matched concept sequence, which means that c_i is the matched concept of x_i. For n-gram concepts, we align them to the first token of the corresponding phrase in x (Zhang et al., 2019). If there is no matched concept for token x_i, we set c_i = [PAD].

Concept Pre-training. After the concept recognition process, we can obtain the set of concepts mentioned in the whole dataset. We retain these mentioned concepts and their related concepts (first-order neighbors) in the original knowledge graph G1, thus yielding a new pruned knowledge graph G′1. Subsequently, we utilize the TransE (Bordes et al., 2013) model on G′1 to pre-train the concept embedding U ∈ R^{N_c×v}, where N_c is the number of concepts and v indicates the embedding size. This pre-trained concept embedding is used as initialization in the KTE module (Section 4.1).

![3_image_0.png](3_image_0.png)

## 4 K-HTC Model

In this section, we introduce the technical details of the K-HTC model. As Figure 2 shows, K-HTC consists of three components: 1) Knowledge-aware Text Encoder (KTE); 2) Knowledge-aware Hierarchical Label Attention (KHLA); 3) Knowledge-aware Contrastive Learning (KCL).

## 4.1 Knowledge-Aware Text Encoder

In this part, we aim to obtain the knowledge-aware representation of the given text by integrating external knowledge from KGs. As illustrated in Figure 3, given a token sequence x = {x_1, x_2, ..., x_N} and its corresponding concept sequence c = {c_1, c_2, ..., c_N}, we first apply the pre-trained language encoder (i.e., BERT) to compute the word semantic embeddings:

$$\{w_{1},...,w_{N}\}=BERT(\{x_{1},...,x_{N}\}).\qquad(2)$$

Regarding the concept sequence c, we map each concept into the embedding space via the pre-trained TransE embedding U:

$$\{u_{1},...,u_{N}\}=U(\{c_{1},...,c_{N}\}).\qquad(3)$$

Subsequently, for each concept c_i, we randomly select k neighbors in the pruned knowledge graph G′1 to conduct the GraphSAGE algorithm (Hamilton et al., 2017), which can aggregate its context information in the KG:

$$u_{i}^{\prime}=GraphSAGE(u_{i},G_{k}),\qquad(4)$$

where G_k is the context graph composed of c_i and its k neighbors, and u′_i ∈ R^v is the aggregated representation of concept c_i. After that, we fuse the word semantic representation w_i and its corresponding concept representation u′_i:

$$\{m_{1},...,m_{N}\}=\{w_{1}+u_{1}^{\prime},...,w_{N}+u_{N}^{\prime}\},\qquad(5)$$

where + refers to point-wise addition. We call {m_1, ..., m_N} the knowledge-aware representation.

## 4.2 Knowledge-Aware Hierarchical Label Attention

In this part, we first learn the label representation via external knowledge and the taxonomic hierarchy, and then conduct label attention to obtain the class-enhanced document representation.

Label Representation Learning.
With the Knowledge-aware Text Encoder (KTE), we can obtain the knowledge-aware representation of hierarchical labels via their label names:

$$R_{l}^{i}=mean(KTE(L_{i})),\;i=1,...,K,\qquad R_{l}=[R_{l}^{1},R_{l}^{2},...,R_{l}^{K}],\qquad(6)$$

where L_i is the name of label i, R_l^i ∈ R^v is the representation of label i, and R_l ∈ R^{K×v} indicates the representation of all labels. Then, we adopt a GCN layer to propagate the label representations on the label hierarchy graph G2. Specifically, it takes the feature matrix H^{(l)} and the matrix Ã as input, and updates the embedding of the labels by utilizing the information of adjacent labels:

$$H^{(l+1)}=\sigma(\widetilde{D}^{-\frac{1}{2}}\widetilde{A}\widetilde{D}^{-\frac{1}{2}}H^{(l)}W^{(l)}),\qquad(7)$$

where Ã = A + I, A is the adjacency matrix of G2, I is the identity matrix, D̃_{ii} = Σ_j Ã_{ij}, and W^{(l)} is a layer-specific trainable weight matrix. σ denotes a non-linear activation function (e.g., ReLU). We set H^{(0)} = R_l, and the last hidden layer is used as the propagated label representation, i.e., H = H^{(l+1)} ∈ R^{K×v}.

Label Attention. After that, we apply the propagated label representation H to perform K different classes of attention over the input document:

$$R_{d}=KTE(D),\quad O=tanh(W_{o}\cdot R_{d}^{T}),\quad W_{att}=softmax(H\cdot O),\qquad(8)$$

where D is the input document and R_d ∈ R^{N×v} is the knowledge-aware representation of D. W_o ∈ R^{v×v} is a randomly initialized weight matrix, and the softmax() ensures that the computed weights sum up to 1 for each category. W_att ∈ R^{K×N} denotes the attention matrix. Subsequently, we compute weighted sums by multiplying the attention matrix W_att and the document representation R_d:

$$M_{1}=mean(W_{att}\cdot R_{d}),\qquad(9)$$

where M_1 ∈ R^v represents the class-enhanced representation of the document. Furthermore, inspired by Wang et al. (2022b), we utilize another randomly initialized label embedding H_2 ∈ R^{K×v} to perform the same operation as in Eq.(8-9) and obtain another class-enhanced document representation M_2. Finally, we concatenate M_1, M_2 and the [CLS] representation from the BERT encoder as the final representation:

$$R_{cat}=concat(M_{1},M_{2},H_{[CLS]}),\quad R_{f}=W_{f}\cdot R_{cat}+b_{f},\qquad(10)$$

where W_f ∈ R^{v×3v} is a randomly initialized weight matrix, b_f ∈ R^v is the corresponding bias vector, and R_f ∈ R^v is the final document representation.

## 4.3 Knowledge-Aware Contrastive Learning

As we discussed in Section 1, documents in the same category may share more concepts in the knowledge graph, while documents in different categories do not (more analysis of this phenomenon can be found in Appendix A). Therefore, we propose a contrastive learning strategy to further exploit the information inherent in the data. Specifically, we design it from both knowledge-driven and hierarchy-driven perspectives.

Knowledge-driven CL. In this part, we aim to close the distance between documents that share more concepts in the knowledge graph. Specifically, inspired by Wang et al. (2022a), in a mini-batch of size b, we define a function that outputs all other instances for a specific instance i: g(i) = {k | k ∈ {1, 2, ..., b}, k ≠ i}.
Then the knowledge-driven contrastive loss for each instance pair (i, j) can be calculated as:

$$L_{c}^{ij}=-\beta_{ij}\log\frac{e^{-d(z_{i},z_{j})/\tau}}{\sum_{k\in g(i)}e^{-d(z_{i},z_{k})/\tau}},\tag{11}$$

$$c_{ij}=|C_{i}\cap C_{j}|,\quad\beta_{ij}=\frac{c_{ij}}{\sum_{k\in g(i)}c_{ik}},\tag{12}$$

where τ is the temperature of contrastive learning, d(·, ·) is the Euclidean distance, and z_i represents the final representation R_f of document i. C_i is the concept set of document i, c_ij indicates the number of shared concepts between documents i and j, and β_ij is the normalization of c_ij. The contrastive loss for the whole mini-batch is the sum over all instance pairs: L_c = Σ_i Σ_{j∈g(i)} L_c^{ij}.

With this contrastive loss, for an instance pair (i, j), the more concepts they share, the larger the weight β_ij becomes, thus increasing the value of their loss term L_c^{ij}. In consequence, their distance d(z_i, z_j) will be pulled closer. On the contrary, if they share fewer concepts, their distance d(z_i, z_j) will be optimized to be relatively farther.

![4_image_0.png](4_image_0.png)

Hierarchy-driven CL. In addition to the knowledge-driven CL, we can optimize the document representation via the hierarchical label structure. As illustrated in Figure 4, documents D1 and D2 share two labels in the hierarchy, while D1 and D3 only share one. Naturally, the distance between D1 and D2 should be closer than that between D1 and D3. From this perspective, in a mini-batch, we calculate the number of shared labels between documents i and j:

$$l_{ij}=|Y_{i}\cap Y_{j}|,\tag{13}$$

where Y_i denotes the label set of document i. Then, we use l_ij to replace c_ij in Eq.(12), and further calculate another contrastive loss L_h^{ij} following Eq.(11). After that, we sum this loss across the whole mini-batch and obtain the hierarchy-driven contrastive loss L_h = Σ_i Σ_{j∈g(i)} L_h^{ij}.

## 4.4 Output Layer

Output Classifier. Following previous work (Zhou et al., 2020), in the output layer we flatten the hierarchy for multi-label classification. We feed the final document representation R_f in Eq.(10) to a two-layer classifier:

$$Q=\varphi(W_{q}\cdot R_{f}+b_{q}),\quad P=\sigma(W_{p}\cdot Q+b_{p}),\tag{14}$$

where W_q ∈ R^{v×v} and W_p ∈ R^{K×v} are randomly initialized weight matrices, b_q ∈ R^v and b_p ∈ R^K are the corresponding bias vectors, φ is a non-linear activation function (e.g., ReLU), and σ is the sigmoid activation. P is a continuous vector and each element P_i denotes the probability that the document belongs to category i.

| Statistics | BGC | WOS |
|-------------------------------|--------|--------|
| # total categories | 146 | 141 |
| # hierarchical levels | 4 | 2 |
| # avg categories per instance | 3.01 | 2.0 |
| # train instance | 58,715 | 30,070 |
| # dev instance | 14,785 | 7,518 |
| # test instance | 18,394 | 9,397 |

Table 1: The data statistics of the BGC and WOS datasets.

Training. For multi-label classification, we use the binary cross-entropy loss for document i on label j:

$$L_{bce}^{ij}=-y_{ij}\log(p_{ij})-(1-y_{ij})\log(1-p_{ij}),\tag{15}$$

$$L_{bce}=\sum_{i}\sum_{j=1}^{K}L_{bce}^{ij},\tag{16}$$

where p_ij is the prediction score and y_ij is the ground truth.
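Before the losses are combined (Eq.(17) below), the following minimal PyTorch-style sketch illustrates how the knowledge-driven contrastive term in Eq.(11)-(12) above can be computed for a mini-batch; the hierarchy-driven term is identical with l_ij in place of c_ij. Tensor names and numerical safeguards here are our illustrative assumptions, not code from the authors' release.

```python
import torch

def knowledge_driven_cl(z, shared_concepts, tau=10.0):
    """Knowledge-driven contrastive loss, a sketch of Eq.(11)-(12).

    z:               (b, v) final document representations R_f in a mini-batch
    shared_concepts: (b, b) matrix whose (i, j) entry is c_ij = |C_i ∩ C_j|
    """
    b = z.size(0)
    off_diag = ~torch.eye(b, dtype=torch.bool, device=z.device)  # j in g(i), i.e. j != i

    # -d(z_i, z_k)/tau, with the diagonal excluded from the normalizer in Eq.(11)
    logits = -torch.cdist(z, z, p=2) / tau
    logits = logits.masked_fill(~off_diag, float("-inf"))
    # log of the softmax-style fraction in Eq.(11)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(~off_diag, 0.0)  # avoid 0 * (-inf) below

    # beta_ij = c_ij / sum_{k in g(i)} c_ik, Eq.(12)
    c = shared_concepts.float().masked_fill(~off_diag, 0.0)
    beta = c / c.sum(dim=1, keepdim=True).clamp(min=1e-12)

    # L_c = sum_i sum_{j in g(i)} L_c^{ij}
    return -(beta * log_prob).sum()
```

In the full objective, this value would be weighted by λ_c (1e-3 on BGC and 1e-2 on WOS in the implementation details) and added to the BCE classification loss, as formalized next.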
The final loss is the combination of the classification loss and the two constrastive losses: $$L=L_{b c e}+\lambda_{c}L_{c}+\lambda_{h}L_{h},\qquad(17)$$ where λc and λh are hyperparameters that control the weights of two contrastive losses. ## 5 Experiment 5.1 Experiment Setup Datasets and Evaluation Metrics. We conduct experiments on the BlurbGenreCollection-EN (BGC)1and Web-of-Science (WOS)2(Kowsari et al., 2017) datasets. BGC consists of advertising descriptions of books, while WOS contains abstracts of published papers from Web of Science. More statistics about the datasets are illustrated in Table 1. As for the knowledge graph, we adopt the advanced knowledge graph named ConceptNet (Speer et al., 2017). We measure the experimental results with standard evaluation metrics (Gopal and Yang, 2013; Liu et al., 2020; Zhang et al., 2022), including MacroPrecision, Macro-Recall, Macro-F1 and Micro-F1. Implementation Details. In the Knowledge Pretraining part, we utilize OpenKE (Han et al., 2018) to train concept embedding via TransE. The dimension of TransE embedding is set to 768. We adopt *bert-base-uncased* from Transformers (Wolf et al., 2020) as the base architecture. In KTE module, when we conduct GraphSAGE to aggregate the neighbor information to concepts, we set the neighbor num k = 3 for each concept. We choose the mean aggregator as the aggregation function of GraphSAGE and the layer is set to 1. In KHLA module, the layer of GCN is set to 1. In KCL module, we set the contrastive learning temperature τ = 10 for knowledge-driven CL, while τ = 1 for hierarchy-driven CL. The dimension of hidden states is set to v = 768 in this paper. As for the loss weight in Eq.(17), λh is set to 1e − 4 on both BGC and WOS, while λc is set to 1e − 3 on BGC and 1e − 2 on WOS 3. The batch size is set to 16, and our model is optimized by Adam (Kingma and Ba, 2014) with a learning rate of 2e − 5. We train the model with train set and evaluate on development set after every epoch, and stop training if the Macro-F1 does not increase for 10 epochs. We run all experiments on a Linux server with two 3.00GHz Intel Xeon Gold 5317 CPUs and one Tesla A100 GPU 4. Benchmark Methods. We compare K-HTC with the state-of-the-art HTC methods. | Methods | BGC | WOS | | | | | | | |-----------------------------------------------------------------------------------------------|-------|-------|-------|-------|-------|-------|-------|-------| | Precision Recall Macro-F1 Micro-F1 Precision Recall Macro-F1 Micro-F1 Hierarchy-Aware Methods | | | | | | | | | | HiAGM | 57.41 | 53.45 | 54.71 | 74.49 | 82.77 | 78.12 | 80.05 | 85.95 | | HTCInfoMax | 61.58 | 52.38 | 55.18 | 73.52 | 80.90 | 77.27 | 78.64 | 84.65 | | HiMatch | 59.50 | 52.88 | 55.08 | 74.98 | 83.26 | 77.94 | 80.09 | 86.04 | | Pre-trained Language Methods | | | | | | | | | | HiAGM+BERT | 65.61 | 61.79 | 62.98 | 78.62 | 81.81 | 78.86 | 80.09 | 85.83 | | HTCInfoMax+BERT | 65.47 | 62.15 | 62.87 | 78.47 | 79.95 | 79.59 | 79.33 | 85.18 | | HiMatch+BERT | 64.67 | 62.05 | 62.62 | 79.23 | 82.29 | 80.00 | 80.92 | 86.46 | | KW-BERT | 66.39 | 62.68 | 63.72 | 79.24 | 82.88 | 78.75 | 80.30 | 86.19 | | HGCLR | 67.65 | 61.28 | 63.64 | 79.36 | 83.67 | 79.30 | 81.02 | 87.01 | | HPT | 70.27 | 62.70 | 65.33 | 80.72 | 83.71 | 79.74 | 81.10 | 86.82 | | K-HTC (ours) | 71.26 | 63.31 | 65.99 | 80.52 | 84.15 | 80.01 | 81.69 | 87.29 | - KW-BERT (Jang et al., 2021) is the advanced text classification method that incorporates knowledge graphs, which also adopts BERT as the text encoder. 
Among these baselines, only HiAGM and HTCInfoMax do not adopt the BERT encoder. For fair comparison, we implement them with BERT encoder, and denote them as HiAGM+BERT and HTCInfoMax+BERT. ## 5.2 Experimental Result The main results are shown in Table 2. Our proposed K-HTC method outperforms all baselines in all metrics, except for HPT in Micro-F1 on the BGC dataset, which proves the effectiveness of our method and the necessity to incorporate knowledge graphs. Moreover, there are also some interesting phenomena from these results: First, the differences between the hierarchyaware methods (i.e., HiAGM, HTCInfoMax and HiMatch) and their BERT-variants are more pronounced on BGC than on WOS. In detail, The depth of WOS is 2, and each document is labeled with one label on each level. However, the depth of BGC is 4, and the number of labels per document is unfixed10. As a result, the BGC dataset is more difficult than WOS, and it may be more conducive to the role of BERT. Another consideration is the pre-trained corpora of BERT. One of the pre-trained datasets of BERT is BookCorpus (Zhu 10The average number is 3.01. Please recall Table 1 for more details. et al., 2015), which is the same document type as BGC. This also plays a great role in improving the model's effectiveness. Second, with the help of the external KG and the proposed knowledge-infused attention mechanism, KW-BERT achieves good results on both two datasets as well. However, it performs relatively poorly compared with K-HTC, which demonstrates the effectiveness of our model design from another perspective. Third, although HPT achieves a slight ahead over K-HTC in MicroF1 on the BGC dataset, it regresses obviously on other metrics. In detail, Micro-F1 directly takes all the instances into account, while Macro-F1 gives equal weight to each class in the averaging process. For the multi-label classification with complex label structures, Macro-F1 is harder and more differentiated, which can better reflect the model capability. We further conduct the significance test in Appendix B. ## 5.3 Ablation Study In this subsection, we conduct ablation experiments to prove the effectiveness of different components of K-HTC model. We disassemble K-HTC by removing the KTE, KHLA, and KCL modules in turn. In particular, removing KTE indicates that the text encoder degenerates to the traditional BERT encoder. After removing KHLA, K-HTC pays little attention on the interaction between documents and labels, and thus we directly conduct mean pooling on the output of KTE to obtain the final representation Rf of the document. Finally, omitting KCL means that we directly omit two contrastive losses | Ablation Models Macro-F1 Micro-F1 K-HTC 65.99 80.52 -w/o KTE 64.38 79.29 -w/o KHLA 63.63 78.82 -w/o KCL 64.02 79.43 | |-----------------------------------------------------------------------------------------------------------------------| Table 3: Ablation experiments on the BGC dataset. | Ablation Models Macro-F1 Micro-F1 K-HTC 81.69 87.29 -w/o KTE 80.57 86.29 -w/o KHLA 80.04 86.46 -w/o KCL 80.18 86.38 | |-----------------------------------------------------------------------------------------------------------------------| ## In Eq.(17) In The Training Process. The results on the BGC and WOS datasets are listed in Table 3 and Table 4, respectively. From these statistics, we can find that there are obvious decreases in all ablation variants, which thoroughly demonstrates the validity and non-redundancy of our K-HTC method. 
Additionally, on the BGC dataset, the importance of KHLA module is relatively stronger than other modules. It is reasonable as BGC has a more complicated label hierarchy, which puts higher demands for label learning and the interaction between the documents and labels. ## 5.4 Effect Of Knowledge On Different Levels To further verify the effect of incorporating knowledge, we analyze the performance of K-HTC and its ablation variants on different levels of the BGC dataset. Specifically, BGC has four levels of labels, with the granularity of classification getting finer from top to bottom. Figure 5 deposits the performance comparison on different levels. It is clear that as the level deepens, the performance of all methods decreases, indicating the classification difficulty increases significantly. At the same time, the gap between K-HTC and its ablation variants widens as the depth increases. This suggests that incorporating knowledge can help improve the classification effectively, especially for these deeper and more difficult levels. Furthermore, the situation is more evident in the comparison between K-HTC and its variant -w/o KHLA, which is consistent with the analysis in Section 5.3. ![7_image_0.png](7_image_0.png) No. λc λh Macro-F1 Micro-F1 ① 10−2 10−4 81.69 **87.29** | K-HTC | |-------------------------------| | Fine-tuning λc Fine-tuning λh | ② 10−1 10−4 80.14 86.17 ③ 10−3 10−4 81.01 86.90 ④ 10−4 10−4 80.60 86.71 Fine-tuning λh ⑤ 10−2 10−1 77.55 85.40 ⑥ 10−2 10−2 81.04 86.78 ⑦ 10−2 10−3 80.97 86.84 ⑧ 10−2 10−5 80.96 86.55 ## 5.5 Parameter Sensitivity To study the influence of the loss hyperparameters λc and λh in K-HTC, we conduct comprehensive parameter sensitivity experiments on the WOS dataset. The results are reported in Table 5. The first experiment is the best hyperparameters of our model. In experiment ② ∼ ④, we fix λh and fine-tune λc; in experiment ⑤ ∼ ⑧, λc is fixed and λh is fine-tuned. From the results, we find that the larger or smaller λc will lead to an obvious decrease on the classification performance. The same situation happens to λh. It is reasonable as these two hyperparameters control the weights of two contrastive losses. Too large weight will affect the original BCE classification loss, while too small weight will restrict its own effect. ## 5.6 Case Study To further illustrate the effect of incorporating knowledge graphs in the K-HTC model, we conduct case study on both WOS and BGC datasets. Specifically, in Figure 6 and 7, we present the in- ![8_image_0.png](8_image_0.png) ![8_image_1.png](8_image_1.png) put document, the knowledge retrieved from KG, the ground truth and the prediction of K-HTC, respectively. As shown in Figure 6, with the help of the knowledge *(Neural_Network, is_a, Machine_Learning)* and (Neural_Network, related_to, Computer), K-HTC reasonably makes the correct inference, i.e., *Computer Science* and *Machine* Learning in the taxonomic hierarchy. A similar situation can be found in the case of Figure 7 as well. These intuitively demonstrate the great role of knowledge and further verify the validity of our K-HTC method. More experimental analyses, such as Visualization and Bad Case Analysis, can be found in Appendix C and D. ## 6 Conclusions In this paper, we explored a motivated direction for incorporating the knowledge graph into hierarchical text classification. We first analyzed the necessity to integrate knowledge graphs and further proposed a Knowledge-enabled Hierarchical Text Classification model (K-HTC). 
Specifically, we designed a knowledge-aware text encoder, which could fuse the text representation and its corresponding concept representation learned from KGs. Subsequently, a knowledge-aware hierarchical label attention module was designed to model the interaction between the documents and hierarchical labels. More importantly, we proposed a knowledge-aware contrastive learning strategy, which could further boost the classification performance by exploiting the information inherent in the data. Finally, extensive experiments on two publicly available HTC datasets demonstrated the effectiveness of our proposed method. We hope our work will lead to more future studies. ## Limitations In our proposed K-HTC method, incorporating the knowledge graph requires the concept recognition and pre-training process, as we introduced in Section 3.2. This process may consume additional time compared with other HTC methods, but it can be done in advance and does not need to be repeated, making it suitable for both research and industrial settings. Besides, due to the errors of concept recognition algorithms, this process may introduce some noisy information in reality. This will interfere with the use of knowledge. In future work, we will attempt to utilize entity linking algorithms (Wang et al., 2023) to further guarantee the quality of recognized knowledge. Another limitation is that we utilize the label name in the KHLA module. It may not be available for some datasets with only label ids. In response to this, we can select high-frequency keywords from documents in each category, which play the same role as the label name. ## Acknowledgements This research was partially supported by grants from the National Natural Science Foundation of China (Grants No. U20A20229, No. 62106244), and the National Education Examinations Authority (Grant No. GJK2021009). ## References Siddhartha Banerjee, Cem Akkaya, Francisco PerezSorrosal, and Kostas Tsioutsiouliklis. 2019. Hierarchical transfer learning for multi-label text classification. In *Proceedings of the 57th Annual Meeting of* the Association for Computational Linguistics, pages 6295–6300. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247–1250. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. *Advances in neural information processing systems*, 26. Lijuan Cai and Thomas Hofmann. 2004. Hierarchical document categorization with support vector machines. In *Proceedings of the thirteenth ACM international conference on Information and knowledge* management, pages 78–87. Haibin Chen, Qianli Ma, Zhenxi Lin, and Jiangyue Yan. 2021. Hierarchy-aware label semantics matching network for hierarchical text classification. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4370–4379. Zhongfen Deng, Hao Peng, Dongxiao He, Jianxin Li, and S Yu Philip. 2021. Htcinfomax: A global model for hierarchical text classification via information maximization. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 3259–3265. 
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Siddharth Gopal and Yiming Yang. 2013. Recursive regularization for large-scale classification with hierarchical and graphical dependencies. In *Proceedings* of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 257– 265. Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. *Advances in neural information processing systems*, 30. Xu Han, Shulin Cao, Xin Lv, Yankai Lin, Zhiyuan Liu, Maosong Sun, and Juanzi Li. 2018. Openke: An open toolkit for knowledge embedding. In Proceedings of the 2018 conference on empirical methods in natural language processing: system demonstrations, pages 139–144. Wei Huang, Enhong Chen, Qi Liu, Yuying Chen, Zai Huang, Yang Liu, Zhou Zhao, Dan Zhang, and Shijin Wang. 2019. Hierarchical multi-label text classification: An attention-based recurrent network approach. In *Proceedings of the 28th ACM international conference on information and knowledge management*, pages 1051–1060. Hyeju Jang, Seojin Bang, Wen Xiao, Giuseppe Carenini, Raymond Ng, and Young ji Lee. 2021. Kw-attn: Knowledge infused attention for accurate and interpretable text classification. In Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 96–107. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Kamran Kowsari, Donald E Brown, Mojtaba Heidarysafa, Kiana Jafari Meimandi, Matthew S Gerber, and Laura E Barnes. 2017. Hdltex: Hierarchical deep learning for text classification. In 2017 16th IEEE international conference on machine learning and applications (ICMLA), pages 364–371. IEEE. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Sören Auer, et al. 2015. Dbpedia–a large-scale, multilingual knowledge base extracted from wikipedia. Semantic web, 6(2):167–195. Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. Kagnet: Knowledge-aware graph networks for commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2829–2839. Ye Liu, Han Wu, Zhenya Huang, Hao Wang, Jianhui Ma, Qi Liu, Enhong Chen, Hanqing Tao, and Ke Rui. 2020. Technical phrase extraction for patent mining: A multi-level approach. In 2020 IEEE International Conference on Data Mining (ICDM), pages 1142– 1147. IEEE. Steffen Remus, Rami Aly, and Chris Biemann. 2019. Germeval 2019 task 1: Hierarchical classification of blurbs. In *KONVENS*. Kazuya Shimura, Jiyi Li, and Fumiyo Fukumoto. 2018. Hft-cnn: Learning hierarchical category structure for multi-label short text categorization. In *Proceedings* of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 811–816. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Thirty-first AAAI conference on artificial intelligence. Aixin Sun and Ee-Peng Lim. 2001. Hierarchical text classification and evaluation. In Proceedings 2001 IEEE International Conference on Data Mining, pages 521–528. IEEE. 
Jin Wang, Zhongyuan Wang, Dawei Zhang, and Jun Yan. 2017. Combining knowledge with deep convolutional neural networks for short text classification. In *IJCAI*, volume 350, pages 3172077–3172295. Kehang Wang, Qi Liu, Kai Zhang, Ye Liu, Hanqing Tao, Zhenya Huang, and Enhong Chen. 2023. Classdynamic and hierarchy-constrained network for entity linking. In *Database Systems for Advanced Applications: 28th International Conference, DASFAA* 2023, Tianjin, China, April 17–20, 2023, Proceedings, Part II, pages 622–638. Springer. Ran Wang, Xinyu Dai, et al. 2022a. Contrastive learning-enhanced nearest neighbor mechanism for multi-label text classification. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 672–679. Zihan Wang, Peiyi Wang, Lianzhe Huang, Xin Sun, and Houfeng Wang. 2022b. Incorporating hierarchy into text encoder: a contrastive learning approach for hierarchical text classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7109–7119. Zihan Wang, Peiyi Wang, Tianyu Liu, Binghuai Lin, Yunbo Cao, Zhifang Sui, and Houfeng Wang. 2022c. Hpt: Hierarchy-aware prompt tuning for hierarchical text classification. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language* Processing (EMNLP). Association for Computational Linguistics. Jonatas Wehrmann, Ricardo Cerri, and Rodrigo Barros. 2018. Hierarchical multi-label classification networks. In *International conference on machine learning*, pages 5075–5084. PMLR. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, pages 38–45. Kai Zhang, Kun Zhang, Mengdi Zhang, Hongke Zhao, Qi Liu, Wei Wu, and Enhong Chen. 2022. Incorporating dynamic semantics into pre-trained language model for aspect-based sentiment analysis. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3599–3610. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. Ernie: Enhanced language representation with informative entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1441– 1451. Jie Zhou, Chunping Ma, Dingkun Long, Guangwei Xu, Ning Ding, Haoyu Zhang, Pengjun Xie, and Gongshen Liu. 2020. Hierarchy-aware global model for hierarchical text classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1106–1117. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In *Proceedings of the IEEE international conference on computer vision*, pages 19–27. ## A Data Analysis | Hierarchical Level | BGC | WOS | |----------------------|-------|-------| | L-1 | 4.29 | 5.82 | | L-2 | 4.93 | 8.00 | | L-3 | 5.96 | − | | L-4 | 5.94 | − | | Total | 3.12 | 4.87 | Table 6: The average number of shared concepts between two arbitrary documents in the same category. "Total" refers to the shared concept situation across the whole dataset. 
We calculate the average number of shared concepts between two arbitrary documents in the same category. Table 6 illustrates this situation on different levels. The "Total" line reports the average number of shared concepts between two arbitrary documents across the whole dataset, which can be adopted as the comparison standard. We could find that the shared concepts increase as the depth deepens, except for a slight fluctuation on the fourth level of BGC. Besides, the results on different levels are all significantly larger than the "Total" line. These findings provide valid support for the Knowledge-aware Contrastive Learning (KCL) module in Section 4.3. ## B Significance Analysis | Methods | BGC | WOS | |----------------------------------------|-------|-------| | K-HTC / HPT | 0.046 | 0.016 | | Table 7: P-value between K-HTC and HPT | | | In Table 2, the experimental results of our KHTC model and HPT are relatively close. To better demonstrate the superiority of K-HTC, we do the Student t-test to clarify whether K-HTC performs better than HPT. Specifically, we repeat the experiment five times with different seeds on both BGC and WOS datasets, and report the p-value results on Macro-F1. From the results in Table 7, we find that both two results are smaller than the significance level 0.05. Therefore, we reject the hypothesis that the performances between K-HTC and HPT are approximate. It suggests that K-HTC is more effective than HPT in most circumstances. ## C Visualization ![11_image_1.png](11_image_1.png) In K-HTC, we design a Knowledge-aware Hierarchical Label Attention (KHLA) module to learn the label representation, which can further mine the interaction between documents and labels. In the hierarchical label structure, it is expected that labels with the same parent have more similar representations than those with different parents. To verify this, we plot the T-SNE projections of the learned label embedding (i.e., H learned from Eq.(7)) on the WOS dataset. Specifically, the depth of the WOS label hierarchy is 2. In Figure 8, the left part plots child labels on the second level, while the right part indicates the parent labels on the first level. From this figure, we can find that labels with the same parent are clearly clustered together, while labels with different parents are significantly farther apart from each other. This thoroughly demonstrates the effectiveness of the KHLA module. ## D Bad Case Analysis As we discussed in the Limitations section, we incorporate the knowledge graph via the concept ![11_image_0.png](11_image_0.png) recognition process. This may introduce some noise due to the inevitable errors of the recognition algorithm. More importantly, even with accurate concept recognition results, how to ensure the effectiveness of external knowledge is still a challenge. An example of this can be observed in Figure 9, although our method accurately recognizes the mentioned concepts *Clark* and *Scott*, it still introduces the noisy knowledge *(Clark, related_to, USA)* and *(Scott, related_to, USA)*. As a result, K-HTC makes a wrong prediction, i.e., Travel: USA & Canada. This indicates that we need to focus on the quality of relevant knowledge rather than roughly introducing all of them. In future work, we will attempt to design a knowledge filtering module to ensure the quality of introduced knowledge, which can further improve the performance of K-HTC. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. 
Did you describe the limitations of your work? Section 7 (after the Conclusions) A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 5.1 ✓ B1. Did you cite the creators of artifacts you used? Section 5.1 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 5.1 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 5.1 ## C ✓ **Did You Run Computational Experiments?** Section 5 C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Not applicable. The efficiency is not our focus or goal in this work. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5.1 and Section 5.5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 and Appendix B ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 5.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? 
For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
zhang-etal-2023-many
How Many Answers Should I Give? An Empirical Study of Multi-Answer Reading Comprehension
https://aclanthology.org/2023.findings-acl.359
The multi-answer phenomenon, where a question may have multiple answers scattered in the document, can be well handled by humans but is challenging enough for machine reading comprehension (MRC) systems. Despite recent progress in multi-answer MRC, there lacks a systematic analysis of how this phenomenon arises and how to better address it. In this work, we design a taxonomy to categorize commonly-seen multi-answer MRC instances, with which we inspect three multi-answer datasets and analyze where the multi-answer challenge comes from. We further analyze how well different paradigms of current multi-answer MRC models deal with different types of multi-answer instances. We find that some paradigms capture well the key information in the questions while others better model the relation between questions and contexts. We thus explore strategies to make the best of the strengths of different paradigms. Experiments show that generation models can be a promising platform to incorporate different paradigms. Our annotations and code are released for further research.
# How Many Answers Should I Give? An Empirical Study Of Multi-Answer Reading Comprehension Chen Zhang1, Jiuheng Lin1, Xiao Liu1**, Yuxuan Lai**3, Yansong Feng1,2∗ , Dongyan Zhao1,4,5 1 Wangxuan Institute of Computer Technology, Peking University, China 2 The MOE Key Laboratory of Computational Linguistics, Peking University, China 3 Department of Computer Science, The Open University of China 4 State Key Laboratory of Media Convergence Production Technology and Systems 5 Beijing Institute for General Artificial Intelligence {zhangch,lxlisa,fengyansong,zhaody}@pku.edu.cn [email protected] [email protected] ## Abstract The multi-answer phenomenon, where a question may have multiple answers scattered in the document, can be well handled by humans but is challenging enough for machine reading comprehension (MRC) systems. Despite recent progress in multi-answer MRC, there lacks a systematic analysis of how this phenomenon arises and how to better address it. In this work, we design a taxonomy to categorize commonlyseen multi-answer MRC instances, with which we inspect three multi-answer datasets and analyze where the multi-answer challenge comes from. We further analyze how well different paradigms of current multi-answer MRC models deal with different types of multi-answer instances. We find that some paradigms capture well the key information in the questions while others better model the relationship between questions and contexts. We thus explore strategies to make the best of the strengths of different paradigms. Experiments show that generation models can be a promising platform to incorporate different paradigms. Our annotations and code are released for further research1. ## 1 Introduction In the typical setting of machine reading comprehension, such as SQuAD (Rajpurkar et al., 2016), the system is expected to extract a single answer from the passage for a given question. However, in many scenarios, questions may have multiple answers scattered in the passages, and all the answers should be found to completely answer the questions, such as the examples illustrated in Figure 1. Recently, a series of MRC benchmarks featuring multi-answer instances have been constructed, including DROP (Dua et al., 2019), Quoref (Dasigi ∗Corresponding author. 1https://github.com/luciusssss/ how-many-answers ![0_image_0.png](0_image_0.png) Figure 1: Two examples from existing multi-answer MRC datasets. et al., 2019) and MultiSpanQA (Li et al., 2022). Most current research efforts focus primarily on improving the overall QA performance on these benchmarks (Hu et al., 2019; Segal et al., 2020; Li et al., 2022). Yet, as far as we know, there still lacks a systematic analysis of how the phenomenon of multi-answer arises and how we can better tackle this challenge. In this paper, we systematically analyze the categorization of multi-answer MRC instances and investigate how to design a strong multi-answer MRC system. We try to answer the following research questions: (1) Where does the multianswer challenge come from? (2) How do different MRC models specifically deal with the multianswer challenge? (3) How can we design better models by combining different multi-answer MRC paradigms? We first analyze existing multi-answer MRC datasets to track the origin of the multi-answer challenge. Previous works have attempted to categorize multi-answer instances primarily based on the distances or relationships between multiple answers (Li et al., 2022; Ju et al., 2022). 
Yet, they did not holistically consider the interaction between questions and contexts. We observe that in some cases the number of answers is indicated in the question itself (*two players* in Example A of Figure 1) while in others we have no idea until we read the documents carefully (Example B of Figure 1). To better understand this challenge, we develop a taxonomy for the multi-answer phenomenon, based on how the number of answers is determined: the question itself suffices, or both the question and the passage should be taken into consideration. We annotate 6,857 instances from DROP, Quoref, and MultiSpanQA based on our taxonomy and find that the procedure of dataset construction has a large influence on the expressions in the questions. Most questions in crowdsourced datasets contain certain clues indicating the number of answers. By contrast, real-world information-seeking questions are less likely to specify the number of answers, which is usually dependent on the passages. We further use our annotations to examine the performance of current MRC solutions regarding the multi-answer challenge (Hu et al., 2019; Segal et al., 2020; Li et al., 2022), which can be categorized into 4 paradigms, i.e., TAGGING, NUMPRED, ITERATIVE and GENERATION. We analyze their strengths and weaknesses and find that some efforts, e.g., NUMPRED, are good at capturing the key information in the questions, while others, e.g., ITER-ATIVE, can better model the relation between questions and contexts. This motivates us to investigate better ways to benefit from different paradigms. Given the complementary nature of these paradigms, we wonder whether a combination of paradigms improves performance on multi-answer MRC. We explore two strategies, early fusion and late ensemble, to benefit from different paradigms. With a generation model as the backbone, we attempt to integrate the paradigms NUMPRED and INTERATIVE, in a lightweight Chain-of-Thought style (Wei et al., 2022). Experiments show that the integration remarkably improves the performance of generation models, demonstrating that GENERA-TION is a promising platform for paradigm fusion. Our contributions are summarized as follows: (1) We design a taxonomy for multi-answer MRC instances according to how the number of answers can be determined. It considers both questions and contexts simultaneously, enlightening where the multi-answer challenge comes from. (2) We annotate 6,857 instances from 3 datasets with our taxonomy, which enables us to examine 4 paradigms for multi-answer MRC in terms of their strengths and weaknesses. (3) We explore various strategies to benefit from different paradigms. Experiments show that generation models are promising to be backbones for paradigm fusion. ## 2 Task Formulation In multi-answer MRC, given a question Q and a passage P, a model should extract several spans, A = {a1, a2*, ..., a*n}(n ≥ 1), from P to answer Q. Each span, ai ∈ A, corresponds to a partial answer to Q, and the answer set A as a whole answers Q completely. These spans can be contiguous or discontiguous in the passage. We distinguish between two terms, *multi-answer* and *multi-span*, which are often confused in previous works. *Multi-answer* indicates that a question should be answered with the complete set of entities or utterances. *Multi-span* is a definition from the perspective of answer annotations. In certain cases, the answer annotation of a question can be either single-span or multi-span, as explained in the next paragraph. 
Ideally, we expect that the answers to a multi-answer question should be annotated as multi-span in the passage, where each answer is grounded to a single span, although some of them can be contiguous in the passage. ## Q0: What'S Canada'S Official Language? P: [...] **English** And **French**, Are The Official Languages Of The Government Of Canada. [...] For example, in Q0, there are two answers, *English* and *French*, to the given question. According to the annotation guidelines of SQuAD, one might annotate this instance with a single continuous span English and French. Yet, this form of annotation is not preferred in the multi-answer MRC setting. It blurs the boundary of different answers and fails to denote explicitly the number of expected answers. Thus, it is suboptimal for a comprehensive model evaluation. Instead, we suggest denoting each answer with distinct spans, say, annotating this instance with two spans, *English* and *French*. With this criterion, we can encourage models to disentangle different answers. With fine-grained answer annotations, we can also assess how well a model answers a question sufficiently and precisely. This annotation criterion generally conforms to the annotation guidelines of existing multi-answer datasets, e.g., DROP, Quoref and MultiSpanQA. ![2_image_0.png](2_image_0.png) | Type | Question | # Ans. | |----------------------|-----------------------------------------------------------|----------| | Cardinal | Which two players completed 1-yard TD pass? | 2 | | Ordinal | Who scored the first touchdown of the game? | 1 | | Comp./Super. | What's the largest pizza chain in America? | 1 | | Is San Juan Bautista | | | | Alternative | incorporated or | 1 | | unincorporated? | | | | Other | What are the first names of the trio who try to call 911? | 3 | | Semantics | | | A few instances violating the criterion are considered as bad annotations, as discussed in Section 4.2. See more remarks on the task formulation in Appendix A. ## 3 Taxonomy Of Multi-Answer Mrc To better understand the challenge of multi-answer, we first design a taxonomy to categorize various multi-answer MRC instances. It assesses how the number of answers relates to the question or passage provided. Different from the previous works that classify questions according to the distances or relations between multiple answers (Li et al., 2022; Ju et al., 2022), our taxonomy, taking both questions and passages into consideration, focuses on how the number of answers is determined. This enables us to analyze multi-answer questions and single-answer questions in a unified way. We illustrate our taxonomy in Figure 2 and elaborate on each category as follows. Question-Dependent If one can infer the exact number of answers from the question without referring to the passage, this instance belongs to the question-dependent category. According to whether there are clue words that directly indicate the number of answers, this type is further divided into two sub-categories: (a) In a with-clue-words question, one can find a few words that indicate the number of answers. In Q1, the word two in the question indicates that two answers are expected. Q1: What are the two official languages of Puerto Rico? P: [...] **English** is an official language of the Government of Puerto Rico. [...] As another official language, Spanish is widely used in Puerto Rico. [...] 
We group the clue words into five types: cardinal, ordinal, comparative/superlative, alternative, and other lexical semantics, as illustrated in Table 1. (b) In a without-clue-words question, although we can not locate obvious clue words, we can infer the number of answers with sentence semantics or commonsense knowledge. In Q2, we can determine that there is only one conversion result for the question based on sentence semantics instead of any single words. Q2: 1 light year equal to how many km? P: [...] The light-year is a unit of length used to express astronomical distances. It is about **9.5 trillion kilometres** or 5.9 trillion miles. [...] In Q3, we can infer that the following question has only one answer, based on the commonsense that there is only one winner of a given Super Bowl. Q3: Who won Super Bowl XXXIX? P: [...] The Eagles advanced to Super Bowl XXXIX, where they dueled the 2004 **New England Patriots** season. [...] The Patriots won 24-21. [...] Passage-Dependent In a passage-dependent instance, the question itself is not adequate to infer the number of answers. One needs to rely on the provided passage to decide how many answers are needed to answer the question. In Q4, we have no idea of the number of answers solely based on the question. If we refer to the passage, we will find ten answers to the question. Q4: Which countries does the Danube River flow through? P: [...] Originating in **Germany**, the Danube flows southeast for 2,850 km, passing through or bordering Austria, Slovakia, Hungary, Croatia, Serbia, Romania, Bulgaria, **Moldova** and **Ukraine** before draining into the Black Sea. [...] ## 4 Analyses Of Multi-Answer Datasets We investigate existing multi-answer datasets based on our designed taxonomy to analyze where the multi-answer challenge comes from. | Dataset | All | Single-Ans. | Multi-Ans. | |-------------|-------|---------------|--------------| | DROP | 3,133 | 2,609 | 524 | | Quoref | 2,418 | 2,198 | 220 | | MultiSpanQA | 1,306 | 653 | 653 | | Total | 6,857 | 5,460 | 1,397 | ## 4.1 Datasets We annotate the validation sets of three widelyused multi-answer MRC datasets, i.e., DROP (Dua et al., 2019), Quoref (Dasigi et al., 2019), and MultiSpanQA (Li et al., 2022). The number of annotated questions is listed in Table 2 and more statistics are in Appendix B. DROP is a crowdsourced MRC dataset for evaluating the discrete reasoning ability. The annotators are encouraged to devise questions that require discrete reasoning such as arithmetic. DROP has four answer types: numbers, dates, single spans, and sets of spans. Since the previous two types of answers are not always exact spans in the passages, we only consider the instances whose answers are single spans or sets of spans. Quoref focuses on the coreferential phenomena. The questions are designed to require resolving coreference among entities. 10% of its instances require multiple answer spans. MultiSpanQA is a dataset specialized for multispan reading comprehension. The questions are extracted from NaturalQuestions (Kwiatkowski et al., 2019), which are real queries from the Google search engine. ## 4.2 Annotation Annotation Process Our annotation process is two-staged: we first automatically identify some question-dependent instances and then recruit annotators to classify the remaining ones. In the first stage, we automatically identify the questions containing certain common clue words such as numerals (full list in Appendix B) to reduce the workload of whole-process annotation. 
Afterward, the annotators manually check whether each instance is question-dependent. Out of the 4,594 recalled instances, 3,727 are identified as question-dependent. In the second stage, we recruit annotators to annotate the remaining 3,130 instances. For each instance, given both the question and the answers, the annotators should first check whether the form of answers is correct and mark incorrect cases as bad-annotation2. We show examples of common bad-annotation cases in Table 10. After filtering out the bad-annotation ones, the annotators are presented with the question only and should decide whether they could determine the number of answers solely based on the question. If so, this instance is annotated as question-dependent; otherwise passage-dependent. For a question-dependent instance, the annotators are further asked to extract the clue words, if any, from the question, which determines whether the instance is with-clue-words or without-clue-words. Quality Control Six annotators participated in the annotation after qualification. Each instance is annotated by two annotators. In case of any conflict, a third annotator resolves it. An instance is classified as bad-annotation if any annotator labels it as bad-annotation. Cohen's Kappa between two initial annotators is 0.70, indicating substantial agreement. See more details in Appendix B. ## 4.3 Analyses Of Annotation Results With our annotated data, we study how the multianswer instances differ across different datasets under our designed taxonomy. We find that the distributions of instance types are closely related to how the datasets are constructed. Instance Types The distributions of instance types in different datasets are shown in Table 3. Question-dependent prevails in DROP and Quoref, making up over 70% of the two datasets. In contrast, most instances in MultiSpanQA are passage-dependent. This difference stems from how the questions are collected. DROP and Quoref use crowdsourcing to collect questions with specific challenges. Given a passage, the annotators know the answers in advance and produce questions that can only be answered through certain reasoning skills. These artificial questions are more likely to contain clues to the number of answers, such as the question with ordinal in Table 1. By contrast, the questions in MultiSpanQA are collected from search engine queries. Users generally have no idea of the answers to the queries. The number of answers, as a result, is more often de-2In the first stage, the annotators also need to check whether an instance is bad-annotation. | Dataset | passage-dependent | question-dependent | bad-annotation | | | | |-------------|---------------------|----------------------|------------------|--------|-------------|-----------| | All | with-clue-word | no-clue-word | | | | | | DROP | 826 (26.4%) | 2,242 (71.6%) | 2,204 (70.3%) | 38 | (1.2%) | 65 (2.1%) | | Quoref | 711 (29.4%) | 1,704 (70.5%) | 1,639 (67.8%) | 65 | (2.7%) | 3 (0.2%) | | MultiSpanQA | 991 (75.9%) | 285 (21.8%) | 121 | (9.3%) | 164 (12.6%) | 30 (2.3%) | | Total | 2,528 (36.9%) | 4,231 (61.7%) | 3,964 (57.8%) | 267 | (3.9%) | 98 (1.4%) | | Dataset | with-clue-word | Cardinal | Ordinal | Comp./Super. 
| Alternative | Other Semantics | | | | | |-------------|------------------|------------|------------|----------------|---------------|-------------------|------------|--------|--------|---------------| | DROP | 2,204 | 113 | (5.1%) | 592 (26.9%) | 1,298 (58.9%) | 1,214 (55.1%) | 135 | (6.1%) | | | | Quoref | 1,639 | 83 | (5.1%) | 35 | (2.1%) | 25 | (1.5%) | 0 | (0.0%) | 1,501 (91.6%) | | MultiSpanQA | 121 | 51 (41.8%) | 26 (21.3%) | 23 (19.0%) | 2 | (1.6%) | 19 (15.6%) | | | | Table 3: Distribution of instance types in three datasets. Table 4: Distribution of clue word types in three datasets. A question may contain multiple types of clue words. pendent on the provided passages, such as Q4 in Section 3. Clue Words Since a large portion (57.8%) of the annotated instances belong to the with-clue-word type, we further investigate the distribution of clue words in different datasets, shown in Table 4. On the one hand, the questions contain a large variety of clue words, demonstrating the complexity of multi-answer MRC. On the other hand, the prevailing type of clue words is different in each dataset, reflecting the preference in dataset construction. Specifically, nearly 60% of the with-clue-word questions in DROP are alternative questions with comparatives/superlatives, because DROP's annotators are encouraged to inject discrete reasoning challenges, e.g., comparison, when writing questions. In Quoref, 91% of the clue words indicate the number of answers through their lexical semantics. This unbalanced distribution results from the emphasis on coreference resolution: most questions begin with *what is the name of the person who ...*, where *name of the person* is identified as clue words. In MultiSpanQA, whose questions are search engine queries, 63% of the with-clue-word questions contain numerals. If users already know the number of desired answers, they tend to restrict it in the question, such as seven wonders of the world. We provide more analyses on of how the instance types are distributed with respect to the specific number of answers in Appendix C. ## 5 Existing Multi-Answer Mrc Models Based on our categorization of the multi-answer instances, we continue to investigate how existing multi-answer MRC models perform differently on various types of multi-answer instances. We summarize current solutions into four paradigms according to how they obtain multiple answers, as illustrated in Figure 3. T**AGGING** Segal et al. (2020) cast the multianswer MRC task as a sequence tagging problem, similar to named entity recognition (NER), so that the model can extract multiple non-contiguous spans from the context. NUMPRED **(Number Prediction)** Hu et al. (2019) first predict the number of answers k as an auxiliary task and then select the top k nonoverlapped ones from the output candidate spans. I**TERATIVE** Searching for evidence iteratively is widely adopted in many QA tasks (Xu et al., 2019; Zhao et al., 2021; Zhang et al., 2021), but it is not explored in multi-answer MRC. We adapt this idea to extract multiple answers iteratively. In each iteration, we append the previously extracted answers to the question, with the word *except* in between, and then feed the updated question to a single-answer MRC model. The iterative process terminates when the model predicts no more answers. G**ENERATION** Generation has been adopted as a uniform paradigm for many QA tasks (Khashabi et al., 2020, 2022), but it is less explored on multianswer MRC. 
For GENERATION, we concatenate all answers, with semicolons as separators, to form an output sequence, and finetune the model to generate it conditioned on the question and passage. ## 5.1 Experimental Setup Implementation Details We use RoBERTabase (Liu et al., 2019) for the three extractive ![5_image_0.png](5_image_0.png) | Model | EM | PM | | | | | | | | |-------------|---------|----------|-------|-------|-------|-------|-------|--------|--------| | P | R | F1 | P | R | F1 | | | | | | DROP | | | | | | | | | | | TAGGING | 61.86 | 63.91 | 62.87 | 77.53 | 77.39 | 77.46 | | | | | NUMPRED | 61.59 | 56.77 | 59.09 | 76.71 | 74.86 | 75.77 | | | | | ITERATIVE | 60.66 | 60.07 | 60.36 | 76.19 | 76.04 | 76.11 | | | | | GENERATION | 60.07 | 57.15 | 58.58 | 75.39 | 72.39 | 73.86 | | | | | Quoref | | | | | | | | | | | TAGGING | 71.00 | 72.21 | 71.60 | 80.44 | 79.74 | 80.09 | | | | | NUMPRED | 65.61 | 63.57 | 64.57 | 77.30 | 78.20 | 77.75 | | | | | ITERATIVE | 67.28 | 66.35 | 66.81 | 78.57 | 78.58 | 78.57 | | | | | GENERATION | 63.57 | 63.39 | 63.48 | 73.38 | 74.02 | 73.70 | | | | | MultiSpanQA | | | | | | | | | | | TAGGING | 61.31 | 68.84 | 64.85 | 80.45 | 83.08 | 81.75 | | | | | NUMPRED | 55.03 | 46.06 | 50.15 | 80.16 | 75.26 | 77.63 | | | | | ITERATIVE | 66.32 | 67.98 | 67.14 | 84.39 | 80.96 | 82.64 | | | | | GENERATION | 65.40 | 62.60 | 63.97 | 82.06 | 78.14 | 80.06 | Model | p-dep. | q-dep. | | All | w/-clue | w/o-clue | | | | | | | | | DROP | | | | | | | | | | | TAGGING | 74.57 | 79.11 | 80.88 | 68.77 | | | | | | | NUMPRED | 72.37 | 77.54 | 79.32 | 70.08 | | | | | | | ITERATIVE | 73.47 | 77.60 | 79.21 | 65.73 | | | | | | | GENERATION | 72.18 | 74.77 | 76.19 | 72.62 | | | | | | | Quoref | | | | | | | | | | | TAGGING | 70.60 | 84.86 | 85.23 | 75.76 | | | | | | | NUMPRED | 69.45 | 81.88 | 82.44 | 70.12 | | | | | | | ITERATIVE | 71.42 | 82.18 | 82.37 | 77.30 | | | | | | | GENRATION | 66.31 | 77.41 | 78.38 | 52.63 | | | | | | | MultiSpanQA | | | | | | | | | | | TAGGING | 82.28 | 79.66 | 86.60 | 73.36 | | | | | | | NUMPRED | 77.77 | 77.11 | 78.19 | 78.77 | | | | | | | ITERATIVE | 82.78 | 82.09 | 87.22 | 77.80 | | | | | | | GENERATION | 80.57 | 78.05 | 81.73 | 75.85 | | | | | | paradigms and BART-base (Lewis et al., 2020) for GENERATION. We train models on the training sets of each dataset and evaluate them on the corresponding validation sets with our instance type annotations. See more details in Appendix D.1. Metrics We adopt the official metrics of MultiSpanQA (Li et al., 2022), including the precision (P), recall (R), and F1 in terms of exact match (EM) and partial match (PM). See Appendix D.2 for details. ## 5.2 Results And Analyses We report the overall performance in Table 5, and the performance on different instance types in Table 6. We observe that each of these paradigms has its own strengths and weaknesses. TAGGING outperforms other paradigms on DROP and Quoref, whose dominating instance type is question-dependent. Although TAG-GING has no explicit answer number prediction step, it can still exploit this information implicitly because it takes the question into account during the sequential processing of every token. Besides, TAGGING, as a common practice for entity recognition, is good at capturing the boundaries of entities. Thus, it is not surprising that it performs the best on DROP and Quoref, most of whose answers are short entities. ITERATIVE achieves the best overall performance on MultiSpanQA, whose prevailing instance type is passage-depenent. 
This paradigm does not directly exploit the information of the number of answers given in the question. Rather, it encourages adequate interactions between questions and passages, performing single-answer extraction at each step. As a result, ITERATIVE does well for the questions whose number of answers heavily depends on the given context. As for NUMPRED, although we expect high performance on question-dependent instances, it lags behind TAGGING by approximately 2% in PM F1 on DROP and Quoref. This might result ![6_image_0.png](6_image_0.png) from the gap between training and inference. The model treats the answer number prediction and answer span extraction as two separate tasks during training, with limited interaction. Yet during inference, the predicted number of answers is used as a hard restriction on multi-span selection. Different from the decent performance on DROP and Quoref, NUMPRED performs worst among the four paradigms on MultiSpanQA, because it is difficult for models to accurately predict the number of answers for a long input text that requires thorough understanding. Among all paradigms, GENERATION generally performs the worst. Under the same parameter scale, extractive models seem to be the better choice for tasks whose outputs are exact entity spans from the input, while generation models do well in slightly longer answers. This also explains the smaller gap between GENERATION and extractive paradigms on MultiSpanQA compared to that on DROP and Quoref: MultiSpanQA has many descriptive long answers instead of short entities only. ## 6 Fusion Of Different Paradigms From the above analysis, we can see that extractive methods can better locate exact short spans in the passage, and NUMPRED can provide potential guidance on the number of answers. Meanwhile, the generation models can better handle longer answers and are more adaptable to different forms of inputs and outputs. Now an interesting question is how to combine different paradigms to get the best of both worlds. We explore two strategies for combining different paradigms: **early fusion** and **late ensemble**. The former mixes multiple paradigms in terms of model architectures while the latter ensembles the predictions of different models. We discuss our exploration of late ensemble in Appendix E.1 since model ensemble is a well-explored technique. Here we primarily elaborate on early fusion. We carry out a series of pilot studies to demonstrate the potential of paradigm fusion. Previous works attempt to fuse two extractive paradigms, TAGGING and NUMPRED (Segal et al., 2020; Li et al., 2022). However, they only lead to marginal improvements, probably because TAG-GING can already implicitly determine answer numbers well and the help of NUMPRED is thus limited. Although the performance of base-size generation models on multi-answer MRC is inferior to that of extractive ones, generation models of larger sizes show great potential with more parameters and larger pre-training corpora (Khashabi et al., 2020, 2022). More importantly, GENERATION can easily adapt to various forms of inputs and outputs. We carry out pilot studies using a generation model as the backbone and benefiting from the ideas of other paradigms. We propose several lightweight methods to combine GENERATION with NUMPRED and ITERATIVE, as illustrated in Figure 4. GENERATION + NUMPRED Inspired by recent works on Chain-of-Thought (Wei et al., 2022), we guide the model with prompts indicating the number of answers. 
We introduce a NUMPRED prompt sentence (NPS) in the form of *There* are {2, 3, ...} *answers/There is only one answer*. We experiment with two variants, multitask and pipeline. In the multitask variant, the model outputs an NPS before enumerating all the answers. In the pipeline variant, we predict the number of answers with a separate classifier and then append the NPS to the question as extra guidance. GENERATION + I**TERATIVE** We substitute the original extractor of ITERATIVE with a generator. The iterative process terminates when the model outputs the string *No answer*. Besides the normal setting, we experiment with another variant that additionally outputs an NPS in the form of The number of remaining answers is {1, 2, 3*, ...*}. Results Our main experiments are conducted with BART-base and BART-large due to our limited computational budget. For the pipeline variant of GENERATION + NUMPRED, we use RoBERTabase as an answer number classifier. The overall experiment results are reported in Table 7 and the results on different question types are reported in Appendix E.2. When GENERATION is multitasking with NUMPRED, it outperforms the vanilla one consistently. The NPS in the output provides a soft but useful hint for the succeeding answer generation, improving the accuracy of answer number prediction by 1.7% on average for BART-base. The pipeline variant is often inferior to the multitasking one due to error propagation. Especially, its performance drops a lot on MultiSpanQA, whose instances are passage-dependent. The accuracy of the answer number classifier on MultiSpanQA lags behind that on the other two datasets by more than 12%. Thus the NPS in the input, with an unreliably predicted answer number, is more likely to mislead the subsequent answer span generation. The combination of GENERATION and ITERA-TIVE does not always lead to improvement. This might be because the answer generation process of GENERATION is already in an iterative style: in the output sequence, each answer is generated conditioned on the previously-generated ones. The incorporation of ITERATIVE thus does not lead to further improvement. When we further introduce an NPS with the number of remaining answers, the performance generally outperforms the normal setting. This proves that GENERATION, as a backbone, is easy to integrate with various hints. Pilot Study on GPT-3.5 To investigate whether these fusion strategies work on larger models, we conduct a pilot study on GPT-3.5. We use the 653 multi-answer instances in the validation set of MultiSpanQA for experiments. The prompts are listed in Appendix E.2. The experiment results are shown in Table 8. When given only one example for in-context learning, GPT-3.5 can already achieve 79.27% PM F1 on the multi-answer instances, with only a small gap between BART trained on full data. 
Its EM | Model | Base | Large | | | |----------------------|--------|---------|-------|-------| | EM | PM | EM | PM | | | DROP | | | | | | Vanilla GENERATION | 58.58 | 73.86 | 66.43 | 80.55 | | +NUMPRED (multitask) | 60.02 | 74.34 | 69.61 | 82.85 | | +NUMPRED (pipeline) | 59.19 | 73.94 | 66.45 | 80.63 | | +ITERATIVE (normal) | 58.44 | 73.58 | 66.55 | 80.53 | | +ITERATIVE (number) | 58.98 | 74.07 | 68.19 | 82.17 | | Quoref | | | | | | Vanilla GENERATION | 63.48 | 73.70 | 76.57 | 84.47 | | +NUMPRED (multitask) | 66.25 | 75.43 | 77.04 | 84.45 | | +NUMPRED (pipeline) | 67.94 | 77.42 | 75.42 | 83.66 | | +ITERATIVE (normal) | 68.81 | 78.23 | 74.72 | 82.60 | | +ITERATIVE (number) | 63.33 | 73.34 | 76.67 | 84.57 | | MultiSpanQA | | | | | | Vanilla GENERATION | 63.97 | 80.06 | 69.13 | 84.61 | | +NUMPRED (multitask) | 64.85 | 80.58 | 69.31 | 84.82 | | +NUMPRED (pipeline) | 39.71 | 60.94 | 45.34 | 68.09 | | +ITERATIVE (normal) | 63.26 | 79.97 | 65.62 | 82.88 | | +ITERATIVE (number) | 63.84 | 80.04 | 66.77 | 83.41 | Table 7: The performance (EM F1 and PM F1) of different strategies for early fusion of paradigms. Model Setting **EM F1 PM F1** Vanilla BART-base Supervised 66.77 81.24 Vanilla BART-large Supervised 71.93 85.83 Vanilla GPT-3.5 One-Shot 53.34 79.27 GPT-3.5 + NUMPED One-Shot 63.45 82.38 Table 8: The performance of BART and GPT-3.5 on the multi-answer instances of MultiSpanQA. F1 score is low because GPT-3.5 cannot handle the boundaries of answer spans well. This is not unsurprising since one example is not sufficient for GPT-3.5 to learn the annotation preference of span boundaries in MultiSpanQA. If we ask GPT-3.5 to predict the number of answers before giving all the answers, we observe an improvement of 10.1% EM F1 and 3.1% PM F1. This proves the effectiveness of fusing NUMPED with larger generation models As evidenced by the above trials, it is promising to fusion different paradigms. We hope that our exploration will inspire future works adopting larger generation models for multi-answer MRC. ## 7 Related Works Compared to the vast amount of single-answer MRC datasets, the resources for multi-answer MRC are limited. Aside from the datasets in Section 4.1, MASH-QA (Zhu et al., 2020) focuses on the healthcare domain, with 27% of the questions having multiple long answers, ranging from phrases to sentences. CMQA (Ju et al., 2022) is another multi-answer dataset in Chinese, featuring answers with conditions or different granularities. For our analysis, we select two commonly-used datasets, DROP and Quoref, as well as a newlyreleased dataset, MultiSpanQA. Current models addressing multi-answer MRC generally fall into two paradigms: TAGGING (Segal et al., 2020) and NUMPRED (Hu et al., 2019), as explained in Section 5. ITERATIVE (Xu et al., 2019; Zhao et al., 2021; Zhang et al., 2021; Gao et al., 2021) and GENERATION (Khashabi et al., 2020, 2022) have been adopted for many types of QA tasks including knowledge base QA, multiplechoice QA, and open-domain QA. Nevertheless, their performance on multi-answer MRC is less explored. In our paper, we also study how to adapt these paradigms for multi-answer MRC. Apart from the exploration of model architectures for multi-answer MRC, Lee et al. (2023) attempt to generate multi-answer questions as data augmentation. Previous works have made preliminary attempts in fusing two extractive paradigms. Segal et al. (2020) adopt a single-span extraction model for single-answer questions and TAGGING for multianswer questions; Li et al. 
(2022) add a NUMPRED head to the TAGGING framework. The predicted number of answers is used to adjust the tagging results. Both strategies lead to marginal improvement over the baselines. We instead resort to GENERA-TION for paradigm fusion, considering its potential with larger sizes and its flexibility in inputs and outputs. ## 8 Conclusion In this paper, we conduct a systematic analysis for multi-answer MRC. We design a new taxonomy for multi-answer instances based on how the number of answers is determined. We annotate three datasets with the taxonomy and find that multi-answer is not merely a linguistic phenomenon; rather, many factors contribute to it, especially the process of data collection. With the annotation, we further investigate the performance of four paradigms for multi-answer MRC and find their strengths and weaknesses. This motivates us to explore various strategies of paradigm fusion to boost performance. We believe that our taxonomy can help determine what types of questions are desirable in the annotation process and aid in designing more practical annotation guidelines. We hope that our annotations can be used for more fine-grained diagnoses of MRC systems and encourage more robust MRC models. ## Limitations First, our taxonomy of multi-answer MRC instances only considers whether we know the *exact* number of answers from the questions. In some cases, one might have an *imprecise estimate* of answer numbers from the question. For example, for the question *Who are Barcelona's active players?*, one might estimate that there are dozens of active players for this football club. Yet, these estimations are sometimes subjective and difficult to quantify. Therefore, this instance is classified as passage-dependent according to our current taxonomy. We will consider refining our taxonomy to deal with these cases in the future. Second, we did not conduct many experiments with pre-trained models larger than the large-size ones due to limited computational budgets. Generation models of larger sizes show great potential with more parameters and larger pre-training corpora. We encourage more efforts to deal with multi-answer MRC with much larger models, such as GPT-3.5. ## Acknowledgments This work is supported by NSFC (62161160339). We would like to thank the anonymous reviewers for their valuable suggestions, and our great annotators for their careful work, especially Zhenwei An, Nan Hu, and Hejing Cao. Also, we would like to thank Quzhe Huang for his help in this work. For any correspondence, please contact Yansong Feng. ## References Pradeep Dasigi, Nelson F. Liu, Ana Marasovic, Noah A. ´ Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5925–5932, Hong Kong, China. Association for Computational Linguistics. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368–2378, Minneapolis, Minnesota. Association for Computational Linguistics. 
Yifan Gao, Henghui Zhu, Patrick Ng, Cicero Nogueira dos Santos, Zhiguo Wang, Feng Nan, Dejiao Zhang, Ramesh Nallapati, Andrew O. Arnold, and Bing Xiang. 2021. Answering ambiguous questions through generative evidence fusion and roundtrip prediction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3263–3276, Online. Association for Computational Linguistics. Minghao Hu, Yuxing Peng, Zhen Huang, and Dongsheng Li. 2019. A multi-type multi-span network for reading comprehension that requires discrete reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1596–1606, Hong Kong, China. Association for Computational Linguistics. Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. 1991. Adaptive mixtures of local experts. *Neural computation*, 3(1):79–87. Yiming Ju, Weikang Wang, Yuanzhe Zhang, Suncong Zheng, Kang Liu, and Jun Zhao. 2022. CMQA: A dataset of conditional question answering with multiple-span answers. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1697–1707, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Daniel Khashabi, Yeganeh Kordi, and Hannaneh Hajishirzi. 2022. Unifiedqa-v2: Stronger generalization via broader cross-format training. *arXiv preprint* arXiv:2202.12359. Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UNIFIEDQA: Crossing format boundaries with a single QA system. In *Findings of the Association for Computational Linguistics:* EMNLP 2020, pages 1896–1907, Online. Association for Computational Linguistics. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. *Transactions of the Association for Computational Linguistics*, 7:452–466. Seongyun Lee, Hyunjae Kim, and Jaewoo Kang. 2023. Liquid: A framework for list question answering dataset generation. *arXiv preprint arXiv:2302.01691*. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Haonan Li, Martin Tomko, Maria Vasardani, and Timothy Baldwin. 2022. MultiSpanQA: A dataset for multi-span question answering. In *Proceedings of* the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1250–1260, Seattle, United States. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. 
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Elad Segal, Avia Efrat, Mor Shoham, Amir Globerson, and Jonathan Berant. 2020. A simple and effective model for answering multi-span questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3074–3080, Online. Association for Computational Linguistics. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Kun Xu, Yuxuan Lai, Yansong Feng, and Zhiguo Wang. 2019. Enhancing key-value memory neural networks for knowledge based question answering. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2937–2947, Minneapolis, Minnesota. Association for Computational Linguistics. Chen Zhang, Yuxuan Lai, Yansong Feng, and Dongyan Zhao. 2021. Extract, integrate, compete: Towards verification style reading comprehension. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 2976–2986, Punta Cana, Dominican Republic. Association for Computational Linguistics. Chen Zhao, Chenyan Xiong, Jordan Boyd-Graber, and Hal Daumé III. 2021. Multi-step reasoning over unstructured text with beam dense retrieval. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4635–4641, Online. Association for Computational Linguistics. Ming Zhu, Aman Ahuja, Da-Cheng Juan, Wei Wei, and Chandan K. Reddy. 2020. Question answering with long multiple-span answers. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 3840–3849, Online. Association for Computational Linguistics. ## A Additional Remarks On Task Formulation As discussed in Section 2, *multi-answer* and *multispan* are two orthogonal concepts. We have already shown an example (Q0 in Section 2) where a *multianswer* question can be annotated as *single-span* by certain annotation guidelines. Here is another example to demonstrate the difference between *multianswer* and *multi-span*. Q: Which offer of Triangle-Transit is most used by students? P: [...] Triangle-Transit offers **scheduled**, fixed-route regional and commuter **bus service**. The first is most used by students. This is an example where a *single-answer* question can be annotated as *multi-span*. A single answer, *scheduled bus service*, will be annotated as multiple-span, i.e., *scheduled* and *bus service* in the passage. 
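To make the contrast concrete, the two notions can be written out as data records: in a multi-answer instance the outer answer list has several entries, while in a multi-span single-answer instance one entry is split across several spans. The field layout below is ours and purely illustrative.

```python
# Multi-answer (Q0): two answers, each occupying a single span.
multi_answer = {
    "question": "What's Canada's official language?",
    "answers": [["English"], ["French"]],       # 2 answers x 1 span each
}

# Single-answer, multi-span: one answer whose surface form is split
# across two discontiguous spans in the passage.
multi_span = {
    "question": "Which offer of Triangle-Transit is most used by students?",
    "answers": [["scheduled", "bus service"]],  # 1 answer x 2 spans
}

assert len(multi_answer["answers"]) == 2 and len(multi_span["answers"]) == 1
```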
Considering the differences between *multianswer* and *multi-span*, we suggest carefully distinguishing between these two terms in the future. ## B Annotation Details Dataset Statistics We report more statistics of the annotated datasets in Table 9. MultiSpanQA has the largest average number of answers since it is a dataset designed especially for multi-answer questions. The answers in MultiSpanQA are generally longer than those in DROP and Quoref because many of the answers in MultiSpanQA are long descriptive phrases or clauses instead of short entities. For all three datasets, the distances between answers are large. This indicates that the answers to a large proportion of the questions are discontiguous in the passages, demonstrating the difficulty of multi-answer MRC. | Dataset | DROP | Quoref | MultiSpanQA | |-----------------------|--------|----------|---------------| | Length of Question | 9.4 | 15.5 | 9.0 | | Length of Context | 214.7 | 326.0 | 219.9 | | Length of Answer | 1.9 | 1.6 | 3.1 | | #Answers | 1.2 | 1.1 | 1.9 | | #Answers (Multi) | 2.5 | 2.4 | 2.9 | | Distance Between Ans. | 30.5 | 17.3 | 10.3 | Table 9: Dataset Statistics, including the (a) average length (in words) of questions, contexts, and answers, (b) the average number of answers for all the instances and the multi-answer ones, (c) the average distances (in words) between answers. Pre-defined Clue Words Here, we list the predefined clue words in the first stage of annotation: - Numerals, including cardinals and ordinals - Comparatives and superlatives - The word or, as an indicator of alternative questions. - Other words, including only, last, single, *name* of the person, and, top. Selection of Annotators A total of 10 graduates proficient in English participated in our annotation task. We first provided training materials to the annotators and asked them to annotate 100 sample instances. Based on their annotation accuracy on the sample instances, six of them are qualified to continue annotating the remaining instances. The annotators are paid $10 per hour, which is adequate given the participants' demographic. The annotators are informed of how the data would be used. Examples of Bad Annotations In Table 10, we present several examples we marked as bad-annotation. Common reasons for bad annotations including incorrect segmentation of answers, irrelevant answers, and duplicate answers. ## C Additional Analyses On Annotation Results We report more statistics of the annotation results in Table 11 and Table 12, and conduct additional analyses from the perspective of the number of answers. For multi-answer instances, passage-dependent questions account for the largest proportion, followed by with-clue-word. As for the single-answer instances in DROP and Quoref, they tend to be question-dependent, while in MultiSpanQA most of them are passage-dependent. In terms of the clue words in the with-clue-word questions, cardinal numbers are more common in multi-answer questions while other types of clue words are more likely to appear in single-answer questions. ## D Experimental Setup D.1 Implementation Details We use base-size models for our main experiments for sake of energy savings. Since T5-base has twice as many parameters as RoBERTa-base and BART-base, we did not use it to ensure fair comparisons. 
We carefully tune each model on the | Type | Example | Explanation | |------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------| | Incorrect segmentation | Source: DROP | | | of answers | Question: Which event occurred first, Duke Magnus Birgersson started a war or Erik Klipping gathered a large army? Annotated Answers: Duke Magnus Birgersson; started a war | The correct answer Duke Magnus Birgersson started a war is wrongly split into two spans, Duke Magnus Birgersson and started a war. | | Irrelevant Answers | Source: DROP Question: Who scored first in the second half of the game, Cowboys or 49ers? Annotated Answers: end of the half; San Francisco scored; making the score 28-14 | All three annotated answers are not related to the questions. A correct answer should be either Cowboys or 49ers. | | Duplicate answers | Source: MultiSpanQA Question: who benefited by title ix of the education amendments Annotated Answers: women; women playing college sports | One annotead answer, women is duplicated with the other, women playing college sports. | Table 10: Examples and explanations of bad-annotation cases. | #Ans | p-dep. | q-dep. | | | |-------------|----------|----------|-----|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | w/-clue | w/o-clue | | | | | DROP | | | | | | 1 | 480 | 2,085 | 37 | | | 2 | 209 | 105 | 1 | | | 3 | 74 | 11 | 0 | | | >3 | 63 | 3 | 0 | | | Quoref | | | | | | 1 | 582 | 1,548 | 65 | | | 2 | 82 | 62 | 0 | | | 3 | 28 | 23 | 0 | | | >3 | 19 | 6 | 0 | | | MultiSpanQA | | | | | | 1 | 448 | 56 | 140 | | | 2 | 300 | 32 | 22 | | | 3 | 131 | 14 | 2 | | | >3 | 112 | 19 | 0 | #Ans Alternative Cardinal Comp./Super. Ordinal Others DROP 1 1,213 3 1,293 588 132 2 1 97 4 3 3 3 0 11 0 0 0 >3 0 2 1 1 0 Quoref 1 0 1 25 35 1,492 2 0 55 0 0 7 3 0 21 0 0 2 >3 0 6 0 0 0 MultiSpanQA 1 2 1 14 25 14 2 0 21 5 1 5 3 0 12 2 0 0 >3 0 17 2 0 0 Table 12: Distribution of clue word types in three datasets according to the number of answers. | training set and report its best performance on the validation set. We use an NVIDIA A40 GPU for experiments. A training step takes approximately 0.5s for RoBERTa-base and 0.2s for BART-base. We describe the implementation details of different models here. T**AGGING** We use the implementation by Segal et al. (2020) 3. We use the IO tagging variant, which achieves the best overall performance according to the original paper. We adopt the best-performing 3https://github.com/eladsegal/ tag-based-multi-span-extraction hyperparameters provided by the original paper. NUMPRED Because the implementation by the original paper (Hu et al., 2019) 4 does not support RoBERTa, we re-implement the model with Huggingface Transformers (Wolf et al., 2020) 5. We use the representation of the first token in the input sequence for answer number classification. The maximum number of answers of the classifier is 8. The batch size is 12. The number of training epochs is 10. The learning rate is 3e-5. The maximum sequence length is 512. 
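For concreteness, the answer-number classification branch described above can be sketched as follows, assuming HuggingFace Transformers and PyTorch. The span-extraction branch, the training loop, and the exact mapping from class index to answer count are omitted or simplified, so this is an illustration rather than the authors' re-implementation; at inference time, the predicted count is what restricts the top-k span selection.

```python
import torch
from torch import nn
from transformers import RobertaModel, RobertaTokenizerFast

MAX_NUM_ANSWERS = 8  # the classifier predicts between 1 and 8 answers

class AnswerNumberClassifier(nn.Module):
    """Predicts the number of answers from the first input token's representation."""
    def __init__(self, model_name: str = "roberta-base"):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained(model_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, MAX_NUM_ANSWERS)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        first_token = hidden[:, 0]      # representation of the first token (<s>)
        return self.head(first_token)   # logits over possible answer counts

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = AnswerNumberClassifier()
batch = tokenizer(["who played laura horton on days of our lives"],
                  ["... the role was originated by actress Floy Dean ..."],
                  truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():                   # untrained here; shown only for the interface
    logits = model(batch["input_ids"], batch["attention_mask"])
predicted_num_answers = logits.argmax(dim=-1).item() + 1  # assume class 0 means one answer
```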
I**TERATIVE** Our implementation is based on the scripts of MRC implemented by Huggingface. Dur-4https://github.com/huminghao16/MTMSN 5https://github.com/huggingface/transformers ing training, the order of answers for each iteration is determined by their order of position in the passage. The batch size is 8. The number of training epochs is 8. The learning rate is 3e-5. The maximum sequence length is 384. During inference, the beam size is set to 3 and the length penalty is set to 0.7. The maximum length of answers is 10. G**ENERATION** Our implementation is based on the scripts of sequence generation implemented by Huggingface. The batch size is 12. The learning rate is 3e-5. The number of training epochs is 10. The maximum input length is 384. The maximum output length is 60. ## D.2 Evaluation Metrics Here, we describe the evaluation metrics used in our experiments, which are the official ones used by MultiSpanQA (Li et al., 2022). The metrics consist of two part: exact match and partial match. Exact Match An exact match occurs when a prediction fully matches one of the ground-truth answers. We use micro-averaged precision, recall, and F1 score for evaluation. Partial Match For each pair of prediction pi and ground truth answer tj , the partial retrieved score s ret ij and partial relevant score s rel ij are calculated as the length of the longest common substring (LCS) between pi and tj , divided by the length of pi and tj respectively, as: - $s_{ij}^{ret}=\dfrac{\text{len(LCS}(p_i,t_j))}{\text{len}(p_i)}$ $s_{ij}^{rel}=\dfrac{\text{len(LCS}(p_i,t_j))}{\text{len}(t_j)}$ these are $u$-reactions and $j$. Suppose there are n predictions and m ground truth answers for a question. We compute the partial retrieved score between a prediction and all answers and keep the highest one as the retrieved score of that prediction. Similarly, for each ground truth answer, the relevant score is the highest one between it and all predictions. The precision, recall, and F1 are finally defined as follows: $\text{Precision}=\dfrac{\sum_{i=1}^n\max_{j\in[1,m]}\bigl(s_{ij}^{rel}\bigr)}{n}$ $\text{Recall}=\dfrac{\sum_{j=1}^m\max_{i\in[1,n]}\bigl(s_{ij}^{rel}\bigr)}{m}$ $\text{F1}=\dfrac{2*\text{Precision}*\text{Recall}}{\text{Precision}+\text{Recall}}$ We use micro-averaged scores for these metrics. ## E Additional Experiment Results E.1 Late Ensemble By late ensemble, we aggregate the outputs from models of different paradigms to boost performance. We experiment with a simple voting strategy. If a span is predicted as an answer by more than one model, we add it to the final prediction set. If a span is part of another span, we consider them equivalent and take the longer one. In rare cases where the four models predict totally different answers, we add them all to the final prediction set. Our voting strategy leads to improvements of 1.0%, 1.2%, and 1.3% in PM F1 on DROP, Quoref, and MultiSpanQA, respectively, over the bestperforming models in Table 5. Yet, this strategy might discard many correct answers. In the future, we can explore more sophisticated strategies. For example, similar to the idea of Mixture of Experts (Jacobs et al., 1991), the system can evaluate the probability that the instance belongs to a certain category and then adjust the weight of the model based on its capabilities in this category. ## E.2 Early Fusion In Table 13, we report the performance of different strategies for early fusion on different types of instances. 
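For reference, the partial-match scores defined in Appendix D.2 can be computed per question roughly as follows. This is a simplified sketch that assumes word-level longest common substrings and omits the corpus-level micro-averaging, so the official MultiSpanQA evaluation script should be preferred for reported numbers.

```python
def lcs_length(a, b):
    """Length of the longest common contiguous subsequence of two token lists."""
    best, dp = 0, [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        prev = 0
        for j in range(1, len(b) + 1):
            cur = dp[j]
            dp[j] = prev + 1 if a[i - 1] == b[j - 1] else 0
            best = max(best, dp[j])
            prev = cur
    return best

def partial_match_prf(predictions, references):
    """Per-question partial-match precision/recall/F1 over word-tokenized spans."""
    preds = [p.split() for p in predictions if p.split()]
    refs = [r.split() for r in references if r.split()]
    if not preds or not refs:
        return 0.0, 0.0, 0.0
    ret = [[lcs_length(p, r) / len(p) for r in refs] for p in preds]  # s_ij^ret
    rel = [[lcs_length(p, r) / len(r) for r in refs] for p in preds]  # s_ij^rel
    precision = sum(max(row) for row in ret) / len(preds)             # best match per prediction
    recall = sum(max(rel[i][j] for i in range(len(preds)))
                 for j in range(len(refs))) / len(refs)               # best match per gold answer
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(partial_match_prf(["Floy Dean", "Susan Flannery Smith"],
                        ["Floy Dean", "Susan Flannery"]))  # approximately (0.83, 1.00, 0.91)
```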
In Table 14, we list the prompts used for our pilot study on GPT-3.5. ## F Licenses Of Scientific Artifacts The license for Quoref and DROP is CC BY 4.0. The license for HuggingFace Transformers is Apache License 2.0. Other datasets and models provide no licenses. | BART-base | BART-large | | | | | | | | |----------------------|--------------|----------|--------|---------|----------|-------|-------|-------| | Model | p-dep. | q-dep. | p-dep. | q-dep. | | | | | | All | w/-clue | w/o-clue | All | w/-clue | w/o-clue | | | | | DROP | | | | | | | | | | Vanilla GENERATION | 72.18 | 74.77 | 76.19 | 72.62 | 78.57 | 81.65 | 83.42 | 77.31 | | +NUMPRED (multitask) | 72.45 | 75.37 | 76.80 | 70.58 | 80.35 | 84.24 | 86.06 | 77.88 | | +NUMPRED (pipeline) | 70.58 | 75.72 | 77.14 | 76.77 | 76.79 | 82.66 | 84.34 | 79.65 | | +ITERATIVE (normal) | 71.82 | 74.55 | 75.97 | 68.26 | 78.07 | 81.90 | 83.56 | 74.03 | | +ITERATIVE (number) | 71.90 | 75.27 | 76.66 | 72.93 | 80.58 | 83.05 | 84.91 | 72.66 | | Quoref | | | | | | | | | | Vanilla GENERATION | 66.31 | 77.41 | 78.38 | 52.63 | 76.41 | 88.51 | 88.90 | 79.76 | | +NUMPRED (multitask) | 67.54 | 79.37 | 80.15 | 58.30 | 77.73 | 87.88 | 88.11 | 82.35 | | +NUMPRED (pipeline) | 66.26 | 82.88 | 83.55 | 65.20 | 75.37 | 87.71 | 88.22 | 77.36 | | +ITERATIVE (normal) | 69.40 | 82.68 | 83.24 | 68.24 | 73.13 | 87.43 | 87.93 | 77.20 | | +ITERATIVE (number) | 65.79 | 77.16 | 77.97 | 55.63 | 77.69 | 88.09 | 88.59 | 75.41 | | MultiSpanQA | | | | | | | | | | Vanilla GENERATION | 80.57 | 78.05 | 81.73 | 75.85 | 84.52 | 84.96 | 88.78 | 81.65 | | +NUMPRED (multitask) | 81.08 | 78.65 | 81.09 | 77.73 | 84.83 | 84.80 | 89.66 | 81.06 | | +NUMPRED (pipeline) | 60.24 | 63.56 | 68.67 | 58.25 | 67.27 | 71.21 | 74.53 | 69.33 | | +ITERATIVE (normal) | 80.46 | 78.06 | 81.81 | 74.78 | 83.16 | 81.78 | 84.84 | 80.87 | | +ITERATIVE (number) | 80.15 | 79.63 | 83.47 | 76.17 | 83.49 | 83.06 | 86.08 | 80.44 | Table 13: The performance (PM F1) of different strategies for early fusion on different types of instances. p-dep. denotes passage-dependent. q-dep. denotes question-dependent. Vanilla GPT-3.5 Answer the question based on the given context. Each question has more than one answer. Please give all the answers and separate them with a semicolon. Context: Laura Horton is a fictional character from the NBC soap opera , Days of Our Lives , a long - running serial drama about working class life in the fictional , United States town of Salem . Created by writer Peggy Phillips , the role was originated by actress Floy Dean on June 30 , 1966 till October 21 , 1966 . Susan Flannery stepped into the role from November 22 , 1966 to May 27 , 1975 . Susan Oliver briefly stepped into the role from October 10 , 1975 , to June 9 , 1976 , followed by Rosemary Forsyth from August 24 , 1976 , to March 25 , 1980 . Question: who played laura horton on days of our lives Answers: Floy Dean; Susan Flannery; Susan Oliver; Rosemary Forsyth Following the example above and answer the following multi-answer question. Please give all the answers and separate them with a semicolon. Context: {context} Question: {question} Answers: GPT-3.5 + NUMPED Answer the question based on the given context. Each question has more than one answer. Please predict the number of answers first, then give all the answers and separate them with a semicolon. 
Context: Laura Horton is a fictional character from the NBC soap opera , Days of Our Lives , a long - running serial drama about working class life in the fictional , United States town of Salem . Created by writer Peggy Phillips , the role was originated by actress Floy Dean on June 30 , 1966 till October 21 , 1966 . Susan Flannery stepped into the role from November 22 , 1966 to May 27 , 1975 . Susan Oliver briefly stepped into the role from October 10 , 1975 , to June 9 , 1976 , followed by Rosemary Forsyth from August 24 , 1976 , to March 25 , 1980 . Question: who played laura horton on days of our lives Answers: The number of answers is 4: Floy Dean; Susan Flannery; Susan Oliver; Rosemary Forsyth Following the example above and answer the following multi-answer question. Please predict the number of answers first, then give all the answers and separate them with a semicolon. Context: {context} Question: {question} Answers: Table 14: The one-shot prompts for GPT-3.5 to answer multi-answer questions in MultiSpanQA. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? The Limitation Section ✗ A2. Did you discuss any potential risks of your work? The dataset annotation and the methods in this work do not pose any ethical or security-related risks. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4, 5 ✓ B1. Did you cite the creators of artifacts you used? Section 4, 5 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix F ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 4 ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Our annotation is a categorization of the instances in previously published datasets, which have been peer-reviewed and are publicly available. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4, Appendix B ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4, Appendix B ## C ✓ **Did You Run Computational Experiments?** Section 5, 6 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 5, Appendix D The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. 
Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5, Appendix D ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5, Appendix D ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 5, Appendix D D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 4 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? The instructions are stated in Section 4.2. The dataset in this work do not pose any ethical or security-related risks. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Appendix B ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Appendix B D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Ethics review is not required for this research in the country where this work is carried out. We carefully checked that there are no ethical problems in our research. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Appendix B
kementchedjhieva-chalkidis-2023-exploration
An Exploration of Encoder-Decoder Approaches to Multi-Label Classification for Legal and Biomedical Text
https://aclanthology.org/2023.findings-acl.360
Standard methods for multi-label text classification largely rely on encoder-only pre-trained language models, whereas encoder-decoder models have proven more effective in other classification tasks. In this study, we compare four methods for multi-label classification, two based on an encoder only, and two based on an encoder-decoder. We carry out experiments on four datasets (two in the legal domain and two in the biomedical domain, each with two levels of label granularity) and always depart from the same pre-trained model, T5. Our results show that encoder-decoder methods outperform encoder-only methods, with a growing advantage on more complex datasets and labeling schemes of finer granularity. Using encoder-decoder models in a non-autoregressive fashion, in particular, yields the best performance overall, so we further study this approach through ablations to better understand its strengths.
# An Exploration Of Encoder-Decoder Approaches To Multi-Label Classification For Legal And Biomedical Text Yova Kementchedjhieva∗**Ilias Chalkidis**∗ Department of Computer Science, University of Copenhagen, Denmark {yova,ilias.chalkidis}[at]di.ku.dk ## Abstract Standard methods for multi-label text classification largely rely on encoder-only pretrained language models, whereas encoderdecoder models have proven more effective in other classification tasks. In this study, we compare four methods for multi-label classification, two based on an encoder only, and two based on an encoder-decoder. We carry out experiments on four datasets—two in the legal domain and two in the biomedical domain, each with two levels of label granularity— and always depart from the same pre-trained model, T5. Our results show that encoder-decoder methods outperform encoderonly methods, with a growing advantage on more complex datasets and labeling schemes of finer granularity. Using encoder-decoder models in a non-autoregressive fashion, in particular, yields the best performance overall, so we further study this approach through ablations to better understand its strengths. ## 1 Introduction Multi-label classification constitutes the task of predicting multiple labels for an input as opposed to a single (possibly binary) one. The labels are drawn from a set of up to several hundred classes, often with the added challenge of class imbalance. While the order in which labels are predicted is irrelevant, there can be interdependence between subsets of labels. The task is commonly approached with a classification model based on a pre-trained encoder followed by a multi-output classification head. Encoder-decoder models, like T5 (Raffel et al., 2020), have taken over recent NLP literature with state-of-the-art results on various tasks, such as question-answering (QA), summarization, singlelabel classification, etc. Raffel et al. (2020) showed that any given NLP task could be reformulated as a *text-to-text* task and solved with conditional ∗ Equal contribution. generation, i.e., generating a text sequence that represents the desired output, be that a span of text in QA, a text summary, a label descriptor, etc. Liu et al. (2021) presented an alternative use of encoder-decoder models for classification tasks in particular, wherein T5's decoder is used in a nonautoregressive fashion to obtain output representations, which are then fed to a classification head. The application of encoder-decoder methods to multi-label classification is currently limited to one experiment in the work of Liu et al. (2021), who compare a text-to-text approach and their nonautoregressive approach on a single dataset, including an encoder-only baseline built off of a different pre-trained model, BERT (Devlin et al., 2019). They obtain results favorable to the two encoderdecoder methods, but since the focus of their work is not multi-label classification in particular, their evaluation is insufficient to draw hard conclusions about this task, and analysis on the contribution of different model components to performance on the task is missing altogether. In this work, we carry out an extensive study of encoder-decoder approaches to multi-label classification. To ensure the thorough and fair evaluation of all methods: (a) We experiment on four datasets from two different domains (legal and biomedical), each with two levels of label granularity. 
(b) We include four methods for multi-label classification, two encoder-only methods and two encoder-decoder methods. (c) We conduct preliminary development to determine the best configuration for the application of each method, e.g. choice of label descriptors for the text-to-text approach. (d) We explore how model size affects performance, by fine-tuning small, base, and large T5 models. (e) We ablate components of the best performing approach, the non-autoregressive encoderdecoder method of Liu et al. (2021), to better understand its strengths. We release our code base to assure reproducibility and let others extend our study by experimenting with new methods and more datasets.1 ## 2 Related Work Class imbalance is a critical issue in multi-label classification, with researchers searching for the best method to handle rare (less represented) labels. Encoder-only Approaches Snell et al. (2017) introduced the idea of a *prototype* label vector, obtained by averaging over all instances of a given class and used to add inductive bias to their Prototypical Network for multi-label classification. In a similar vein, Mullenbach et al. (2018) developed the Label-Wise Attention Network (LWAN) architecture, in which label-wise document representations are obtained by learning to attend to the most informative input words for each label, using trainable label vectors as keys. Chalkidis et al. (2020) systematically studied the effects of different language encoders (CNNs, BIGRUs, BERT) and several variants of LWAN with regards to the representation of prototype labels. Experimenting with three datasets (EURLEX, MIMIC-III, and AMAZON), they showed that better language encoders counter-play the positive effect of the LWAN module, i.e., a standard BIGRU classifier outperforms CNN-based LWANs (Mullenbach et al., 2018), and a standard BERT outperforms BIGRU-LWAN, respectively. Moreover, BERT-based LWANs offer minor overall improvements compared to a vanilla BERT classifier, wherein BERT's CLS token representation is passed to a classification head (Devlin et al., 2019). Chalkidis et al. (2021) were the first to explore the use of a T5 model for multi-label classification, although they only considered an encoder-only classifier, disregarding the model's decoder. They followed the now standard approach of a classification head on top of the </s> token representation. In experiments with mT5 (Xue et al., 2021), they showcased improved results compared to XLMR (Conneau et al., 2020) on a newly introduced multilingual dataset, MultiEURLEX. 1https://github.com/coastalcph/ Multi-Label-Classification-T5 Encoder-Decoder Approaches Text-to-text approaches, which utilize the full encoder-decoder model, have proven effective for binary and singlelabel classification tasks (Raffel et al., 2020; Chung et al., 2022). The key to such approaches are label verbalizers, words in natural language which verbalize the underlying semantics of a given class. Label verbalizers are represented in the embedding space of pre-trained models and in this way benefit from the model pre-training. This can be more optimal especially for few- and zero-shot labels, in comparison to head-based classification methods where randomly initialized parameters have to be learned from scratch. Liu et al. (2021) presented an alternative use of the full T5 model for non-autoregressive tasks, e.g. 
single-label and multi-label classification, wherein the decoder is used to obtain label-wise representations informed by the input document, which in turn are fed to label-specific binary classification heads. Liu et al. (2021) performed one set of experiments on the EURLEX-57K dataset (Chalkidis et al., 2019), in which they compared their non-autoregressive approach to a T5-based text-to-text approach and a standard BERT-based classifier. They found that both T5-based approaches outperformed the encoder-only classifier, the non-autoregressive method performing best. Nonetheless, the encoder-only classifier had less than half the parameters of the T5 model (110M vs 222M). Encoder-decoder approaches thus seem to carry potential for multi-label classification, still with insufficient empirical evidence, however.

## 3 Methods

We experiment with four methods for multi-label classification, Encoder+Head, LWAN, *Seq2Seq*, and *T5Enc*, basing their implementation on the T5 model (Raffel et al., 2020). T5 is a transformer-based encoder-decoder model (Vaswani et al., 2017), which encodes a string of input tokens and generates a string of output tokens. All methods discussed below use T5's encoder to represent input documents, a document being denoted as [x1, x2, . . . , xN], where N is the document length in terms of T5 subword tokens. Some methods further use the model's decoder; we introduce decoder notation where needed.

![2_image_0.png](2_image_0.png)

Encoder+Head In this case, we use only the encoder of T5 in the standard classification setting, as introduced by Devlin et al. (2019). We feed the document to the encoder, and use the representation of the special </s> token as the document representation (d ∈ IRdim). This representation is passed to L standard classification heads, one per label.

LWAN In this case, we use a Label-Wise Attention Network (LWAN) (Mullenbach et al., 2018) on top of the T5 encoder, as done in Chalkidis et al. (2020). We feed the document to the encoder, and use one attention head per label to generate L label-wise document representations dl ∈ IRdim, i.e., L weighted averages of the contextualized token representations. Intuitively, each head focuses on possibly different tokens of the document relevant to the corresponding label. LWAN employs L linear layers (ol ∈ IRdim×1), each operating on a different label-wise document representation dl, to produce L scores (logits), one per label.

Seq2Seq In this case, we use T5 for conditional generation, which is its standard form of use, since T5 was trained in an autoregressive fashion. The target labels are formatted as a sequence of label descriptors, separated by a comma and a space, and ordered alphabetically, e.g., 'EU, finance'. We feed the document to the encoder and use the decoder to generate the tokenized output sequence, [s1, s2, . . . , sM]. When we evaluate the trained model's performance at inference time, we split the generated sequences using a comma as the delimiter, keep only valid label descriptors, and treat them as a set (since their order does not matter for the task). We consider different options for the label descriptors, discussed in Section 5.2.

T5Enc In this case, we follow the work of Liu et al.
(2021), where they use T5 in a non-autoregressive fashion.2 We feed the document to the encoder, and use the decoder in a non-autoregressive fashion, where its inputs are fixed (pre-populated), i.e., we feed the decoder with single-token label descriptors, [d1, d2, ..., dL], where L is the size of the full label set. We then use a binary classification head (ol ∈ IRdim×1) per decoder output representation to produce L scores, one per label. This method can be seen as an advanced version of the LWAN method, which builds label-wise representations (dl) via attention. In this case, however, these representations are further co-attended (conditioned) via the standard decoder self-attention across many decoder layers.

2We keep the name T5Enc, as coined by the authors, for consistency, although the model actually uses both the encoder and the decoder of T5.

## 4 Datasets

We experiment with four datasets from the legal and biomedical domains, each with two different label granularities, i.e., label sets including more abstract or more specialized concepts.

UKLEX United Kingdom (UK) legislation is publicly available as part of the United Kingdom's National Archives.3 Most of the laws have been categorized in thematic categories (e.g., healthcare, finance, education, transportation, planning), which are stated in the document preamble and are used for archival indexing purposes. The UKLEX dataset (Chalkidis and Søgaard, 2022) comprises 36.5k UK laws. The dataset is chronologically split in training (20k, 1975–2002), development (8k, 2002–2008), and test (8.5k, 2008–2018) sets.

3https://www.legislation.gov.uk/

EURLEX European Union (EU) legislation is published on the EUR-Lex website. All EU laws are annotated by EU's Publications Office with multiple concepts from EuroVoc, a thesaurus maintained by the Publications Office.4 EuroVoc has been used to index documents in systems of EU institutions. We use the English part of the dataset of Chalkidis et al. (2021), which comprises 65k EU laws (documents). The dataset is chronologically split in training (55k, 1958–2010), development (5k, 2010–2012), and test (5k, 2012–2016) sets. It supports four different label granularities. We use the 1st and 2nd level of the EuroVoc taxonomy.

BIOASQ The BIOASQ (Task A) dataset consists of biomedical articles from PubMed,5 annotated with concepts from the Medical Subject Headings (MeSH) taxonomy (Tsatsaronis et al., 2015; Nentidis et al., 2021).6 MeSH is a hierarchically-organized vocabulary produced by the National Library of Medicine. The current version of MeSH contains more than 29k concepts referring to various aspects of biomedical research (e.g., diseases, chemicals and drugs). It is primarily used for indexing, cataloging, and searching of biomedical and health-related information. We subsample 100k documents from the period 2000–2021 in the latest version (v.2022) of the dataset, and split those chronologically for training (80k, 1964–2015), development (10k, 2015–2018), and testing (10k, 2018–2020). We use the 1st and 2nd levels of the MeSH taxonomy.

MIMIC-III The MIMIC-III dataset (Johnson et al., 2017) contains approximately 50k discharge summaries from US hospitals. Each summary is annotated with one or more codes (labels) from the ICD-9 hierarchy, which has eight levels in total.7 The International Classification of Diseases, Ninth Revision (ICD-9) is the official system of assigning codes to diagnoses and procedures associated with hospital utilization in the United States.
Documents in MIMIC-III have been anonymized to protect patient privacy, including chronological information (e.g., entry/discharge dates). Hence, it is not possible to split the data chronologically, so we split it randomly into train (30k), development (10k), and test (10k) sets. We use the 1st and 2nd level of the ICD-9 hierarchy.

| Dataset | Size | \|L1\| | L/D | T/L | \|L2\| | L/D | T/L |
|-----------|-------|--------|-----|-----|--------|------|-----|
| UKLEX | 36.5k | 18 | 1.2 | 2.1 | 69 | 1.5 | 1.7 |
| EURLEX | 65k | 21 | 3.2 | 2.4 | 127 | 4.5 | 2.9 |
| BIOASQ | 100k | 16 | 5.6 | 3.4 | 116 | 8.9 | 4.0 |
| MIMIC-III | 50k | 19 | 6.0 | 7.8 | 184 | 10.1 | 8.4 |

Table 1: Summary of datasets in terms of size, number of labels on Level 1 (|L1|) and 2 (|L2|), average number of gold labels per document (L/D), and average number of tokens per label (T/L) in the T5 vocabulary.

All four datasets come with label descriptors, e.g., 'Agriculture & Food', 'Immigration & Citizenship' (UKLEX), and 'Chemicals and Drugs', 'Skin and Connective Tissue Diseases' (BIOASQ).8 More details about the datasets are provided in Table 1. Notice that Level 2 label sets are considerably larger than Level 1 label sets, and that the number of label assignments per document does not grow proportionately from Level 1 to Level 2, which means Level 2 labels have less representation on average.

## 5 Experiments

## 5.1 Experimental Setup

We use the original checkpoints of T5 released by Raffel et al. (2020) from the Hugging Face Hub.9 Following Raffel et al., for all four methods we use the Adafactor optimizer (Shazeer and Stern, 2018) with a fixed learning rate of 1e-4 after warmup for one epoch.10 Seq2Seq models are trained with teacher forcing. We report results in terms of micro-F1 (µ-F1) and macro-F1 (m-F1) scores, the former being more indicative of performance on well-represented labels and the latter of performance on rare labels. When fine-tuning models, we use early stopping based on validation micro-F1 scores. We run each experiment with 4 seeds, and report the mean and standard deviations across runs.

## 5.2 Preliminary Experiments

LWAN - Number of attention heads Previous work which employed the LWAN approach always used a single attention head in the label-wise attention mechanism. Here, we experiment with N ∈ [1, 4, 6, 12]. In Table 2, we report results on two datasets, UKLEX (L1) with 18 labels, and EURLEX (L2) with 127 labels. We observe that in the case of UKLEX (L1) increasing the number of attention heads does not improve results, while in the case of EURLEX (L2) it harms performance. It appears that the added expressivity from multi-head attention is either not needed, or it is not easily utilized, since it adds more randomly initialized parameters which have to be learned from scratch. In subsequent experiments, we thus use the standard single-head attention mechanism.

| No. Heads | UKLEX (L1) | | EURLEX (L2) | |
|-----------|------------|------|-------------|------|
| | µ-F1 | m-F1 | µ-F1 | m-F1 |
| N=1 | 83.3 ± 0.2 | 79.3 ± 0.7 | 76.3 ± 0.3 | 55.5 ± 0.8 |
| N=4 | 82.8 ± 0.3 | 78.1 ± 0.7 | 75.1 ± 0.1 | 51.7 ± 2.1 |
| N=6 | 83.2 ± 0.3 | 79.3 ± 0.5 | 75.1 ± 0.3 | 54.1 ± 0.6 |
| N=12 | 83.0 ± 0.4 | 78.8 ± 1.4 | 75.2 ± 0.3 | 53.0 ± 1.2 |

Table 2: Number of attention heads for LWAN.
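To make the label-wise attention concrete, the following is a minimal PyTorch sketch of a single-head LWAN layer of the kind evaluated above. It is our own illustrative re-implementation based on the description in Section 3, not the authors' released code, and all tensor and parameter names are our own.

```python
import torch
import torch.nn as nn

class LabelWiseAttention(nn.Module):
    """Single-head label-wise attention over encoder outputs (LWAN-style sketch)."""

    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        # One trainable attention query per label (the label key vectors).
        self.label_queries = nn.Parameter(torch.randn(num_labels, hidden_dim))
        # One scoring vector per label, applied to that label's document vector.
        self.label_scorers = nn.Parameter(torch.randn(num_labels, hidden_dim))
        self.bias = nn.Parameter(torch.zeros(num_labels))

    def forward(self, token_states: torch.Tensor, attention_mask: torch.Tensor):
        # token_states: (batch, seq_len, hidden_dim) contextualized T5 encoder outputs
        # attention_mask: (batch, seq_len), 1 for real tokens and 0 for padding
        scores = torch.einsum("bsh,lh->bls", token_states, self.label_queries)
        scores = scores.masked_fill(attention_mask.unsqueeze(1) == 0, float("-inf"))
        attn = torch.softmax(scores, dim=-1)           # (batch, num_labels, seq_len)
        label_docs = torch.bmm(attn, token_states)     # (batch, num_labels, hidden_dim)
        logits = (label_docs * self.label_scorers).sum(dim=-1) + self.bias
        return logits                                  # (batch, num_labels)
```

In the multi-label setting, the per-label logits produced by such a layer would typically be trained with a binary cross-entropy loss, one output per label.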
Seq2Seq - Form of Label Descriptors We consider three alternative forms of label descriptors: (a) the *original* label descriptors, which may include complex multi-word expressions, e.g., 'Anthropology, Education, Sociology, and Social Phenomena'; (b) *simplified* versions of the original label descriptors, manually curated to consist of single-token expressions (as per the T5 vocabulary), e.g., 'Anthropology' for the example above; and (c) *numbers* arbitrarily assigned to labels, e.g., '1'. In Table 3, we present results on two datasets, UKLEX (L1), where the original label descriptors are mostly single-word expressions that map onto T5 sub-word tokens, and MIMIC (L1), where the original label descriptors are multi-word expressions which are further tokenized into subwords. We observe mixed rankings between the three forms of label descriptors across different metrics and datasets, with a slight advantage for the lexical forms over the arbitrary numerical one. This is in line with the intuition that the semantics of the label descriptors contribute to the learning of the task. In subsequent experiments, we use the original label descriptors across all datasets.

| Label | UKLEX (L1) | | MIMIC (L1) | |
|------------|------------|------|------------|------|
| | µ-F1 | m-F1 | µ-F1 | m-F1 |
| Original | 84.2 ± 0.0 | 81.6 ± 0.2 | 73.2 ± 0.0 | 70.2 ± 0.2 |
| Simplified | 84.8 ± 0.2 | 78.7 ± 0.3 | 73.1 ± 0.1 | 70.1 ± 0.1 |
| Numbers | 83.8 ± 0.2 | 80.2 ± 0.7 | 73.3 ± 0.1 | 69.7 ± 0.2 |

Table 3: Form of label descriptors for Seq2Seq.

Seq2Seq - Greedy Decoding vs. Beam Search Raffel et al. (2020) suggested using greedy decoding for single-label classification tasks but also found beam search decoding (N=4) to work better for tasks with long output sequences, as is the case in multi-label classification. In Table 4, we compare the two decoding strategies on UKLEX (L1) and MIMIC (L1). We find that the choice of decoding strategy has little effect on performance, likely because the output space in these tasks is constrained to a fixed set of valid labels, in a single permissible (alphabetical) order. In subsequent experiments, we use beam search (N=4), as it performs slightly better on average.

| Decoding | UKLEX (L1) | | MIMIC (L1) | |
|----------|------------|------|------------|------|
| | µ-F1 | m-F1 | µ-F1 | m-F1 |
| Greedy | 84.3 ± 0.0 | 81.6 ± 0.2 | 72.9 ± 0.2 | 69.4 ± 0.4 |
| Beam | 84.2 ± 0.0 | 81.6 ± 0.2 | 73.2 ± 0.1 | 70.3 ± 0.2 |

Table 4: Greedy decoding vs. beam search for Seq2Seq.

T5Enc - Form of Label Descriptors We compare two forms of label tokens: lexical (using the simplified descriptors, as they have to be single tokens) and pseudo descriptors, where we introduce special tokens to the vocabulary of T5 (e.g., <label 1>). Results on UKLEX (L1) and MIMIC (L1) are presented in Table 5. We observe that results are comparable for UKLEX, while simplified label descriptors perform slightly better for MIMIC. In subsequent experiments, we thus use simplified label descriptors for Level 1 datasets.

| Label | UKLEX (L1) | | MIMIC (L1) | |
|------------|------------|------|------------|------|
| | µ-F1 | m-F1 | µ-F1 | m-F1 |
| Simplified | 84.8 ± 0.2 | 81.9 ± 0.5 | 73.6 ± 0.2 | 69.2 ± 1.5 |
| Pseudo | 84.8 ± 0.1 | 82.3 ± 0.2 | 73.2 ± 0.1 | 67.7 ± 1.9 |

Table 5: Form of label descriptors for T5Enc.
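For the pseudo-descriptor variant compared in Table 5, the special label tokens have to be added to T5's vocabulary before fine-tuning. The snippet below shows one way to do this with the Hugging Face transformers library; the exact token format (`<label_i>`) mirrors the example above, while the checkpoint name and the label count are placeholders, not details taken from the paper's code base.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# One pseudo descriptor per label, e.g. for a 127-label Level 2 label set.
num_labels = 127
pseudo_tokens = [f"<label_{i}>" for i in range(num_labels)]
tokenizer.add_tokens(pseudo_tokens, special_tokens=True)

# Grow the embedding matrix so the new tokens receive trainable
# (randomly initialized) embedding vectors.
model.resize_token_embeddings(len(tokenizer))
```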
For Level 2 datasets, | Method | UKLEX (L1) | EURLEX (L1) | BIOASQ (L1) | MIMIC (L1) | Average | | | | | | |----------|--------------|---------------|---------------|--------------|------------|------------|------------|------------|------|------| | µ-F1 | m-F1 | µ-F1 | m-F1 | µ-F1 | m-F1 | µ-F1 | m-F1 | µ-F1 | m-F1 | | | Enc+Head | 80.8 ± 0.5 | 77.2 ± 0.4 | 78.9 ± 0.4 | 67.9 ± 1.1 | 86.4 ± 0.0 | 76.8 ± 0.1 | 72.2 ± 0.2 | 66.3 ± 0.7 | 79.6 | 72.1 | | LWAN | 80.4 ± 0.3 | 76.6 ± 0.5 | 79.6 ± 0.4 | 68.4 ± 0.7 | 86.3 ± 0.1 | 77.2 ± 0.2 | 72.3 ± 0.3 | 66.8 ± 0.8 | 79.7 | 72.3 | | Seq2Seq | 79.6 ± 0.6 | 76.4 ± 0.6 | 78.8 ± 0.2 | 69.1 ± 0.3 | 86.0 ± 0.1 | 77.8 ± 0.2 | 72.9 ± 0.1 | 69.7 ± 0.2 | 79.3 | 73.3 | | T5Enc | 80.8 ± 0.4 | 77.1 ± 0.5 | 80.0 ± 0.3 | 70.5 ± 0.4 | 86.6 ± 0.0 | 77.9 ± 0.4 | 73.4 ± 0.3 | 68.8 ± 1.4 | 80.2 | 73.6 | | Method | UKLEX (L2) | EURLEX (L2) | BIOASQ (L2) | MIMIC (L2) | Average | | | | | | | µ-F1 | m-F1 | µ-F1 | m-F1 | µ-F1 | m-F1 | µ-F1 | m-F1 | µ-F1 | m-F1 | | | Enc+Head | 75.9 ± 0.5 | 64.9 ± 0.5 | 70.3 ± 0.2 | 48.2 ± 1.2 | 73.1 ± 0.0 | 60.1 ± 0.8 | 56.7 ± 0.6 | 22.3 ± 1.2 | 69.0 | 48.9 | | LWAN | 76.6 ± 0.2 | 65.0 ± 0.8 | 70.3 ± 0.3 | 49.0 ± 0.7 | 73.0 ± 0.1 | 59.7 ± 0.9 | 57.2 ± 0.4 | 24.2 ± 0.3 | 69.3 | 49.5 | | Seq2Seq | 75.3 ± 0.2 | 65.8 ± 0.4 | 70.6 ± 0.3 | 51.8 ± 1.0 | 73.8 ± 0.1 | 63.8 ± 0.1 | 57.4 ± 0.2 | 31.2 ± 1.7 | 69.3 | 53.2 | | T5Enc | 76.5 ± 0.3 | 66.8 ± 0.9 | 72.0 ± 0.2 | 53.2 ± 1.4 | 75.1 ± 0.1 | 66.0 ± 0.1 | 60.5 ± 0.1 | 31.1 ± 0.9 | 71.0 | 54.3 | we use pseudo labels, since we cannot manually curate simplified descriptors for hundreds of labels. | Encoder | UKLEX (L1) | BIOASQ (L2) | | | |-----------|--------------|---------------|------------|------------| | µ-F1 | m-F1 | µ-F1 | m-F1 | | | BERT | 84.4 ± 0.3 | 81.3 ± 0.9 | 71.7 ± 0.0 | 59.1 ± 0.0 | | RoBERTa | 84.3 ± 0.6 | 81.1 ± 1.1 | 73.0 ± 0.0 | 59.8 ± 0.0 | | T5 | 84.3 ± 0.3 | 80.7 ± 0.8 | 73.2 ± 0.1 | 60.8 ± 0.8 | Encoder-only Models Comparing encoder-only to encoder-decoder methods fro multi-label text classification in a fair manner is non-trivial since inherently encoder-only pre-trained models like BERT (Devlin et al., 2019), and RoBERTa (Liu et al., 2019) are trained on different data and with a different objective than the encoder-decoder model T5. Using T5's encoder for encoder-only methods circumvenes this problem but introduces another concern: that this encoder was trained in an encoder-decoder architecture and may thus be handicapped in comparison to encoders trained in an encoder-only architecture. In Table 7, we present development results on UKLEX (L1) and BIOASQ (L2) for encoder-only classifiers trained from BERT, RoBERTa and T5's encoder.11 We observe mixed results with BERT performing best on UKLEX (L1) and T5 performing best on EURLEX (L2), with absolute differences between the three models being relatively small and on average between the two datasets, favouring T5. We thus conclude that T5's encoder 11We use the prepended [CLS] token representation for BERT and RoBERTa. makes for a fair and strong encoder-only baseline and use it in subsequent experiments. ## 5.3 Main Results In Table 6, we present test results for all methods trained from T5-Base.12 The overall best performing approach is T5Enc, followed by Seq2Seq, LWAN and then Encoder+Head. The trend is thus for encoder-decoder approaches (T5Enc and Seq2Seq) to outperform encoder-only approaches (LWAN and then Encoder+Head), which use just half the model parameters. 
This result corroborates and considerably substantiates the observations of Liu et al. (2021). We gain further insights through a breakdown by metric and label granularity. The advantage of encoder-decoder methods can be especially seen across macro-F1 scores, where both T5Enc and Seq2Seq outperform encoder-only approaches almost categorically (the one exception being UKLEX (L1)). This indicates that encoderdecoder approaches are particularly good at assigning less frequent labels, which is a key challenge in multi-label classification. This reading of the results is further reinforced by the observation that the performance gap increases from Level 1 datasets, which contain a smaller number of labels, to Level 2 datasets, which contain more and thus on average less frequent labels. The most striking performance gap we observe measures 7 p.p. between LWAN and Seq2Seq on MIMIC (L2). Between the two encoder-decoder approaches, we see that the non-autoregressive use of the T5 decoder is more effective (T5Enc) than the conditional generation of labels (Seq2Seq), the gap 12We present development results in Table 11 in Appendix A for completeness. between the two methods growing from Level 1 to Level 2 datasets. In the case of T5Enc, the decoder serves to build representations for all labels relevant to a dataset and in this sense defines and constraints the output space for the task. Meanwhile, in the Seq2Seq approach the model has to learn the constraints on the output space during training, and as such it is likely more prone to errors. These main results give us a general idea of how the different approaches compare, indicating clearly that encoder-decoder approaches are superior. In subsequent sections we explore the source of performance and the limitations of encoderdecoder approaches further. ## 5.4 Model Capacity One possible explanation for the stronger performance of encoder-decoder methods is that they operate with twice as many parameters as encoderonly methods. Here, we test whether this alone is the source of their improved performance, by training models from different T5 models: small, base and large.13 Since we previously saw that trends in results are similar across L1 and L2 datasets, and more pronounced in the latter, we carry out this set of experiments on L2 datasets only. We include the stronger performing encoder-only approach, LWAN, as well as both encoder-decoder approaches. Results on the micro-F1 metric are presented in Figure 2, and on the macro-F1 metric in Figure 3 in Appendix A. 14 Firstly, we note that T5Enc consistently outperforms the other approaches across different model sizes, in line with earlier findings (see Table 6). We also see that all methods appear to scale, with steady improvements in performance observed across increasing model sizes. Comparing models of similar size (i.e., models with the same number of layers), we gain a more precise idea of how methods compare. Here, T5Enc still proves to be the superior approach, with T5Enc-Small outperforming LWAN-Base on 3 out of 4 datasets (UKLEX being the exception), and similarly T5Enc-Base outperforming LWANLarge on 3 out of 4 datasets. Notice that in these comparisons, the T5Enc variants are even at a disadvantage, having the same number of layers as the LWAN variants, but lower dimensionality. 13T5-Small has 12 layers of d=512, T5-Base has 24 layers of d=768, T5-Large has 48 layers of d=1024, where half of the layers are in the encoder and half in the decoder. 
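The three model sizes referred to in the footnote correspond to the public T5 checkpoints on the Hugging Face Hub (roughly 60M, 220M, and 770M parameters). A generic loading sketch is shown below; it is not the paper's training script, only an illustration of the checkpoints being compared.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# T5-Small, T5-Base, and T5-Large checkpoints used for the scaling comparison.
checkpoints = ["t5-small", "t5-base", "t5-large"]
models = {name: T5ForConditionalGeneration.from_pretrained(name) for name in checkpoints}
tokenizers = {name: T5Tokenizer.from_pretrained(name) for name in checkpoints}
```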
14All results are also presented in Table 12 in Appendix A. Seq2Seq models, on the other hand, underperform similarly-sized LWAN models on most comparisons in terms of micro-F1, which indicates that this approach is overall less suitable for the task.15 ## 5.5 Ablations On T5Enc Decoder Here, we analyse the contribution of different aspects of the T5Enc decoder through ablations on the decoder's depth, width and self-attention. Decoder Depth We train T5Enc models with a varying number of decoder layers. We experiments with N ∈ [1, 4, 6, 12]. In Table 8, we report results on two datasets, UKLEX (L1) and EURLEX (L2). We observe that larger depth in the decoder contributes to performance, with the full set of decoder layers (12) performing best. | Layers | UKLEX (L1) | EURLEX (L2) | | | |----------|--------------|---------------|------------|------------| | µ-F1 | m-F1 | µ-F1 | m-F1 | | | N=1 | 84.6 ± 0.1 | 81.9 ± 0.1 | 76.6 ± 0.1 | 56.9 ± 0.1 | | N=4 | 84.7 ± 0.1 | 81.8 ± 0.1 | 76.9 ± 0.1 | 58.1 ± 1.1 | | N=6 | 84.8 ± 0.1 | 82.2 ± 0.1 | 77.0 ± 0.1 | 58.4 ± 1.3 | | N=12 | 84.8 ± 0.2 | 81.9 ± 0.5 | 77.1 ± 0.1 | 58.8 ± 1.4 | Table 8: Development results for different numbers of decoder layers in T5Enc. Decoder Width In this ablation, we are interested to establish the importance of label-wise representations being built in the decoder as opposed to using it to create a single output representation shared across the classification heads. To this end, we feed the decoder with a single token ID, e.g., the ID of token *'label'*, and then pass its output representation (d ∈ IRdim) to a set of standard classification heads to produce L scores (logits), similar to the Encoder+Head method. This method can be seen as an advanced version of the Encoder+Head method that utilizes the decoder via cross-attention. Results for Level 2 datasets are shown in Table 9 under Single-step T5Enc (Level 1 results are shown in Table 11 in the Appendix). In comparison to the Encoder+Head baseline, Single-step T5Enc is superior across the board, likely because of the added number of parameters available to the model. Compared to the standard T5Enc approach, Single-step T5Enc works slightly better for UKLEX but on all other datasets it underperforms by a large gap. We observe the same pattern for L1 results in Table 11 and thus conclude that the additional computational 15See Appendix A for a discussion of macro-F1 results. ![7_image_0.png](7_image_0.png) | Method | UKLEX (L2) | EURLEX (L2) | BIOASQ (L2) | MIMIC (L2) | | | | | |-------------------|--------------|---------------|---------------|--------------|------------|------------|------------|------------| | µ-F1 | m-F1 | µ-F1 | m-F1 | µ-F1 | m-F1 | µ-F1 | m-F1 | | | Encoder+Head | 81.9 ± 0.6 | 72.9 ± 1.3 | 76.2 ± 0.2 | 54.0 ± 1.4 | 73.2 ± 0.1 | 60.8 ± 0.8 | 56.7 ± 0.7 | 22.3 ± 1.2 | | Single-step T5Enc | 82.6 ± 0.1 | 74.4 ± 0.8 | 76.7 ± 0.2 | 55.8 ± 1.4 | 73.5 ± 0.3 | 61.8 ± 1.1 | 58.3 ± 0.5 | 25.8 ± 0.9 | | T5Enc | 82.4 ± 0.4 | 74.2 ± 1.0 | 77.1 ± 0.1 | 58.8 ± 1.4 | 75.1 ± 0.0 | 66.3 ± 0.1 | 60.6 ± 0.1 | 31.1 ± 1.0 | | - No attention | 81.9 ± 0.1 | 73.0 ± 0.5 | 76.8 ± 0.1 | 57.6 ± 0.8 | 74.3 ± 0.1 | 64.3 ± 0.3 | 58.6 ± 0.3 | 27.4 ± 1.6 | | - Full attention | 82.3 ± 0.2 | 74.1 ± 0.8 | 77.1 ± 0.2 | 58.7 ± 0.8 | 75.2 ± 0.0 | 66.1 ± 0.0 | 60.6 ± 0.2 | 31.6 ± 0.7 | power of label-wise processing is important for the good overall performance of T5Enc. 
Attention Scheme The labels in multi-label classification are known to exhibit certain dependencies (Tenenboim et al., 2009; Bogatinovski et al., 2022). We measure the pair-wise dependency between labels in the four datasets included in this study, using Fisher's exact test.16 In Table 10, we report the percentage of label pairs in Level 2 label sets for which a significant association (p < .001) was discovered (see Appendix A for Level 1 results). Based on the observed non-trivial rates of inter-label dependency, we hypothesize that selfattention in the T5 decoder is of key importance to the performance of T5Enc. | Level | UKLEX | EURLEX | BIOASQ | MIMIC | |---------|---------|----------|----------|---------| | L2 | 39.5 | 39.7 | 71.2 | 21.3 | Table 10: Percentage of Level 2 label pairs with significant association according to Fisher's exact test. The decoder in T5 models uses *causal* attention, wherein decoder inputs can only attend to the left context. We measure the contribution of this system component by ablating it, i.e. training T5Enc models with no self-attention. In Table 9, we report results on Level 2 datasets under *No attention* (see Table 11 in Appendix A for Level 1 results). We observe that without self-attention, performance suffers considerably for all datasets, most notably so in terms of macro-F1 on MIMIC (∆ = 3.7). This result indicates that self-attention indeed has a key role, although its contribution does not prove to be proportional to the rate of significant pairwise associations in the data (Table 10)—this may be due to higher-order label dependencies taking precedence over pair-wise ones. Having confirmed the importance of modeling label dependency above, we next consider whether we can achieve even better performance with bidirectional (rather than causal) attention in the T5 decoder. In Table 9 *Full attention*, we see that the contribution of bidirectional attention is negligible. Assuming that the model is able to adjust to the new attention scheme during the fine-tuning process, we take these results to indicate that modeling label dependency in just one direction is sufficient. Indeed, Fisher's exact test measures two-way association, disregarding the direction of the dependency. ## 5.6 Errors In Seq2Seq Models The Seq2Seq approach similarly can model label dependency through self-attention and can even condition the prediction of labels on one another (in an autoregressive fashion), an ability which none of the other approaches included in this study posses. Yet, we find empirically that it underperforms T5Enc. Here, we investigate whether this finding can be explained in terms of the unconstrained output space in Seq2Seq models. Specifically, we analyse the models' predictions for the invention of novel labels. Such errors occur for two out of the four datasets, EURLEX and UKLEX, but with extremely low frequency: the highest observed rate is 0.2% of novel labels generated for UKLEX (L2). Some examples include 'accommodation', 'domestic violence' and 'vulnerable persons'. Labels in UKLEX and EURLEX are phrased in common terms, compared to the rather technical, domain-specific labels in MIMIC and BIOASQ (see Appendix B for examples). Models trained on UKLEX and EURLEX therefore seem to interpret the output space as openended and on occasion generate novel labels. Still the total number of novel labels generated is negligible, so this could not explain the lower performance of this approach compared to T5Enc. 
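The novelty analysis above reduces to a set comparison on the parsed output. The following sketch assumes the Seq2Seq post-processing described in Section 3 (comma-separated descriptors treated as a set); it is our own illustration rather than the paper's evaluation code, and the function name is hypothetical.

```python
from typing import Set, Tuple

def parse_prediction(generated: str, valid_labels: Set[str]) -> Tuple[Set[str], Set[str]]:
    """Split a generated label string into valid predictions and novel labels."""
    candidates = {part.strip() for part in generated.split(",") if part.strip()}
    predicted = candidates & valid_labels   # kept for evaluation
    novel = candidates - valid_labels       # invented descriptors, e.g. 'domestic violence'
    return predicted, novel
```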
The reason may instead lie with the fact that Seq2Seq models have to learn the bounds of the output space during training, whereas T5Enc models have that as a given via the fixed decoder input. ## 6 Conclusions In this work, we compared four approaches to multi-label classification, two based on an encoder only and two based on an encoder-decoder. We experimented with 4 datasets from 2 different domains (legal and biomedical), which support two different label granularities. We found that encoderdecoder methods outperform encoder-only methods, in line with findings in other NLP tasks. We further found that the non-autoregressive use of an encoder-decoder model performs better than using it for conditional generation. We found that decoder depth, width and self-attention are all key contributors to the success of this best approach. In future work, we will consider prompt-based approaches as well, specifically instruction-based fine-tuned models (Wei et al., 2022), currently limited by the excessive computational cost of encoding the full label set as part of the input string. ## Limitations Recent work has shown that models of a certain size (upwards of 3B parameters) exhibit learning properties that cannot be observed in smaller models. Due to practical limitations and environmental concerns, in this study we chose not to train models larger than T5-Large. It is thus not possible to know how emergent properties in larger models may have affected the comparison between the different approaches compared here. We believe that our findings will nevertheless be useful to NLP practitioners who operate on a constrained compute budget and may thus opt for moderately-sized models anyway. We compare encoder-only and encoder-decoder models for multi-label classification. Decoder-only models (Radford et al., 2019) are omitted since at present there are no decoder-only methods for multi-label classification in the literature. While we could have adapted the Seq2Seq approach in our experiments to operate in a decoder-only context, we deem this unsuitable for the datasets we work with, as they contain long documents which will quickly cause problems for standard decoder-only models like GPT-2. Domain-specific pre-trained language models exist for both the legal and biomedical domain, which outperform their generic counterparts when used for classification tasks. These models all have an encoder-only architecture, however, which renders them unsuitable for a comparison of encoderonly and encoder-decoder approaches to multilabel classification. Our experiments consider datasets from the legal and biomedical domains first and foremost because there are publicly available datasets with hierarchical labelling in these domains, unlike others. Moreover, we believe that working in critical application domains is a worthy purpose and covering two such domains with two different datasets in each domain gives us a good view on how the examined methods are expected to work in such domains. ## Ethics Statement The legal and biomedical fields are both highly sensitive and have high impact on human life. In this work, we have ensured that the data we work with is sourced in compliance with the relevant regulations and are fully anonymized where necessary. The application of multi-label classification to this data carries no obvious risk as it can ease the processing and categorization of documents in these domains, without having any direct impact on individuals involved in legal and medical matters. 
## Acknowledgments We thank our colleagues at the CoAStaL NLP Lab and the anonymous reviewers for their feedback. This work was fully funded by the Innovation Fund Denmark (IFD). ## References Jasmin Bogatinovski, Ljupco Todorovski, Sa ˇ soˇ Dzeroski, and Dragi Kocev. 2022. ˇ Comprehensive comparative study of multi-label classification methods. *Expert Systems with Applications*, 203:117215. Ilias Chalkidis, Emmanouil Fergadiotis, Prodromos Malakasiotis, and Ion Androutsopoulos. 2019. Large-scale multi-label text classification on EU legislation. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 6314–6322, Florence, Italy. Association for Computational Linguistics. Ilias Chalkidis, Manos Fergadiotis, and Ion Androutsopoulos. 2021. MultiEURLEX - a multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online. Ilias Chalkidis, Manos Fergadiotis, Sotiris Kotitsas, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. An empirical study on large-scale multi-label text classification including few and zero-shot labels. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7503–7515, Online. Association for Computational Linguistics. Ilias Chalkidis and Anders Søgaard. 2022. Improved multi-label classification under temporal concept drift: Rethinking group-robust algorithms in a labelwise setting. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2441– 2454, Dublin, Ireland. Association for Computational Linguistics. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzman, Edouard Grave, Myle Ott, Luke Zettle- ´ moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440– 8451, Online. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Alistair EW Johnson, David J. Stone, Leo A. Celi, and Tom J. Pollard. 2017. MIMIC-III, a freely accessible critical care database. *Nature*. Frederick Liu, Siamak Shakeri, Hongkun Yu, and Jing Li. 2021. Enct5: Fine-tuning T5 encoder for nonautoregressive tasks. *CoRR*, abs/2110.08426. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. 
*arXiv preprint arXiv:1907.11692*. Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in adam. *CoRR*, abs/1711.05101. James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable Prediction of Medical Codes from Clinical Text. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 1101–1111. Anastasios Nentidis, Georgios Katsimpras, Eirini Vandorou, Anastasia Krithara, Luis Gasco, Martin Krallinger, and Georgios Paliouras. 2021. Overview of bioasq 2021: The ninth bioasq challenge on large-scale biomedical semantic indexing and question answering. In International Conference of the Cross-Language Evaluation Forum for European Languages (CLEF2021). Springer, Springer. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-totext transformer. *Journal of Machine Learning Research*, 21(140):1–67. Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. CoRR, abs/1804.04235. Jake Snell, Kevin Swersky, and Richard S. Zemel. 2017. Prototypical networks for few-shot learning. CoRR, abs/1703.05175. Lena Tenenboim, Lior Rokach, and Bracha Shapira. 2009. Multi-label classification by analyzing labels dependencies. In Proceedings of the 1st international workshop on learning from multi-label data, Bled, Slovenia, pages 117–132. George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, Yannis Almirantis, John Pavlopoulos, Nicolas Baskiotis, Patrick Gallinari, Thierry Artieres, Axel Ngonga, Norman Heino, Eric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos, and Georgios Paliouras. 2015. An overview of the bioasq large-scale biomedical semantic indexing and question answering competition. *BMC Bioinformatics*, 16:138. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 6000–6010, Long Beach, California, USA. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. 2022. Finetuned language models are zero-shot learners. In *International Conference on Learning Representations*. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics. 
## B Dataset Descriptors A Additional Results Comparing Seq2Seq models to similarly-sized LWAN models, on the other hand, we see quite a different trend here compared to the micro-F1 results discussed in Section 5.3: Seq2Seq-Small outperforms LWAN-Base models on 2 out of 4 datasets (BIOASQ and MIMIC), and Seq2Seq-Base models outperform LWAN-Base models on all 4 datasets. This suggests that the Seq2Seq approach is especially suitable for the prediction of rare labels, which are better represented by the macro-F1 metric and particularly abundant in the BIOASQ and MIMIC datasets. We presume that as the only approach with access to the actual tokens comprising Level 2 label descriptors, Seq2Seq gains from lexical overlap between label descriptors and prior knowledge of the semantics of these tokens. In Table 13, we show Fisher's exact test results for pair-wise association among labels in Level 1 label sets across all datasets. We see higher rates of pair-wise association, likely because of the smaller number of labels in each set. In Tables 14, 15, 16, 17, we list the original Level 1 and Level 2 label descriptors for the UKLEX, EURLEX, BIOASQ and MIMIC datasets, respectively, as well as the simplified Level 1 label descriptors, which we manually curated. In Table 11, we present the development results for all models trained from T5-Base across all datasets. In Table 12, we present detailed results for all L2 datasets for methods using T5-Small and Large. In Figure 3, which visualizes the macro-F1 results, we see that in comparisons between similarly-sized T5Enc models and LWAN models, the same trends hold here as observed in Section 5.3 for the microF1 metric: T5Enc is superior to LWAN as a method for multi-label classification. | Method | UKLEX (L1) | EURLEX (L1) | BIOASQ (L1) | MIMIC (L1) | | | | | |-------------------|--------------|---------------|---------------|--------------|------------|------------|------------|------------| | µ-F1 | m-F1 | µ-F1 | m-F1 | µ-F1 | m-F1 | µ-F1 | m-F1 | | | Encoder+Head | 84.3 ± 0.3 | 80.7 ± 0.8 | 82.9 ± 0.2 | 72.5 ± 0.8 | 86.6 ± 0.0 | 77.1 ± 0.2 | 72.4 ± 0.1 | 65.8 ± 0.9 | | LWAN | 84.5 ± 0.4 | 81.0 ± 1.1 | 83.0 ± 0.2 | 72.2 ± 0.3 | 86.6 ± 0.0 | 77.1 ± 0.3 | 72.5 ± 0.3 | 66.3 ± 1.2 | | Seq2Seq | 84.2 ± 0.0 | 81.6 ± 0.2 | 82.8 ± 0.1 | 74.3 ± 0.5 | 86.5 ± 0.0 | 77.6 ± 0.2 | 73.2 ± 0.1 | 70.3 ± 0.2 | | Single-Step T5Enc | 85.1 ± 0.2 | 82.4 ± 0.4 | 83.3 ± 0.2 | 73.8 ± 0.8 | 86.7 ± 0.1 | 77.1 ± 0.4 | 73.1 ± 0.1 | 67.4 ± 1.1 | | T5Enc | 84.8 ± 0.2 | 81.9 ± 0.5 | 83.6 ± 0.1 | 75.0 ± 0.6 | 87.0 ± 0.0 | 78.1 ± 0.3 | 73.6 ± 0.2 | 69.2 ± 1.5 | | - No attention | 85.0 ± 0.2 | 82.5 ± 0.1 | 83.5 ± 0.1 | 74.8 ± 0.5 | 87.0 ± 0.1 | 78.3 ± 0.2 | 73.6 ± 0.1 | 69.5 ± 0.4 | | - Full Attention | 84.7 ± 0.3 | 82.1 ± 0.6 | 83.6 ± 0.1 | 75.0 ± 0.3 | 87.0 ± 0.1 | 78.0 ± 0.3 | 73.3 ± 0.1 | 68.7 ± 1.2 | | Method | UKLEX (L2) | EURLEX (L2) | BIOASQ (L2) | MIMIC (L2) | | | | | | µ-F1 | m-F1 | µ-F1 | m-F1 | µ-F1 | m-F1 | µ-F1 | m-F1 | | | Encoder+Head | 81.9 ± 0.6 | 72.9 ± 1.3 | 76.2 ± 0.2 | 54.0 ± 1.4 | 73.2 ± 0.1 | 60.8 ± 0.8 | 56.7 ± 0.7 | 22.3 ± 1.2 | | LWAN | 82.0 ± 0.3 | 72.2 ± 0.6 | 76.3 ± 0.3 | 55.5 ± 0.8 | 73.2 ± 0.1 | 60.5 ± 0.8 | 57.2 ± 0.3 | 24.5 ± 0.4 | | Seq2Seq | 81.2 ± 0.3 | 72.7 ± 1.1 | 75.7 ± 0.1 | 57.2 ± 1.1 | 74.1 ± 0.1 | 64.3 ± 0.2 | 57.5 ± 0.3 | 30.7 ± 1.7 | | Single-Step T5Enc | 82.6 ± 0.1 | 74.4 ± 0.8 | 76.7 ± 0.2 | 55.8 ± 1.4 | 73.5 ± 0.3 | 61.8 ± 1.1 | 58.3 ± 0.5 | 25.8 ± 0.9 | | T5Enc | 82.4 ± 0.4 | 74.2 ± 1.0 | 77.1 ± 0.1 | 58.8 ± 1.4 | 75.1 ± 0.0 | 66.3 ± 
0.1 | 60.6 ± 0.1 | 31.1 ± 1.0 | | - No attention | 81.9 ± 0.1 | 73.0 ± 0.5 | 76.8 ± 0.1 | 57.6 ± 0.8 | 74.3 ± 0.1 | 64.3 ± 0.3 | 58.6 ± 0.3 | 27.4 ± 1.6 | | - Full attention | 82.3 ± 0.2 | 74.1 ± 0.8 | 77.1 ± 0.2 | 58.7 ± 0.8 | 75.2 ± 0.0 | 66.1 ± 0.0 | 60.6 ± 0.2 | 31.6 ± 0.7 | Table 11: Development Results for all methods across datasets with T5 (base). | Method | UKLEX (L2) | EURLEX (L2) | BIOASQ (L2) | MIMIC (L2) | | | | | |-------------------|--------------|---------------|---------------|--------------|------------|------------|------------|------------| | µ-F1 | m-F1 | µ-F1 | m-F1 | µ-F1 | m-F1 | µ-F1 | m-F1 | | | T5 (Small) models | | | | | | | | | | LWAN | 75.1 ± 0.3 | 63.5 ± 0.3 | 69.4 ± 0.2 | 45.0 ± 0.6 | 71.4 ± 0.1 | 56.0 ± 0.2 | 54.4 ± 0.2 | 18.7 ± 0.6 | | Seq2Seq | 74.0 ± 0.4 | 64.7 ± 0.5 | 68.9 ± 0.6 | 48.7 ± 1.9 | 72.2 ± 0.1 | 60.7 ± 0.2 | 57.8 ± 0.3 | 27.1 ± 0.3 | | T5Enc | 75.8 ± 0.2 | 65.8 ± 0.4 | 71.4 ± 0.4 | 50.6 ± 1.6 | 73.7 ± 0.1 | 62.4 ± 0.6 | 58.8 ± 0.3 | 25.2 ± 0.4 | | T5 (Large) models | | | | | | | | | | LWAN | 77.1 ± 0.1 | 65.4 ± 0.8 | 70.9 ± 0.1 | 49.4 ± 1.8 | 74.0 ± 0.2 | 61.4 ± 0.9 | 58.3 ± 0.9 | 24.0 ± 3.0 | | Seq2Seq | 76.5 ± 0.3 | 67.1 ± 0.3 | 71.3 ± 0.1 | 54.1 ± 0.6 | 74.7 ± 0.1 | 65.5 ± 0.5 | 60.4 ± 0.1 | 34.5 ± 0.7 | | T5Enc | 77.7 ± 0.2 | 68.1 ± 0.7 | 72.4 ± 0.2 | 53.6 ± 1.2 | 75.8 ± 0.1 | 67.1 ± 0.2 | 60.8 ± 0.2 | 33.2 ± 1.6 | Table 12: Test Results for all methods across datasets with T5 (small) and (large). ![11_image_0.png](11_image_0.png) ![11_image_1.png](11_image_1.png) | Level | UKLEX | EURLEX | BIOASQ | MIMIC | |---------|---------|----------|----------|---------| | L1 | 72.8 | 82.3 | 93.8 | 85.6 | Table 13: Percentage of Level 1 label pairs with significant association according to Fisher's exact test. | Level 1 (Original) | | | | | |------------------------------------------------------------------------|--------------------------|--------------------|--------------|-----------------------------| | Agriculture and Food | Children | Criminal Law | Education | Environment | | EU | Finance | Healthcare | Housing | Immigration and Citizenship | | Local Government | Planning and Development | Politics | Public Order | Social Security | | Taxation | Telecommunications | Transportation | - | - | | Level 1 (Simplified) | | | | | | Agriculture | Children | Crime | Education | Environment | | EU | Finance | Healthcare | Housing | Immigration | | Local | Planning | Politics | Public | Social | | Taxation | Telecom | Transport | - | - | | Level 2 (Original) | | | | | | Agriculture | Air Transport | Animals | Banking | Broadcasting | | Children | Citizenship | Disabled Persons | Education | Elections | | Employment | Environment | EU | Finance | Fire and Rescue Services | | Food | Healthcare | Housing | Immigration | Insurance | | Land Registration | Local Government | NHS | Police | Pollution | | Social Security | Taxation | Telecommunications | Terrorism | Urban Development | | Table 14: Sample of label descriptors (Law Subject) for UKLEX dataset. 
| | | | | | Level 1 (Original) | | | | | |--------------------------|--------------------|-----------------------------|----------------------------|-------------------------| | Politics | European Union | International Relations | Law | Economics | | Trade | Finance | Social Questions | Education & Communications | Science | | Business & Competition | Environment | Transport | Working Conditions | Agriculture | | Forestry & Fisheries | Agri-Foodstuffs | Production | Technology & Research | Energy | | Industry | Geography | International Organisations | - | - | | Level 1 (Simplified) | | | | | | Politics | International | EU | Law | Economy | | Trade | Finance | Social | Education | Science | | Business | Environment | Transport | Employment | Agriculture | | Forestry | Food | Production | Technology | Energy | | Industry | Geography | Organisations | - | - | | Level 2 (Original) | | | | | | Political Framework | Political Party | Agricultural Activity | Engineering | European Organisations | | Politics & Public Safety | Forestry | International Affairs | Cooperation Policy | International Security | | Defence | Energy Policy | European Construction | EU Finance | Agricultural Production | | Justice | International Law | Rights and Freedoms | Economic Policy | Regional Policy | | Economic Structure | Trade Policy | Tariff Policy | International Trade | Marketing | | Distributive Trades | Monetary Relations | Monetary Economics | Farming Systems | Food Technology | Table 15: Sample of label descriptors (EUROVOC concepts) for EURLEX dataset. | Level 1 (Original) | | | | | |---------------------------------------------------------------------------|--------------------------|--------------------------------------|---------------------------------------------------|------------------------------| | Anatomy | Organisms | Diseases | Chemicals and Drugs | Analytical, Diagnostic and Therapeutic Techniques and Equipment | | Psychiatry and Psychology | Phenomena and Processes | Humanities | Disciplines and Occupations | Anthropology, Education, Sociology, and Social Phenomena | | Information Science | Named Groups | Health Care | Technology, Industry, and Agriculture | Publication Characteristics | | Geographicals | - | - | - | - | | Level 1 (Simplified) | | | | | | Anatomy | Organism | Disease | Drug | Technical | | Psychology | Process | Occupation | Human | Social | | Information | Groups | Healthcare | Technology | Publications | | Geography | - | - | - | - | | Level 2 (Original) | | | | | | Musculoskeletal System | Digestive System | Respiratory System Urogenital System | Endocrine System | | | Cardiovascular System | Nervous System | Sense Organs | Embryonic Structures | Cells, Fluids and Secretions | | Stomatognathic System | Hemic and Immune Systems | Tissues | Integumentary System | Plant Structures | | Fungal Structures | Bacterial Structures | Viral Structures | Biomedical and Dental Materials | Microbiological Phenomena | | Equipment and Supplies | Psychological Phenomena | Dentistry | Mental Disorders Behavior and Behavior Mechanisms | | | Table 16: Sample of label descriptors (MeSH concepts) for BIOASQ dataset. 
| | | | | | Level 1 (Original) | | | | | | | |------------------------------------------------------------------------|-----------------------------------------|---------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|----------------------------------------------|-----------------------------|----| | Infection and Parasitic Diseases | Diseases of The Genitourinary Endocrine | Nutritional | and Diseases of Blood and Blood Forming Organs Mental | Disor | | | | ders | | | | | | | | Metabolic Diseases and Immunity Disorders | | | | | | | | System | | | | | | | | Diseases of Nervous System and Sense Organs Diseases of The Circulatory System | Diseases of The Respiratory System | Diseases of The Digestive System | Neoplasms | | | | | Complications | of | Pregnancy, | | | | | | Childbirth and the Puerperium | Diseases of The Skin and Subcutaneous Tissue | Diseases of The Musculoskeletal System and Connective Tissue | Certain Conditions Originating In The Perinatal Period Congenital Anomalies | | | | | Symptoms, Signs and Ill-Defined Injury and Poisoning Supplementary Factors Influencing Health Conditions Status and Contact With Health Services Supplementary Classification of - | - | | | | | | | External Causes of Injury and Poisoning Level 1 (Simplified) | | | | | | | | Infections | Cancer | Metabolic | Blood | Mental | | | | Nervous | Circular | Respiratory | Digestive | Urinar | | | | Pregnancy | Skin | Muscle | Birth | Newborn | | | | Symptoms | Injury | External | - | - | | | | Level 2 (Original) | | | | | | | | Osteopathies, | Chondropathies, Bulbus Cordis Anomalies and Hereditary and Degenerative Diseases of The Central Nervous | Poliomyelitis and Other NonArthropod-Borne Viral Diseases | Tuberculosis | | | | | and Acquired Musculoskeletal | Anomalies of Cardiac Septal Closure | | | | | | | Deformities | System | of Central Nervous System | | | | | | Viral Diseases Accompanied By Arthropod-Borne Viral Diseases | Rickettsioses | and | Other | | | | | Arthropod-Borne Diseases | Syphilis and Other Venereal Diseases | Mycoses | | | | | | Exanthem Hereditary Hemolytic Anemias | Acquired Hemolytic Anemias | Aplastic Anemia and Other Bone Marrow Failure Syndromes Other and Unspecified Anemias | Coagulation Defects | | | | | Personality Disorders, and Other Nonpsychotic Mental Disorders | Congenital Anomalies of Eye | Inflammatory Diseases of The Central Nervous System | Human Immunodeficiency Virus | Neurotic Disorders | | | | Disorders of The Peripheral Nervous System | Disorders of The Eye and Adnexa | Diseases of The Ear and Mastoid Chronic Rheumatic Heart Disease | Acute | | | | | Process | Rheumatic Fever | | | | | | | Ischemic Heart Disease | Diseases of Pulmonary Circulation | Acute Respiratory Infections | Chronic Obstructive Pulmonary Disease and Allied Conditions Pneumonia and Influenza | | | | | Intestinal Infectious Diseases | Anencephalus | and | Similar | Other Congenital Anomalies of Nervous System | Zoonotic Bacterial Diseases | Intellectual Disabilities | | Anomalies | | | | | | | | Table 17: Sample of label descriptors (ICD-9 codes) for MIMIC dataset. | | | | | | | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section has no number (Limitations) ✓ A2. Did you discuss any potential risks of your work? Section has no number (Ethics statement) ✓ A3. 
Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 5 ✓ B1. Did you cite the creators of artifacts you used? 4 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? To the best of our knowledge, the data is free to use for research purposes ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Ethics statement ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Three out of 4 datasets are open access public documents that don't concern individuals. The 4th is explicitly anonymized and we trust the anonymization applied by the creatorrs ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not relevant ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C ✓ **Did You Run Computational Experiments?** 5 ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Our experiments are rather small-scale The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Left blank. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? 
For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
dai-etal-2023-domain
Domain Incremental Lifelong Learning in an Open World
https://aclanthology.org/2023.findings-acl.361
Lifelong learning (LL) is an important ability for NLP models to learn new tasks continuously. Architecture-based approaches are reported to be effective implementations for LL models. However, it is non-trivial to extend previous approaches to domain incremental LL scenarios since they either require access to task identities in the testing phase or cannot handle samples from unseen tasks. In this paper, we propose Diana: a dynamic architecture-based lifelong learning model that tries to learn a sequence of tasks with a prompt-enhanced language model. Four types of hierarchically organized prompts are used in Diana to capture knowledge from different granularities. Specifically, we dedicate task-level prompts to capture task-specific knowledge to retain high LL performances and maintain instance-level prompts to learn knowledge shared across input samples to improve the model{'}s generalization performance. Moreover, we dedicate separate prompts to explicitly model unseen tasks and introduce a set of prompt key vectors to facilitate knowledge sharing between tasks. Extensive experiments demonstrate that Diana outperforms state-of-the-art LL models, especially in handling unseen tasks.
# Domain Incremental Lifelong Learning In An Open World Yi Dai1∗† , Hao Lang2† , Yinhe Zheng2‡ , Bowen Yu2, Fei Huang2**, Yongbin Li**2‡ 1 Department of Computer Science and Technology, Tsinghua University 2 Alibaba Group {hao.lang, yubowen.ybw, f.huang, shuide.lyb}@alibaba-inc.com, [email protected], [email protected] ## Abstract Lifelong learning (LL) is an important ability for NLP models to learn new tasks continuously. Architecture-based approaches are reported to be effective implementations for LL models. However, it is non-trivial to extend previous approaches to domain incremental LL scenarios since they either require access to task identities in the testing phase or cannot handle samples from unseen tasks. In this paper, we propose **Diana**: a dynamic architecture-based lifelong learning model that tries to learn a sequence of tasks with a promptenhanced language model. Four types of hierarchically organized prompts are used in Diana to capture knowledge from different granularities. Specifically, we dedicate tasklevel prompts to capture task-specific knowledge to retain high LL performances and maintain instance-level prompts to learn knowledge shared across input samples to improve the model's generalization performance. Moreover, we dedicate separate prompts to explicitly model unseen tasks and introduce a set of prompt key vectors to facilitate knowledge sharing between tasks. Extensive experiments demonstrate that Diana outperforms state-ofthe-art LL models, especially in handling unseen tasks. We release the code and data at https://github.com/AlibabaResearch/ DAMO-ConvAI/tree/main/diana. ## 1 Introduction An essential ability of humans is to learn new tasks continuously in their lifetime since our surrounding world is ever involving (Thrun and Mitchell, 1995). Humans need to learn inputs from unseen new tasks everyday. However, neural network based NLP models tend to rapidly lose previously acquired knowledge when trained on new tasks. This phenomenon is referred to as catastrophic forgetting ![0_image_0.png](0_image_0.png) (French, 1999), and it's important to equip NLP models with the lifelong learning (LL) ability to alleviate this issue in advanced AI applications. An effective method to build LL models is the architecture-based approach (Chen et al., 2016; Rusu et al., 2016; Fernando et al., 2017; Wiwatcharakoses and Berrar, 2020), in which task-specific components are used to isolate knowledge for each separate task (Mancini et al., 2018). Recently, to leverage the power of pre-trained language model (PLM), some architecture-based LL models convert NLP tasks into a unified language modeling (LM) format (Sanh et al., 2021; Xie et al., 2022) and learn these tasks using a PLM. Separate prompts (Qin and Joty, 2022) or adapters (Madotto et al., 2021b) are allocated for different tasks to avoid the catastrophic forgetting issue. However, despite the reported effectiveness, most above models are designed for the task incremental learning scenario, in which we assume task IDs for testing samples are available (Wang et al., 2022a,b). This setting limits the application of LL models because practical applications usually follow a more general domain incremental learning scenario (van de Ven et al., 2022), i.e., we cannot access the task IDs of most input samples. There are generally two approaches to building LL models for domain incremental learning. 
One is to predict the task ID of each testing sample (Wortsman et al., 2020), and activate specified components based on the prediction (Figure 2a). This scheme achieves high LL performances if the predicted ID is correct (Madotto et al., 2021a). However, these models cannot handle samples from unseen tasks since there are no components designated for these samples and thus no task IDs to be predicted. This hinders the application of LL models because we often encounter samples from unseen tasks in practical situations (Dietterich, 2017). Another approach to building domain incremental LL models is to organize model components at the instance-level, i.e., a pool of fine-grained components are dynamically combined in the forward pass for each input instance (Figure 2b). This approach avoids the trouble of explicitly determining task IDs. However, it usually yields low LL performance because there are no dedicated components for each task to capture task-specific knowledge (Wang et al., 2022a). In this study, we combine the advantages of the above two approaches and propose **Diana**: a dynamic architecture-based lifelong learning model. We convert different NLP tasks into a unified LM format and propose to learn these tasks using a prompt-enhanced PLM (Figure 1). Specifically, Diana maintains four types of prompts to capture task knowledge from different granularities: 1. A *general prompt* Pg is used for all tasks; 2. The *format prompt*s Pf are shared between tasks in a similar format; 3. A *task prompt* Ptis assigned for each incoming task; 4. A pool of meta prompts Pm are dynamically combined for each input instance. These four types of prompts present a hierarchical structure with a decreasing knowledge granularity, i.e., Pg captures global knowledge between all tasks, while Pm captures local knowledge that is shared between instances. Diana can better generalize to unseen tasks while achieving high LL performances since its components are organized at both task and instance level. Moreover, we also maintain key vectors for Pt and Pm to better share task knowledge, and allocate separate task prompts to explicitly model samples for unseen tasks. Extensive experiments on benchmark NLP tasks indicate that Diana outperforms state-of-the-art (SOTA) baselines, especially in handling unseen tasks. Our main contributions are: 1. We propose Diana: a novel architecture-based domain incremental LL model that uses hierarchically organized prompts to capture knowledge in different granularities. 2. We are the first to consider unseen tasks in the testing phase of LL models. Specific prompts are designated in Diana to handle unseen tasks, and prompt keys are built to facilitate sharing of task knowledge. 3. Extensive experiments show that Diana outperformed SOTA baselines. ## 2 Related Work Lifelong Learning aims at incrementally acquiring new knowledge without catastrophically forgetting previously learned ones. Generally, three categories of LL methods are proposed: 1. Rehearsal-based methods (Rebuffi et al., 2017; Shin et al., 2017; Sun et al., 2019a; Chaudhry et al., 2019a; Buzzega et al., 2020) preserve past knowledge by replaying data from learned tasks; 2. Regularization-based methods (Kirkpatrick et al., 2017; Zenke et al., 2017; Li and Hoiem, 2017; Ritter et al., 2018; Farajtabar et al., 2020) consolidate model parameters that are important to previous tasks by introducing additional regularization terms; 3. 
Architecture-based methods (Chen et al., 2016; Rusu et al., 2016; Fernando et al., 2017; Maltoni and Lomonaco, 2019) add task-specific parameters to an existing base model for each task to prevent forgetting.

Experiment settings of LL methods can generally be classified into three scenarios based on whether the task ID is provided for testing samples and whether it must be inferred (van de Ven and Tolias, 2019), i.e., task-incremental learning (Mallya and Lazebnik, 2018; Ebrahimi et al., 2020), domain-incremental learning (Pu et al., 2021; Gao et al., 2022), and class-incremental learning (Zhang et al., 2020). In this work, we focus on the domain-incremental learning setting, where the task ID is not provided for each testing sample. One line of methods in this category attempts to detect the task ID for each input sample (Madotto et al., 2021a). However, these methods fail to generalize to unseen tasks (Wang et al., 2022a). Another line of methods tries to build a dynamic architecture for each input sample, for example, maintaining a pool of prompts that can be dynamically combined (Wang et al., 2022b). However, these methods yield sub-optimal performance since no task-specific parameters are used. Our model Diana is the first attempt to take advantage of the two aforementioned types of methods.

![2_image_0.png](2_image_0.png)

Pre-trained LM is becoming the de facto standard component for NLP models. To encourage knowledge sharing, existing approaches attempt to cast all NLP tasks into a unified text-to-text format (McCann et al., 2019) and learn these tasks by finetuning a PLM. The work most similar to ours is ProQA (Zhong et al., 2022a), in which different QA tasks are unified and a set of structured prompts is used. However, ProQA only considers two QA tasks and is limited to the task incremental learning scenario, while our model is designed to tackle more general NLP tasks in the more general domain incremental learning scenario.

## 3 Method

## 3.1 Task Formulation

In this study, we aim to sequentially learn N tasks T1, · · · , TN that are presented in L different formats F1, · · · , FL (L ≤ N). Each task Ti is presented in a specific format Fj (such as "Classification" or "Summarization"), and each training sample of Ti is a tuple of a context C, a question Q, and an answer A: (*C, Q, A*). Note that the format of each task can be easily inferred from the context-question pair (*C, Q*). Our model gθ is built to predict A based on C and Q. We also consider a more challenging open domain lifelong learning setting, i.e., the model needs to predict answers for unseen tasks. Therefore, we collect another N′ unseen tasks TN+1, · · · , TN+N′ that are only used for testing. We assume that all task identities of inputs are not available in the testing phase.
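To make the task formulation concrete, the sketch below shows how samples from heterogeneous formats could be cast into the unified (*C, Q, A*) text-to-text form described above. The example records and the `serialize` helper are illustrative assumptions on our part; the paper's actual preprocessing follows the scheme released by Khashabi et al. (2020).

```python
# A minimal sketch of the unified (C, Q, A) format for three task formats.
# Field names and example records are illustrative, not the released pipeline.
from dataclasses import dataclass

@dataclass
class Example:
    context: str   # C
    question: str  # Q
    answer: str    # A
    fmt: str       # task format F_j, inferable from (C, Q)

examples = [
    Example(  # Text Classification (e.g., SST)
        context="The movie was a delight from start to finish.",
        question="Is this review positive or negative?",
        answer="positive",
        fmt="Classification"),
    Example(  # Span Extraction (e.g., SQuAD)
        context="Diana is trained with a prompt-enhanced T5 model.",
        question="Which backbone model is used?",
        answer="T5",
        fmt="Span Extraction"),
    Example(  # Sequence Generation (e.g., CNN/DM)
        context="<full news article ...>",
        question="What is the summary?",
        answer="<reference summary ...>",
        fmt="Summarization"),
]

def serialize(ex: Example) -> tuple[str, str]:
    """Render one sample as an (input text, target text) pair for a seq2seq LM."""
    return f"question: {ex.question} context: {ex.context}", ex.answer

for ex in examples:
    src, tgt = serialize(ex)
    print(ex.fmt, "|", src[:60], "->", tgt)
```

Because every task is reduced to generating A from a serialized (C, Q), a single encoder-decoder model can be shared across all tasks and formats.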
Then the encoder takes in the concatenation of P(*C, Q*), C, and Q and the decoder predicts A, i.e., A = gθ([P(*C, Q*); C; Q]), in which "[; ]" denotes the sequence concatenation operation. Four types of prompts are contained in P(*C, Q*), i.e., P(*C, Q*) = [Pg; Pf (Fj ); Pt(Ti); Pm(*C, Q*)] (Figure 2c). Specifically, Pg is a general prompt, Pf (Fj ) is a format prompt (where Fj is the format of task Ti), Pt(Ti) is a task prompt and Pm(*C, Q*) is a combined meta prompt. These four types of prompts are organized hierarchically so that they are shared by samples in different granularities: 1. General Prompt Pg is shared for all training tasks so that it encodes global task knowledge. 2. Format Prompt Pf (Fj ) is shared between tasks in the same format Fj so that it captures format-related knowledge, i.e., knowledge that is shared between tasks in the format Fj . 3. Task Prompt Pt(Ti) is specifically allocated for the task Ti and it is only shared for samples from Ti. We use Pt(Ti) to learn task-specific knowledge. Moreover, to explicitly model samples from unseen tasks, we enlarge the set of task prompts with L extra prompts Pˆt(F1), *· · ·* , Pˆt(FL), in which each prompt Pˆt(Fj ) models the unseen task for a particular format Fj . 4. Meta Prompt Pm(*C, Q*) is a dynamic combination of various instance-level prompts. Specifically, we maintain M instance-level meta prompts {P im}M i=1 and dynamically combine these prompts based on the (*C, Q*) to obtain Pm(*C, Q*). Pm(*C, Q*) captures the knowledge shared between similar training instances. We expect these four types of prompts can capture knowledge from different granularities since they are shared in different scopes. Moreover, to facilitate knowledge sharing, we allocate a key vector kt(Ti) and k jm to each task prompt Pt(Ti) and meta prompt P jm, respectively, and build a fixed text enଵ ଷ ெ ( ) ··· ௧ (ଶ) ℒெ (c) ଵ ··· ଶ ଷ ெ ସ coder h to map a context-question pair (*C, Q*) to a query vector q = h(*C, Q*). A two-stage learning process is introduced in Diana to learn these keys and P(*C, Q*). Specifically, the first stage focuses on learning a representation space for prompt keys so that we can determine proper prompts to construct P(*C, Q*). The second stage optimizes the constructed prompt P(*C, Q*) and the backbone language model. These two stages are detailed in the following sections. ## 3.3 Key Vector Space Learning We first optimize key vectors assigned to each task prompt and meta prompt to construct the prompt P(*C, Q*) for each input (*C, Q*). Note that these key vectors are only used to determine the task prompt and meta prompt in P(*C, Q*) because the general prompt Pg is shared by all tasks in Diana, and the format prompt Pf (Fj ) can be determined based on the format of C and Q directly. Task Prompt Keys help to determine the task prompt in P(*C, Q*). Specifically, for a given input (*C, Q*), we first calculate its query vector q and then determine the most similar task prompt key kt(Ti) to q. The task prompt Pt(Ti) associated with kt(Ti) is used to construct P(*C, Q*). Ideally, the key vector kt(Ti) for a task prompt Pt(Ti) should be located near samples from task Ti and distant to samples from other tasks Tj (j ̸= i). 
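The forward pass described above can be sketched as follows. This is a minimal illustration, assuming soft prompts are plain trainable embedding matrices prepended to the encoder input of a generic encoder-decoder PLM (T5-base in the paper); the class and method names are ours, not the released implementation.

```python
# A minimal sketch of hierarchical prompt assembly. Names and shapes are ours.
import torch
import torch.nn as nn

class HierarchicalPrompts(nn.Module):
    def __init__(self, d_model=768, n_formats=3, n_tasks=8, n_meta=30,
                 len_g=20, len_f=40, len_t=40, len_m=20):
        super().__init__()
        self.general = nn.Parameter(torch.randn(len_g, d_model) * 0.02)           # P_g
        self.format = nn.Parameter(torch.randn(n_formats, len_f, d_model) * 0.02)  # P_f(F_j)
        # one task prompt per seen task, plus one extra "unseen-task" prompt per format
        self.task = nn.Parameter(torch.randn(n_tasks + n_formats, len_t, d_model) * 0.02)
        # pool of M instance-level meta prompts P_m^i, each of length len_m
        self.meta = nn.Parameter(torch.randn(n_meta, len_m, d_model) * 0.02)

    def assemble(self, fmt_id: int, task_id: int, meta_ids: list[int]) -> torch.Tensor:
        """P(C,Q) = [P_g; P_f(F_j); P_t(T_i); P_m(C,Q)], where P_m(C,Q) is the
        concatenation of the M' selected meta prompts."""
        parts = [self.general,
                 self.format[fmt_id],
                 self.task[task_id],
                 self.meta[meta_ids].reshape(-1, self.meta.shape[-1])]
        return torch.cat(parts, dim=0)

prompts = HierarchicalPrompts()
p = prompts.assemble(fmt_id=0, task_id=2, meta_ids=[1, 4, 7, 9, 12])  # M' = 5
# The encoder then consumes [p; embed(C); embed(Q)] and the decoder generates A.
print(p.shape)  # (20 + 40 + 40 + 5*20, 768)
```

The prompt lengths 20/40/40/20 and the pool sizes M = 30, M′ = 5 mirror the settings reported in Section 4.3.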
## 3.3 Key Vector Space Learning

We first optimize the key vectors assigned to each task prompt and meta prompt to construct the prompt P(*C, Q*) for each input (*C, Q*). Note that these key vectors are only used to determine the task prompt and meta prompt in P(*C, Q*), because the general prompt Pg is shared by all tasks in Diana, and the format prompt Pf (Fj) can be determined directly from the format of C and Q.

Task Prompt Keys help to determine the task prompt in P(*C, Q*). Specifically, for a given input (*C, Q*), we first calculate its query vector q and then determine the task prompt key kt(Ti) that is most similar to q. The task prompt Pt(Ti) associated with kt(Ti) is used to construct P(*C, Q*). Ideally, the key vector kt(Ti) for a task prompt Pt(Ti) should be located near samples from task Ti and distant from samples of other tasks Tj (j ̸= i). Therefore, when learning each task Ti, we maintain a small memory buffer M for samples from previously learned tasks Tj (j < i), and design the following exponential angular triplet loss (Ye et al., 2021) to enforce the above property:

$$\mathcal{L}_{t}=\exp\big(||h(C,Q),\mathbf{k}_{t}(T_{i})||+\max(1-||h(C_{n},Q_{n}),\mathbf{k}_{t}(T_{i})||,0)\big),\tag{1}$$

in which the operator ||·, ·|| returns the distance between two input vectors (here we use the cosine distance), and (Cn, Qn) is a negative sample extracted from the memory buffer M:

$$(C_{n},Q_{n})=\operatorname*{argmin}_{(C^{\prime},Q^{\prime})\in\mathcal{M}}||h(C^{\prime},Q^{\prime}),\mathbf{k}_{t}(T_{i})||.\tag{2}$$

Meta Prompt Keys help to combine the instance-level meta prompts $\{P_m^i\}_{i=1}^M$ to produce Pm(*C, Q*). Specifically, for each input (*C, Q*), we select the M′ meta prompt keys that are closest to its query vector q = h(*C, Q*). Then Pm(*C, Q*) is obtained by concatenating these M′ meta prompts. Intuitively, the knowledge associated with (*C, Q, A*) is distributed over these M′ meta prompts.

![3_image_0.png](3_image_0.png)

When learning meta prompt keys, we expect the distribution of these keys to balance two properties: *diversity* and *locality* (Figure 3). Specifically, the diversity property aims to distribute these keys over the whole vector space so that every meta prompt can be involved in the training process. The locality property aims to cluster similar meta prompt keys so that the knowledge of each sample can be better shared. For each input C and Q, we propose the following loss to enforce these two properties:

$$\mathcal{L}_{m}=\sum_{i\in\mathcal{S}(C,Q)}\max(0,||\mathbf{k}_{m}^{i},h(C,Q)||-\eta)+\sum_{i,j\in\mathcal{S}(C,Q)}\max(0,\gamma-||\mathbf{k}_{m}^{i},\mathbf{k}_{m}^{j}||)/{M^{\prime}}^{2},\tag{3}$$

where S(*C, Q*) is the index set of the M′ meta prompt keys that are closest to h(*C, Q*), and η and γ are scalar hyper-parameters for the distance margin. Specifically, the first term in Eq. 3 enforces the locality property by pulling these M′ meta prompt keys toward the query vector. The second term enforces the diversity property by pushing these meta prompt keys away from each other to occupy the whole vector space.

Note that Eq. 3 only involves a single query h(*C, Q*) from the current task. This may limit the learned meta prompt keys since samples from previously learned tasks are not considered. In this study, we extend Eq. 3 to better shape the distribution of meta prompt keys with the help of the memory buffer M, in which samples from previously learned tasks are contained. Specifically, when learning task Ti, we first calculate query vectors for samples in M and then group these query vectors into B clusters (we set B = 5 × i in our experiments, where i is the number of received tasks). Centroids of these B clusters are denoted as c1, · · · , cB. For each sample (*C, Q*) from M, the subsequent loss is optimized:

$$\mathcal{L}^{\prime}_{m}=\sum_{i\in\mathcal{S}(C,Q)}\max(0,||\mathbf{k}_{m}^{i},\mathbf{c}_{k}||-\eta),\tag{4}$$

where ck is the centroid to which (*C, Q*) belongs. The above loss enforces global diversity by scattering meta prompt keys to each centroid.
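A compact sketch of the two key-space losses above (Eqs. 1-4) is given below, assuming cosine distance over plain PyTorch tensors; the variable names are ours and the centroid-based loss of Eq. 4 is only indicated in a comment.

```python
# A minimal sketch of the key-space losses (Eqs. 1-3), assuming cosine distance.
# Tensor and variable names are illustrative.
import torch
import torch.nn.functional as F

def cos_dist(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return 1.0 - F.cosine_similarity(a, b, dim=-1)

def task_key_loss(q, task_key, neg_queries):
    """Eq. 1-2: pull the current task's key toward the query q and push it
    away from the hardest negative query drawn from the memory buffer."""
    neg_d = cos_dist(neg_queries, task_key.unsqueeze(0))   # distances to buffered samples
    hardest = neg_d.min()                                  # Eq. 2: argmin over the memory
    return torch.exp(cos_dist(q, task_key) + torch.clamp(1.0 - hardest, min=0.0))

def meta_key_loss(q, meta_keys, m_prime=5, eta=0.15, gamma=0.3):
    """Eq. 3: locality (pull the M' nearest meta keys toward q) plus
    diversity (push the selected keys apart from each other).
    Eq. 4 applies the same locality term with cluster centroids of buffered
    queries in place of q."""
    d = cos_dist(meta_keys, q.unsqueeze(0))                # (M,)
    sel = torch.topk(-d, m_prime).indices                  # indices of the M' nearest keys
    locality = torch.clamp(d[sel] - eta, min=0.0).sum()
    k = F.normalize(meta_keys[sel], dim=-1)
    pair_d = 1.0 - k @ k.t()                               # pairwise cosine distances
    mask = 1.0 - torch.eye(m_prime)                        # exclude i == j pairs
    diversity = (torch.clamp(gamma - pair_d, min=0.0) * mask).sum() / (m_prime ** 2)
    return locality + diversity

q = F.normalize(torch.randn(768), dim=-1)
task_key = torch.randn(768, requires_grad=True)
meta_keys = torch.randn(30, 768, requires_grad=True)
neg_queries = F.normalize(torch.randn(50, 768), dim=-1)    # queries of buffered samples
loss = task_key_loss(q, task_key, neg_queries) + meta_key_loss(q, meta_keys)
loss.backward()
```

The margins η = 0.15 and γ = 0.3 match the values reported in Section 4.3.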
## 3.4 Model Training

**Scheduled Sampling of Task Prompts** When training Diana, the task ID of each sample (*C, Q*) is given, so we can directly select the task prompt Pt(Ti). However, naively using ground-truth task IDs leads to an exposure bias issue, i.e., task IDs inferred in testing may not always be correct. In this study, we introduce a scheduled sampling process to tackle this exposure bias. Specifically, for a given sample (*C, Q, A*) in the k-th training step, we toss a coin and use the ground-truth task ID with probability ϵk, or use the task ID inferred from the task prompt keys with probability 1 − ϵk (Bengio et al., 2015). Note that when starting to learn each task, the prompt keys are not yet well optimized, and thus the inferred task ID is not accurate. Therefore, we set ϵk to favor the ground-truth task ID at the beginning (i.e., when k is small) and gradually switch to the inferred task ID as training proceeds (i.e., when k is large), i.e., a linear decrement of ϵk is scheduled:

$$\epsilon_{k}=\max(0,\alpha-k\beta),\tag{5}$$

in which α and β are scalar hyper-parameters.

Note that LL models may encounter another source of exposure bias since we may receive inputs from unseen tasks in the testing phase. In this study, we use the L extra prompts Pˆt(F1), · · · , Pˆt(FL) to explicitly model unseen tasks. Specifically, for each training sample (*C, Q, A*), we first determine its task format Fj based on (*C, Q*), and allocate a small probability of using Pˆt(Fj) as its task prompt in P(*C, Q*). In this way, Pˆt(Fj) captures general knowledge about all tasks of a given format, and we expect this knowledge to facilitate handling unseen tasks.

**Train with LM Loss** For each training sample (*C, Q, A*), we first construct the prompt P(*C, Q*) using the approaches introduced above, and then optimize P(*C, Q*) together with the encoder-decoder model gθ using the following LM loss:

$$\mathcal{L}_{LM}=-\log g_{\theta}(A|[P(C,Q);C;Q]).\tag{6}$$

The overall loss that we optimize for Diana is:

$$\mathcal{L}=\mathcal{L}_{m}+\mathcal{L}^{\prime}_{m}+\mathcal{L}_{t}+\mathcal{L}_{LM}.\tag{7}$$

After learning each task Ti, we select a small number of samples from Ti based on the query vector of each sample to update the memory M. This selection process aims to maintain diverse samples in M. More details are given in Appendix B. See the summarized training process in Algorithm 1.

## 3.5 Model Inference

When testing, we determine the prompt P(*C, Q*) for each input context C and question Q, and use the learned model gθ to predict the answer A.

**Adaptive Decision Boundaries (ADB)** are used to select proper task prompts in the testing phase. Specifically, for each task Ti, a scalar boundary δi is constructed following the approach proposed by Zhang et al. (2021). An input (*C, Q*) is regarded as a sample from unseen tasks if its query vector h(*C, Q*) falls outside the boundary of every task:

$$||h(C,Q),\mathbf{k}_{t}(T_{i})||>\delta_{i},\quad\forall i\in[1,N].\tag{8}$$

For samples from unseen tasks, we use the prompt Pˆt(Fj) as the task prompt in P(*C, Q*), where Fj is the format of (*C, Q*).

**Answer Prediction** is performed with a greedy decoding process:

$$A=\operatorname*{argmax}_{A^{\prime}}g_{\theta}(A^{\prime}|[P(C,Q);C;Q]).\tag{9}$$
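The sketch below illustrates the two decision rules above: the scheduled-sampling probability of Eq. 5 used during training, and the ADB rule of Eq. 8 used at test time to either pick a seen task's prompt or fall back to the unseen-task prompt of the detected format. The function names and the way boundaries are stored are our own assumptions; the actual boundaries are learned following Zhang et al. (2021).

```python
# A minimal sketch of task-prompt selection, assuming cosine distance and
# pre-computed task keys k_t(T_i) and boundaries delta_i. Names are ours.
import random
import torch
import torch.nn.functional as F

def use_gold_task_id(step: int, alpha: float = 0.9, beta: float = 3e-4) -> bool:
    """Eq. 5: epsilon_k = max(0, alpha - k * beta); favour the gold task ID early on."""
    eps = max(0.0, alpha - step * beta)
    return random.random() < eps

def select_task_prompt(query: torch.Tensor, task_keys: torch.Tensor,
                       boundaries: torch.Tensor, fmt_id: int, n_tasks: int) -> int:
    """Eq. 8: if the query falls outside every task's decision boundary,
    route the sample to the extra 'unseen-task' prompt of its format."""
    d = 1.0 - F.cosine_similarity(task_keys, query.unsqueeze(0), dim=-1)  # (N,)
    if torch.all(d > boundaries):
        return n_tasks + fmt_id          # index of \hat{P}_t(F_j)
    return int(torch.argmin(d))          # index of the closest seen task

query = F.normalize(torch.randn(768), dim=-1)
task_keys = F.normalize(torch.randn(8, 768), dim=-1)
boundaries = torch.full((8,), 0.35)      # a fixed value here; adaptive boundaries in the paper
print(select_task_prompt(query, task_keys, boundaries, fmt_id=1, n_tasks=8))
```

The same routine can be reused during training when the coin flip of Eq. 5 selects the inferred rather than the gold task ID.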
## 4 Experiments

## 4.1 Datasets

We use two sets of tasks to evaluate Diana:

1. decaNLP tasks: We follow Sun et al. (2019a) to select 5 tasks from decaNLP (McCann et al., 2018) to train Diana. These tasks cover 3 different formats: Span Extraction, Sequence Generation, and Text Classification. We also collect N′ = 3 additional tasks, one for each of these 3 formats, from decaNLP to serve as unseen tasks in the testing phase, i.e., our model is trained on N = 5 seen tasks while tested on 8 tasks.
2. QA tasks: The second set focuses on question answering (QA) benchmarks. Specifically, we use 8 QA datasets over 3 QA formats, i.e., Extractive QA, Abstractive QA, and Multiple-Choice QA, to train Diana. We also collect N′ = 3 additional QA datasets, one for each of these three formats, as unseen tasks, i.e., our model is trained on N = 8 seen tasks while tested on 11 tasks.

Note that task IDs are not available for any testing samples in our experiments. See Appendices C and J for more details of our dataset settings.

## 4.2 Evaluation Metrics

Individual tasks from the above two task sets are evaluated following McCann et al. (2018) and Zhong et al. (2022a), respectively (see Appendix C). To evaluate the LL performance of Diana, we build a performance matrix $R \in \mathbb{R}^{N\times(N+N')}$, where Ri,j is the model performance on task Tj after learning task Ti. The following LL metrics are computed:

1. Average Performance AN and AN′ is defined as the average performance of the final model on the N seen tasks and the N′ unseen tasks, respectively:

$$A_{N}=\frac{1}{N}\sum_{j=1}^{N}R_{N,j},\quad A_{N^{\prime}}=\frac{1}{N^{\prime}}\sum_{j=N+1}^{N+N^{\prime}}R_{N,j}.\tag{10}$$

2. Average Forget FN is defined as the average performance decrease of each task after it is learned:

$$F_{N}=\frac{1}{N-1}\sum_{j=1}^{N-1}\max_{i\in\{1,\cdots,N-1\}}(R_{i,j}-R_{N,j}).\tag{11}$$

In our experiments, we perform five runs with different random seeds and task orders. All reported metric scores are averages over these five runs. Ideally, we expect a strong LL model to yield high AN and AN′ scores and low FN scores.
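As a concrete reading of Eqs. 10-11, the snippet below computes AN, AN′, and FN from a performance matrix R; the toy numbers are ours and only illustrate the bookkeeping.

```python
# A small sketch of the LL metrics (Eqs. 10-11) computed from the performance
# matrix R, where R[i][j] is the score on task j after learning task i.
# The toy matrix below is illustrative, not taken from the paper.
import numpy as np

def ll_metrics(R: np.ndarray, n_seen: int):
    """Return (A_N, A_N', F_N) for an R of shape (n_seen, n_seen + n_unseen)."""
    a_seen = R[-1, :n_seen].mean()                       # Eq. 10, seen tasks
    a_unseen = R[-1, n_seen:].mean()                     # Eq. 10, unseen tasks
    # Eq. 11: for each earlier task, how much its best score dropped by the end.
    forgets = [R[:-1, j].max() - R[-1, j] for j in range(n_seen - 1)]
    return a_seen, a_unseen, float(np.mean(forgets))

R = np.array([
    [70.0, 0.0, 0.0, 30.0],   # after task 1 (last column: an unseen task)
    [65.0, 60.0, 0.0, 31.0],  # after task 2
    [63.0, 58.0, 55.0, 33.0], # after task 3
])
print(ll_metrics(R, n_seen=3))  # approx. (58.67, 33.0, 4.5)
```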
## 4.3 Implementation Details

We use T5-base (Raffel et al., 2020) to initialize our encoder-decoder model, and set the lengths of the soft prompts Pg, Pf, Pt, Pm to 20, 40, 40, 20, respectively. We maintain a total of M = 30 meta prompts, and for each sample (*C, Q*) we choose M′ = 5 meta prompts to construct Pm(*C, Q*). We use the AdamW (Loshchilov and Hutter, 2017) optimizer with a learning rate of 1e-4 and a batch size of 64. Each task is trained for five epochs. We set η = 0.15 and γ = 0.3 in Eq. 3, and α = 0.9 and β = 3e−4 in Eq. 5. We maintain 50 samples from each learned task in the memory M. All experiments are performed on 4 V100 GPUs, and the computational cost of our model is analyzed in Appendix G. See more details in Appendix A.

## 4.4 Baselines

We use the following competitive baselines covering all three types of LL models:

1. *Regularization-based methods*: **EWC** (Kirkpatrick et al., 2017) adopts the elastic weight consolidation approach to add regularization on parameter changes; **FLCB** (Gao et al., 2022) uses knowledge learned from previous tasks to guide future task learning.
2. *Rehearsal-based methods*: **ER** (Chaudhry et al., 2019b) replays memory samples from previous tasks to consolidate learned knowledge; **DER++** (Buzzega et al., 2020) augments ER with an L2 loss on the soft labels; **AFPER** (Mi et al., 2020) combines ER with an adaptive elastic weight consolidation mechanism.
3. *Architecture-based methods*: **AdapterCL** (Madotto et al., 2021a) allocates separate adapters for different tasks; **L2P** (Wang et al., 2022b) attaches a group of prompts to a pre-trained model to share fine-grained knowledge; **DualPrompt** (Wang et al., 2022a) uses different prompts to encode task-invariant and task-specific knowledge; **ProQA** (Zhong et al., 2022a) uses a unified structural prompt to implement LL models.

Note that ProQA is designed for task incremental learning and requires access to task IDs in the testing phase. We combine ProQA and ER to implement a stronger baseline, **ProQA+ER**, in which samples from previous tasks are replayed for the ProQA model, and we also implement a variant of Diana without the memory buffer (**Diana w/o M**). We further report the performance of sequentially fine-tuning the LL model on all tasks (**Finetune**) and of multi-task learning (**Multitask**). Note that the performance of Multitask is generally regarded as the upper bound of LL models when only seen tasks are considered. All the above baselines are implemented with the same settings as our model, including the same backbone PLM, prompt size, and memory size used for replay. Note that for the ProQA baseline, we follow its original setting and provide task IDs for testing samples during evaluation.

| Task ID in Test | Methods | Buffer Size | AN (QA) | FN (QA) | AN (decaNLP) | FN (decaNLP) |
|---|---|---|---|---|---|---|
| Yes | ProQA | 0 | 50.69 | 12.10 | 66.70 | 10.54 |
| Yes | ProQA+ER | 50 | 54.00 | 7.27 | 71.26 | 5.33 |
| No | Finetune | 0 | 46.81 | 15.47 | 57.92 | 18.41 |
| No | EWC | 0 | 47.81 | 14.55 | 63.17 | 13.58 |
| No | FLCB | 0 | 47.50 | 14.98 | 63.86 | 13.36 |
| No | AdapterCL | 0 | 48.08 | 13.29 | 64.25 | 12.38 |
| No | L2P | 0 | 48.15 | 13.89 | 63.76 | 13.47 |
| No | DualPrompt | 0 | 48.54 | 13.66 | 64.47 | 12.49 |
| No | ER | 50 | 51.30 | 10.72 | 68.17 | 7.42 |
| No | DER++ | 50 | 52.01 | 10.05 | 69.10 | 6.86 |
| No | AFPER | 50 | 52.69 | 9.28 | 69.78 | 6.17 |
| No | Diana w/o M | 0 | 50.30 | 12.68 | 66.14 | 10.61 |
| No | Diana | 50 | 55.93 | 6.75 | 72.70 | 4.25 |
| - | Multitask | - | 59.23 | - | 77.97 | - |

Table 1: Model performance on seen tasks of the two task sets.

## 4.5 Experiment Results

**Results on Seen Tasks** Table 1 shows the results on seen tasks from our two task sets. It can be seen that Diana outperforms all competitive baselines. Specifically, in the more general domain incremental learning scenario, i.e., when task IDs are unavailable in testing, Diana outperforms the best-performing baseline AFPER by a large margin. On QA tasks, Diana achieves a 6.15% relative improvement on the AN score and a 27.26% relative decrease on the FN score. A similar trend is also observed on decaNLP tasks. This means that Diana obtains higher performance with less forgetting in the LL process compared with other baselines. We can also observe that: (1) Diana even outperforms the ProQA+ER baseline, which leaks task IDs in testing.
This proves the superiority of our model design. (2) When task IDs are unavailable, Diana w/o M outperforms all baselines that do not use the memory buffer. This demonstrates that Diana's hierarchical prompts help to improve LL performance even without the memory buffer.

**Results on Unseen Tasks** Table 2 shows the results on unseen tasks from our two task sets. Note that we cannot compute the average forget score for unseen tasks since these tasks are never learned. Diana yields the best performance in all settings. It also achieves a relative improvement of 9.49% and 11.04% on the AN′ score compared with the best baseline DER++ on these two task sets. We can also observe that: (1) When M is unavailable, models that share knowledge through fine-grained components (i.e., Diana and L2P) generally obtain high performance, and our model, which allocates extra prompts for unseen tasks, achieves the best performance. This validates our approach of using hierarchical prompts to explicitly model unseen tasks. (2) It is interesting to see that Diana even outperforms Multitask, which is usually regarded as the upper bound of traditional LL models when only seen tasks are considered. This indicates that traditional LL models have limited generalization ability to unseen tasks, and it also shows that our model is effective in modeling unseen tasks. See Appendix D for detailed experimental results on all tasks.

| Task ID in Test | Methods | Buffer Size | AN′ (QA Tasks) | AN′ (decaNLP Tasks) |
|---|---|---|---|---|
| Yes | ProQA | 0 | 35.85 | 30.08 |
| Yes | ProQA+ER | 50 | 38.00 | 30.92 |
| No | Finetune | 0 | 35.51 | 28.08 |
| No | EWC | 0 | 36.07 | 29.76 |
| No | FLCB | 0 | 36.68 | 31.17 |
| No | AdapterCL | 0 | 36.84 | 30.32 |
| No | L2P | 0 | 37.60 | 31.19 |
| No | DualPrompt | 0 | 36.66 | 29.71 |
| No | ER | 50 | 37.80 | 30.05 |
| No | DER++ | 50 | 38.47 | 31.24 |
| No | AFPER | 50 | 36.79 | 30.22 |
| No | Diana w/o M | 0 | 39.22 | 33.19 |
| No | Diana | 50 | 42.12 | 34.69 |
| - | Multitask | - | 40.62 | 32.72 |

Table 2: Model performance (AN′) on unseen tasks of the two task sets.

## 4.6 Ablation Studies

We conduct ablation studies on different components of Diana. Specifically, three types of variants are implemented:

1. Each of these four prompt types is ablated: **w/o General Prompt**, **w/o Format Prompt**, **w/o Task Prompt**, **w/o Meta Prompt**.
2. Schemes to enhance task prompts are ablated: **w/o Sched. Sampling** removes the scheduled sampling scheme and only uses the ground-truth task IDs in training; **w/o G.T. Identity** is similar to the above variant but only uses predicted task IDs in training; **w/o Neg. Samples** only uses positive samples to train task prompt keys, i.e., the second term in Eq. 1 is removed; **w/o ADB** uses fixed decision boundaries instead of ADBs to detect unseen tasks.
3. Schemes to enhance meta prompts are ablated: **w/o Sample Dive.** does not enforce the diversity property of the meta prompt keys, i.e., the second term in Eq. 3 is removed; **w/o Memory Dive.** does not use samples from previous tasks to enhance the diversity property, i.e., the loss $\mathcal{L}'_m$ (Eq. 4) is removed; **w/o Loc.** does not enforce the locality property of the meta prompt keys, i.e., the first term in Eq. 3 is removed; **w/o Cluster** does not cluster samples in M, i.e., ck in Eq. 4 is replaced with the query vector of each sample from M.

Table 3 shows the performance of the above variants on QA tasks. It can be observed that Diana outperforms all the above variants.
We can also see that: (1) "w/o Meta Prompt" lowers the LL performance by a large margin. This indicates that these fine-grained meta prompts are important in building lifelong learning models. (2) The scheduled sampling scheme helps to learn better task prompts and thus improves the LL performance. (3) ADB improves model performance on unseen tasks (i.e., AN′) by a large margin. (4) Enforcing the diversity property of meta prompt keys is important to obtain good key representations and facilitates the learning of each task.

## 4.7 More Analysis

## 4.7.1 Task ID Detection Performance

Diana needs to detect the task IDs of input samples when determining the task prompt to be used. To verify the performance of the task ID detector implemented in Diana (Sections 3.3 and 3.5), we compare the approach used in Diana with other task ID detectors: (1) The perplexity-based detector implemented in the baseline "AdapterCL" determines task IDs based on the perplexity of the PLM when different adapter modules are activated. (2) The distance-based detector implemented in our variant "w/o Neg. Samples" determines the task identity based on the distance between each key and the query vectors. (3) The advanced distance-based detector implemented in our variant "w/o ADB" additionally utilizes negative samples on top of the above detector. Note that we do not apply ADB in the above two distance-based detectors. On our testing data, the above three approaches achieve task ID detection accuracies of 59.84%, 52.72%, and 63.43%, respectively, while Diana reaches a task ID detection accuracy of 66.97%. This verifies the effectiveness of our approach of optimizing task prompt keys for detecting task IDs. More detailed comparisons of these task ID detectors can be found in Appendix E.

## 4.7.2 Distribution of Meta Prompt Keys

We also analyze the distribution of the meta prompt keys $\mathcal{K}=\{\mathbf{k}_m^j\}_{j=1}^M$ constructed in Diana, which are expected to balance the locality and diversity properties. Specifically, we introduce two metrics to quantify these two properties. For the diversity property, we follow Mansoury et al. (2020) to measure whether these meta prompt keys cover the whole vector space:

$$Diversity=\Big|\bigcup_{j=1}^{M}\mathcal{N}_{Z}(\mathbf{k}_{m}^{j},\mathcal{M})\Big|/(Z\cdot M),\tag{12}$$

where $\mathcal{N}_Z(\mathbf{k}_m^j,\mathcal{M})$ represents the set of top-Z nearest samples in M around $\mathbf{k}_m^j$, and $|\cdot|$ returns the sample count of a set. High diversity scores are received if the meta prompt keys are scattered near every query vector from M. For the locality property, we follow Scellato et al. (2010) to measure whether there are keys clustered around each query vector q in M:

$$Locality=\sum_{q\in\mathcal{M}}\sum_{\mathbf{k}\in\mathcal{N}_{Z}(q,\mathcal{K})}(1-||\mathbf{q},\mathbf{k}||)/(Z\cdot|\mathcal{M}|).\tag{13}$$

High locality scores are received if the meta prompt keys in K are tightly clustered. On the QA tasks, we compare the above two metrics between Diana and our ablation variants for meta prompts under different values of Z. As can be seen from Table 4, the strategies we introduced in Diana (Section 3.3) help to enforce the locality and diversity properties of meta prompt keys.

| Criteria | Models | Z=2 | Z=3 | Z=5 | Z=10 |
|---|---|---|---|---|---|
| Locality | w/o Sample Dive. | 0.73 | 0.72 | 0.70 | 0.48 |
| Locality | w/o Memory Dive. | 0.74 | 0.72 | 0.69 | 0.63 |
| Locality | Diana | 0.74 | 0.73 | 0.70 | 0.66 |
| Diversity | w/o Sample Dive. | 0.63 | 0.61 | 0.59 | 0.40 |
| Diversity | w/o Memory Dive. | 1.00 | 0.89 | 0.77 | 0.53 |
| Diversity | Diana | 1.00 | 0.96 | 0.89 | 0.63 |

Table 4: Locality and diversity scores of meta prompt keys under different values of Z.
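To make Eqs. 12-13 concrete, here is a small sketch of how the two key-distribution metrics could be computed with cosine distance; the helper names and the random toy data are ours.

```python
# A sketch of the diversity (Eq. 12) and locality (Eq. 13) metrics over
# meta prompt keys K and buffered query vectors M, using cosine distance.
# Toy data and function names are illustrative.
import torch
import torch.nn.functional as F

def pairwise_cos_dist(a, b):
    return 1.0 - F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).t()

def diversity(keys, queries, Z=5):
    """Fraction of distinct buffered samples covered by the union of the
    Z nearest neighbours of all M keys (Eq. 12)."""
    d = pairwise_cos_dist(keys, queries)             # (M, |memory|)
    nn_idx = d.topk(Z, dim=1, largest=False).indices
    covered = torch.unique(nn_idx.flatten()).numel()
    return covered / (Z * keys.shape[0])

def locality(keys, queries, Z=5):
    """Average similarity between each query and its Z nearest keys (Eq. 13)."""
    d = pairwise_cos_dist(queries, keys)             # (|memory|, M)
    nn_d = d.topk(Z, dim=1, largest=False).values
    return (1.0 - nn_d).mean().item()

keys = torch.randn(30, 768)      # meta prompt keys K
queries = torch.randn(50, 768)   # query vectors of samples in the memory buffer
print(diversity(keys, queries), locality(keys, queries))
```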
| Categories | Variants | AN | FN | AN′ |
|---|---|---|---|---|
| Prompt Types | w/o General Prompt | 55.47 | 6.93 | 40.74 |
| Prompt Types | w/o Format Prompt | 55.11 | 7.03 | 40.59 |
| Prompt Types | w/o Task Prompt | 53.87 | 8.50 | 39.66 |
| Prompt Types | w/o Meta Prompt | 53.46 | 8.56 | 40.04 |
| Task prompt | w/o Sched. Sampling | 55.15 | 7.43 | 42.00 |
| Task prompt | w/o G.T. Identity | 54.16 | 7.61 | 41.27 |
| Task prompt | w/o Neg. Samples | 54.97 | 7.66 | 41.78 |
| Task prompt | w/o ADB | 55.48 | 6.98 | 41.01 |
| Meta prompt | w/o Sample Dive. | 55.24 | 6.91 | 41.23 |
| Meta prompt | w/o Memory Dive. | 55.02 | 7.41 | 41.48 |
| Meta prompt | w/o Loc. | 54.70 | 7.54 | 41.16 |
| Meta prompt | w/o Cluster | 55.46 | 6.99 | 41.51 |
| - | Diana | 55.93 | 6.75 | 42.12 |

Table 3: Performance of ablation variants on the QA task set.

## 5 Conclusion

We propose Diana, a novel LL model for the domain incremental learning scenario. Diana converts different NLP tasks into a unified sequence generation format and uses a prompt-enhanced PLM to learn these tasks. We introduce four types of hierarchically organized prompts in Diana to capture knowledge at different granularities. These prompts are shared between different scopes of samples and are dynamically combined based on a set of key vectors. The space of key vectors is learned with several distance-based regularization terms. Dedicated components are also allocated in Diana to model samples from unseen tasks. Experiments and empirical analysis on two sets of tasks show that Diana outperforms SOTA LL models, especially in handling samples from unseen tasks.

## Limitations

One major limitation of this study is its input modality. Specifically, our model is limited to textual inputs and ignores other modalities (e.g., vision and audio). Open and domain incremental lifelong learning across modalities is more realistic and challenging. Fortunately, we can obtain robust features of different modalities via multi-modal pre-training models (Xu et al., 2021; Huo et al., 2021). For future work, we will try to tackle multi-modal tasks in an open (including out-of-distribution data (Lang et al., 2022, 2023a,b)) and domain incremental lifelong learning scenario with better approaches.

## Ethics Statement

This work does not raise any direct ethical issues. In the proposed work, we seek to develop a model for domain incremental lifelong learning in an open world, and we believe this work leads to intellectual merits that benefit from a realistic and efficient lifelong learning model. All experiments are conducted on open datasets.

## References

Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. *Advances in neural information processing systems*, 28. Pietro Buzzega, Matteo Boschini, Angelo Porrello, Davide Abati, and SIMONE CALDERARA. 2020. Dark experience for general continual learning: a strong, simple baseline. In *Advances in Neural Information Processing Systems*, volume 33, pages 15920–15930. Curran Associates, Inc. Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. 2019a. Efficient lifelong learning with a-gem. In Proceedings of ICLR. Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K. Dokania, Philip H. S. Torr, and Marc'Aurelio Ranzato. 2019b. On tiny episodic memories in continual learning. Tianqi Chen, Ian Goodfellow, and Jonathon Shlens. 2016.
Net2Net: Accelerating learning via knowledge transfer. In *Proceedings of ICLR*. Pradeep Dasigi, Nelson F. Liu, Ana Marasovic, Noah A. ´ Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5925–5932, Hong Kong, China. Association for Computational Linguistics. Thomas G Dietterich. 2017. Steps toward robust artificial intelligence. *Ai Magazine*, 38(3):3–24. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368–2378, Minneapolis, Minnesota. Association for Computational Linguistics. Sayna Ebrahimi, Franziska Meier, Roberto Calandra, Trevor Darrell, and Marcus Rohrbach. 2020. Adversarial continual learning. In European Conference on Computer Vision, pages 386–402. Springer. Mehrdad Farajtabar, Navid Azizan, Alex Mott, and Ang Li. 2020. Orthogonal gradient descent for continual learning. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, volume 108 of *Proceedings of Machine* Learning Research, pages 3762–3773. PMLR. Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A. Rusu, Alexander Pritzel, and Daan Wierstra. 2017. Pathnet: Evolution channels gradient descent in super neural networks. Robert M French. 1999. Catastrophic forgetting in connectionist networks. *Trends in cognitive sciences*, 3(4):128–135. Jiaqi Gao, Jingqi Li, Hongming Shan, Yanyun Qu, James Z. Wang, and Junping Zhang. 2022. Forget less, count better: A domain-incremental selfdistillation learning benchmark for lifelong crowd counting. Luheng He, Mike Lewis, and Luke Zettlemoyer. 2015. Question-answer driven semantic role labeling: Using natural language to annotate natural language. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 643–653, Lisbon, Portugal. Association for Computational Linguistics. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In *Advances in Neural Information* Processing Systems, volume 28. Curran Associates, Inc. Yuqi Huo, Manli Zhang, Guangzhen Liu, Haoyu Lu, Yizhao Gao, Guoxing Yang, Jingyuan Wen, Heng Zhang, Baogui Xu, Weihao Zheng, Zongzheng Xi, Yueqian Yang, Anwen Hu, Jinming Zhao, Ruichen Li, Yida Zhao, Liang Zhang, Yuqing Song, Xin Hong, Wanqing Cui, Dan Yang Hou, Yingyan Li, Junyi Li, Peiyu Liu, Zheng Gong, Chuhao Jin, Yuchong Sun, Shizhe Chen, Zhiwu Lu, Zhicheng Dou, Qin Jin, Yanyan Lan, Wayne Xin Zhao, Ruihua Song, and Ji-Rong Wen. 2021. Wenlan: Bridging vision and language by large-scale multi-modal pre-training. CoRR, abs/2103.06561. Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UNIFIEDQA: Crossing format boundaries with a single QA system. In *Findings of the Association for Computational Linguistics:* EMNLP 2020, pages 1896–1907, Online. Association for Computational Linguistics. 
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. *Proceedings of NAS*, pages 3521–3526. Tomas Kocisky, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gabor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. *Transactions of the Association for Computational Linguistics*, 6(0):317–328. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7(0):452–466. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAding comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785– 794, Copenhagen, Denmark. Association for Computational Linguistics. Hao Lang, Yinhe Zheng, Binyuan Hui, Fei Huang, and Yongbin Li. 2023a. Out-of-domain intent detection considering multi-turn dialogue contexts. *arXiv* preprint arXiv:2305.03237. Hao Lang, Yinhe Zheng, Yixuan Li, Jian Sun, Fei Huang, and Yongbin Li. 2023b. A survey on outof-distribution detection in nlp. *arXiv preprint* arXiv:2305.03236. Hao Lang, Yinhe Zheng, Jian Sun, Fei Huang, Luo Si, and Yongbin Li. 2022. Estimating soft labels for out-of-domain intent detection. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 261–276, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 333–342, Vancouver, Canada. Association for Computational Linguistics. Zhizhong Li and Derek Hoiem. 2017. Learning without forgetting. *TPAMI*, 40(12):2935–2947. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021. Gpt understands, too. Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. Andrea Madotto, Zhaojiang Lin, Zhenpeng Zhou, Seungwhan Moon, Paul Crook, Bing Liu, Zhou Yu, Eunjoon Cho, Pascale Fung, and Zhiguang Wang. 2021a. Continual learning in task-oriented dialogue systems. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 7452–7467, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Andrea Madotto, Zhaojiang Lin, Zhenpeng Zhou, Seungwhan Moon, Paul A Crook, Bing Liu, Zhou Yu, Eunjoon Cho, Pascale Fung, and Zhiguang Wang. 2021b. 
Continual learning in task-oriented dialogue systems. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 7452–7467. Arun Mallya and Svetlana Lazebnik. 2018. Packnet: Adding multiple tasks to a single network by iterative pruning. In *Proceedings of the IEEE conference* on Computer Vision and Pattern Recognition, pages 7765–7773. Davide Maltoni and Vincenzo Lomonaco. 2019. Continuous learning in single-incremental-task scenarios. Neural Networks, 116:56–73. Massimiliano Mancini, Elisa Ricci, Barbara Caputo, and Samuel Rota Bulo. 2018. Adding new tasks to a single network with weight transformations using binary masks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, pages 0–0. Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, and Robin Burke. 2020. Fairmatch: A graph-based approach for improving aggregate diversity in recommender systems. In *Proceedings of the 28th ACM Conference on User* Modeling, Adaptation and Personalization, UMAP '20, page 154–162, New York, NY, USA. Association for Computing Machinery. Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2019. The natural language decathlon: Multitask learning as question answering. Fei Mi, Liangwei Chen, Mengjie Zhao, Minlie Huang, and Boi Faltings. 2020. Continual learning for natural language generation in task-oriented dialog systems. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 3461–3474, Online. Association for Computational Linguistics. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381–2391, Brussels, Belgium. Association for Computational Linguistics. Nan Pu, Wei Chen, Yu Liu, Erwin M Bakker, and Michael S Lew. 2021. Lifelong person reidentification via adaptive knowledge accumulation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 7901– 7910. Chengwei Qin and Shafiq Joty. 2022. LFPT5: A unified framework for lifelong few-shot language learning based on prompt tuning of t5. In *International Conference on Learning Representations*. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21:1– 67. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. 2017. iCaRL: Incremental classifier and representation learning. In Proceedings of CVPR, pages 2001–2010. Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 193–203, Seattle, Washington, USA. 
Association for Computational Linguistics. Hippolyt Ritter, Aleksandar Botev, and David Barber. 2018. Online structured laplace approximations for overcoming catastrophic forgetting. In Proceedings of NIPS, pages 3738–3748. Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. 2016. Progressive neural networks. Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207. Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4463– 4473, Hong Kong, China. Association for Computational Linguistics. Salvatore Scellato, Cecilia Mascolo, Mirco Musolesi, and Vito Latora. 2010. Distance matters: geo-social metrics for online social networks. In *3rd Workshop* on Online Social Networks (WOSN 2010). Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. 2017. Continual learning with deep generative replay. In *Proceedings of NIPS*, pages 2990–2999. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing*, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Fan-Keng Sun, Cheng-Hao Ho, and Hung-Yi Lee. 2019a. Lamol: Language modeling for lifelong language learning. Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019b. Dream: A challenge data set and models for dialogue-based reading comprehension. *Transactions of the Association for Computational Linguistics*, 7:217–231. Sebastian Thrun and Tom M Mitchell. 1995. Lifelong robot learning. *Robotics and autonomous systems*, 15(1-2):25–46. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191–200, Vancouver, Canada. Association for Computational Linguistics. Gido M. van de Ven and Andreas S. Tolias. 2019. Three scenarios for continual learning. Gido M van de Ven, Tinne Tuytelaars, and Andreas S Tolias. 2022. Three types of incremental learning. Nature Machine Intelligence, 4(12):1185–1197. Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou', and Daniel Cer. 2022. SPoT: Better frozen model adaptation through soft prompt transfer. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), pages 5039–5059, Dublin, Ireland. Association for Computational Linguistics. Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, Chen-Yu Lee, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, and Tomas Pfister. 2022a. Dualprompt: Complementary prompting for rehearsal-free continual learning. Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, and Tomas Pfister. 2022b. Learning to prompt for continual learning. 
In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 139–149. Tsung-Hsien Wen, David Vandyke, Nikola Mrkšic, Mil- ´ ica Gašic, Lina M. Rojas-Barahona, Pei-Hao Su, Ste- ´ fan Ultes, and Steve Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 438–449, Valencia, Spain. Association for Computational Linguistics. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Chayut Wiwatcharakoses and Daniel Berrar. 2020. Soinn+, a self-organizing incremental neural network for unsupervised learning from noisy data streams. Expert Systems with Applications, 143:113069. Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, Jason Yosinski, and Ali Farhadi. 2020. Supermasks in superposition. In *Advances in Neural Information Processing Systems*, volume 33, pages 15173–15184. Curran Associates, Inc. Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I Wang, et al. 2022. Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models. *arXiv preprint arXiv:2201.05966*. Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, and Lidong Zhou. 2021. LayoutLMv2: Multi-modal pre-training for visually-rich document understanding. In *Proceedings of the 59th Annual Meeting of the Association for* Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2579–2591, Online. Association for Computational Linguistics. Hanrong Ye, Hong Liu, Fanyang Meng, and Xia Li. 2021. Bi-directional exponential angular triplet loss for rgb-infrared person re-identification. *IEEE Transactions on Image Processing*, 30:1583–1595. Friedemann Zenke, Ben Poole, and Surya Ganguli. 2017. Continual learning through synaptic intelligence. In *Proceedings of ICML*, pages 3987–3995. Hanlei Zhang, Hua Xu, and Ting-En Lin. 2021. Deep open intent classification with adaptive decision boundary. In *AAAI*, pages 14374–14382. Junting Zhang, Jie Zhang, Shalini Ghosh, Dawei Li, Serafettin Tasci, Larry Heck, Heming Zhang, and C-C Jay Kuo. 2020. Class-incremental learning via deep model consolidation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1131–1140. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. *arXiv* preprint arXiv:1709.00103. Wanjun Zhong, Yifan Gao, Ning Ding, Yujia Qin, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, and Nan Duan. 2022a. Proqa: Structural prompt-based pre-training for unified question answering. Wanjun Zhong, Yifan Gao, Ning Ding, Yujia Qin, Zhiyuan Liu, Ming Zhou, Jiahai Wang, Jian Yin, and Nan Duan. 2022b. ProQA: Structural promptbased pre-training for unified question answering. 
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4230–4243, Seattle, United States. Association for Computational Linguistics. ## A More Implementation Details We use T5-base (Raffel et al., 2020) to initialize our encoder-decoder model (12 layers, 768 dimensional hidden size, and 12 attention heads), and set the lengths of soft prompts Pg,Pf ,Pt,Pm to 20, 40, 40, 20, respectively. We use a fixed T5-base encoder with an average pooling layer to obtain the query vector. We maintain a pool of M = 30 meta prompts, and for each sample (*C, Q*) we choose M′ = 5 meta prompts to construct Pm(*C, Q*). We use the AdamW (Loshchilov and Hutter, 2017) optimizer for training. All hyperparameters are tuned according to the average score on validation datasets of NarQA, RACE, OBQA, SIQA and Dream. We tried epoch number of {2, 3, 4, 5, 6, 7, 8} and learning rate of {1e−5, 5e− 5, 1e − 4, 5e − 4, 1e − 3}. We finally set the learning rate to 1e-4 and the number of training epochs to 5. We set η = 0.15 and γ = 0.3 in Eq. 3 and α = 0.9 and β = 3e − 4 in Eq. 5. For η and γ, we have a grid search between 0 and 0.5 with an interval of 0.05. For α and β, α is searched among {0.9, 0.7, 0.5}, while β is searched among {1e − 5, 3e − 5, 1e − 4, 3e − 4, 1e − 3}. All experiments are performed on 4 V100 GPUs (32GB). The batch size is set to 64. In each set of tasks, We perform 5 runs with different task orders by setting the random seed to {42, 43, 44, 45, 46} respectively. In this way, we report the average score of each method. Note that we only use the random seed 42 for tuning hyper-parameters. In order to train extra task prompts {Pˆt(F1), *· · ·* , Pˆt(FL)} for unseen tasks, we allocate a small probability ω = 5% for each training sample (*C, Q, A*) to use Pˆt(Fj ) as its task prompt in P(*C, Q*), where Fj is the task format of (*C, Q, A*). To implement variant "w/o ADB" for ablation study, we use a fixed decision boundary instead of ADB. If for any task Ti, the distance ||h(*C, Q*), kt(Ti)|| > 0.35, we regard the sample is from unseen tasks. The adaptive decision boundary for each task is determined following the approach proposed by Zhang et al. (2021). We use AdamW optimizer with a learning rate of 0.02 to learn each decision boundary. To obtain the ROUGE-L score, we use the NLTK package for sentence tokenization, and python rouge-score package for evaluation. ## B Memory Update After learning task Ti, we select E diverse samples (we set E = 50 in our experiments) from Tito update the memory M based on the query vector of each sample. Specifically, our selection criteria are built based on the distance of these prompt keys and query vectors. For each meta prompt key k jm (j = 1, · · · , M), we select top-⌈ E M ⌉ samples (⌈·⌉ is the ceiling function), whose query vectors are closest to k jm. After accumulating M⌈ E M ⌉ memory samples selected by M meta prompt keys, we rank these samples based on their distance to the corresponding meta prompt keys, and choose top-E samples with the smallest distance to be fed into M. In this way, the memory M we constructed can expand to the whole space of prompt keys. Note that, the memory buffer M is optional in Diana. Without M, the loss in Eq. 4 is not optimized, and the second term in Eq. 1 is removed. 
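The diversity-oriented memory selection described in Appendix B can be sketched as follows: each of the M meta prompt keys nominates its top-⌈E/M⌉ closest samples, and the E globally closest nominations are kept. The function below is our own illustrative rendering of that procedure, not the released code.

```python
# A sketch of the memory update in Appendix B. Names and data layout are ours.
import math
import torch
import torch.nn.functional as F

def select_memory(queries: torch.Tensor, meta_keys: torch.Tensor, E: int = 50):
    """queries: (n_samples, d) query vectors of the just-finished task.
    meta_keys: (M, d). Returns indices of the E samples kept in the buffer."""
    d = 1.0 - F.normalize(meta_keys, dim=-1) @ F.normalize(queries, dim=-1).t()  # (M, n)
    per_key = math.ceil(E / meta_keys.shape[0])
    nominated, distances = [], []
    for j in range(meta_keys.shape[0]):
        vals, idx = torch.topk(d[j], per_key, largest=False)  # closest samples to key j
        nominated.append(idx)
        distances.append(vals)
    nominated = torch.cat(nominated)
    distances = torch.cat(distances)
    keep = torch.topk(distances, min(E, len(distances)), largest=False).indices
    return nominated[keep].tolist()

queries = torch.randn(1000, 768)
meta_keys = torch.randn(30, 768)
print(len(select_memory(queries, meta_keys)))  # 50
```

Because nominations follow the meta prompt keys, the retained samples spread over the whole key space rather than clustering around a single region.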
## C Detailed Dataset Setting And Evaluation Metrics

For the decaNLP task set, 8 benchmarks over 3 formats are covered, i.e., (1) *Span Extraction*, including **SQuAD** (Rajpurkar et al., 2016), **QA-ZRE** (Levy et al., 2017), and **QA-SRL** (He et al., 2015); (2) *Sequence Generation*, including **WOZ** (Wen et al., 2017), **WikiSQL** (Zhong et al., 2017), and **CNN/DM** (Hermann et al., 2015); (3) *Text Classification*, including **SST** (Socher et al., 2013) and **MNLI** (Williams et al., 2018). For the QA task set, 11 QA benchmarks over 3 QA formats are covered, i.e., (1) *Extractive QA*, including **SQuAD** (Rajpurkar et al., 2016), **NewsQA** (Trischler et al., 2017), and **Quoref** (Dasigi et al., 2019); (2) *Abstractive QA*, including **NarQA** (Kocisky et al., 2018), **NQOpen** (Kwiatkowski et al., 2019), and **Drop** (Dua et al., 2019); (3) *Multiple-Choice QA*, including **RACE** (Lai et al., 2017), **OBQA** (Mihaylov et al., 2018), **MCTest** (Richardson et al., 2013), **SIQA** (Sap et al., 2019), and **Dream** (Sun et al., 2019b). The statistics of the above datasets are summarized in Table 5. We follow the preprocessing scheme released by Khashabi et al. (2020) to tackle these datasets. Some of these datasets do not contain a validation set, thus we only use the validation sets of NarQA, RACE, OBQA, SIQA and Dream in the QA task set to search hyperparameters.

| Task set | Dataset | Train set size | Val set size | Test set size |
|---|---|---|---|---|
| decaNLP | SQuAD | 87k | - | 10k |
| decaNLP | QA-ZRE | - | - | 12k |
| decaNLP | QA-SRL | 6.4k | - | 2.2k |
| decaNLP | WikiSQL | 56k | - | 15k |
| decaNLP | WOZ | 2.5k | - | 1.6k |
| decaNLP | CNN/DM | - | - | 11k |
| decaNLP | SST | 6.9k | - | 1.8k |
| decaNLP | MNLI | - | - | 20k |
| QA | SQuAD | 87k | - | 10k |
| QA | NewsQA | 76k | - | 4.3k |
| QA | Quoref | - | - | 2.7k |
| QA | NarQA | 65k | 6.9k | 21k |
| QA | NQOpen | 9.6k | - | 10k |
| QA | Drop | - | - | 9.5k |
| QA | RACE | 87k | 4.8k | 4.9k |
| QA | OBQA | 4.9k | 500 | 500 |
| QA | MCTest | 1.4k | - | 320 |
| QA | SIQA | 33k | 1.9k | 2.2k |
| QA | Dream | - | 2.0k | 2.0k |

Table 5: Statistics of the datasets in the two task sets.

The evaluation of each single task follows McCann et al. (2018) and Zhong et al. (2022b). Among the decaNLP tasks, we compute the F1 score for QA-SRL and QA-ZRE, the Exact Match (EM) score for SQuAD, MNLI and SST, and ROUGE-L for CNN/DM. For WOZ, we adopt turn-based dialogue state exact match (dsEM). For WikiSQL, we use exact match of logical forms (lfEM). For the QA task set, we compute the accuracy of option selection for all Multiple-Choice QA tasks and use the EM score for all Extractive QA tasks. Among Abstractive QA tasks, we use the F1 score for Drop and NQOpen, and ROUGE-L (Lin, 2004) for NarQA.

## D Detailed Experimental Results

We provide the detailed performance of Diana on each single task compared with competitive baselines. The results on the five seen tasks of the decaNLP task set and the eight seen tasks of the QA task set are shown in Table 6 and Table 7. The results on unseen tasks for the decaNLP task set and the QA task set are shown in Table 8 and Table 9.

## E More Analysis Of Task Identity Detection Performance

Architecture-based LL models need to detect task identities of input samples when these identities are unavailable in the testing phase.
To verify the performance of the task identity detector implemented in Diana, we compare our approach with other task identity detectors: (1) the Perplexity-based detector implemented in the baseline "AdapterCL", which determines task identities based on the perplexity of the PLM when different adapter modules are activated; (2) the Distance-based detector implemented in our variant "w/o Neg. Samples", which determines the task identity based on the distances between each key and the query vectors; (3) the Advanced distance-based detector implemented in our variant "w/o ADB", which builds on the above detector and additionally utilizes negative samples. Note that we do not apply ADB in the above two distance-based detectors. The above approaches are trained and evaluated on the QA tasks under two scenarios: (1) In the **Closed-world** scenario, detectors are only required to detect samples from seen tasks. Note that in this setting, the Advanced distance-based detector used in "w/o ADB" is the same as the task identity detector implemented in Diana. (2) In the **Open-world** scenario, detectors are required to handle unseen task samples as well. When tested in the open-world scenario, the two distance-based detectors adopt a fixed decision boundary of 0.35 (see Appendix A). The perplexity-based detector adopts a perplexity threshold of 4, i.e., samples with a perplexity score above 4 are regarded as unseen task samples. This perplexity threshold is selected based on the model performance on the validation set. We report the task identity detection accuracy and Macro F1 scores for seen samples and unseen samples separately in Table 10. We can observe that: (1) The task identity detector used in Diana achieves the best performance in both scenarios, which proves the effectiveness of our task prompt keys in detecting task identities. (2) The negative samples used in the Advanced distance-based detector significantly improve the task identity detection performance on seen tasks. (3) ADB is effective in improving the task identity detection performance on unseen tasks.

## F More Analysis Of Scheduled Sampling

We perform a more detailed analysis of the scheduled sampling scheme introduced in Diana. Specifically, in the ablation variant "w/o G.T. Identity", the model only uses predicted task identities in training. This scheme helps to alleviate the discrepancy between training and testing, at the cost of the model's convergence speed. In the ablation variant "w/o Sched. Sampling", the model only uses golden truth task identities in the training process. This scheme leads to a discrepancy between training and testing. Both schemes under-perform our model Diana.
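To make the difference between these schemes concrete, the sketch below shows how a task prompt could be chosen for one training sample, following Appendix A and Algorithm 1 (ω routes a small fraction of samples to the extra format-level prompts, and ϵk is the scheduled-sampling probability of trusting the golden identity). The names are illustrative rather than taken from the released code.

```python
import random

def choose_task_prompt(query_vec, task_keys, task_prompts, golden_task,
                       unseen_prompts, sample_format, epsilon_k, omega=0.05):
    """Select the task prompt used to build P(C, Q) for one training sample."""
    if random.random() < omega:
        # Route the sample to the extra format-level prompt trained for unseen tasks.
        return unseen_prompts[sample_format]
    if random.random() < epsilon_k:
        # Use the golden-truth task identity ("w/o Sched. Sampling" always takes this branch).
        return task_prompts[golden_task]
    # Otherwise infer the identity from the nearest task prompt key, as done at
    # test time ("w/o G.T. Identity" always takes this branch).
    dist = {t: sum((q - k) ** 2 for q, k in zip(query_vec, key)) ** 0.5
            for t, key in task_keys.items()}
    return task_prompts[min(dist, key=dist.get)]
```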
In this section, we analyze the task identity detection accuracy yield by the above schemes in | Task-ID | Methods | Buffer | RN,j | AN | FN | | | | | |-------------|-----------|----------|---------|-------|--------|-------|-------|-------|-------| | in Test | Size | SQuAD | WikiSQL | SST | QA-SRL | WOZ | | | | | Available | ProQA | 0 | 71.09 | 37.39 | 92.16 | 75.68 | 57.17 | 66.70 | 10.54 | | ProQA+ER | 50 | 75.57 | 50.98 | 91.67 | 76.74 | 61.33 | 71.26 | 5.33 | | | Finetune | 0 | 68.09 | 19.70 | 90.45 | 69.43 | 41.91 | 57.92 | 18.41 | | | EWC | 0 | 70.57 | 35.97 | 89.79 | 71.19 | 48.34 | 63.17 | 13.58 | | | FLCB | 0 | 70.96 | 33.35 | 90.03 | 74.71 | 50.23 | 63.86 | 13.36 | | | AdapterCL | 0 | 71.82 | 35.14 | 90.95 | 72.83 | 50.53 | 64.25 | 12.38 | | | L2P | 0 | 70.18 | 34.62 | 90.39 | 72.57 | 51.02 | 63.76 | 13.47 | | | DualPrompt | 0 | 70.99 | 35.33 | 90.91 | 73.92 | 51.18 | 64.47 | 12.49 | | | ER | 50 | 73.65 | 47.96 | 92.20 | 74.17 | 52.88 | 68.17 | 7.42 | | | DER++ | 50 | 74.18 | 49.27 | 92.34 | 75.11 | 54.61 | 69.10 | 6.86 | | | AFPER | 50 | 75.27 | 48.90 | 91.56 | 76.34 | 56.82 | 69.78 | 6.17 | | | Diana w/o M | 0 | 71.94 | 36.25 | 91.03 | 74.59 | 56.90 | 66.14 | 10.61 | | | Diana | 50 | 76.93 | 51.09 | 92.74 | 77.69 | 65.06 | 72.70 | 4.25 | | | Multitask | - | 79.68 | 53.65 | 93.59 | 80.38 | 82.57 | 77.97 | - | | | Unavailable | | | | | | | | | | Task-ID Methods Buffer RN,j AN FN in Test Size SQuAD NewsQA NarQA NQOpen RACE OBQA MCTest SIQA Available ProQA 0 67.66 38.73 37.96 37.72 53.75 43.73 68.27 57.73 50.69 12.10 ProQA+ER 50 71.20 40.17 41.94 39.00 57.09 47.00 77.94 57.67 54.00 7.27 Finetune 0 57.58 35.84 33.74 34.49 50.28 42.20 65.67 54.72 46.81 15.47 EWC 0 59.84 36.44 34.88 35.14 50.54 43.43 66.52 55.68 47.81 14.55 FLCB 0 58.73 36.97 34.27 34.90 51.63 41.53 66.60 55.39 47.50 14.98 AdapterCL 0 59.64 37.31 37.42 36.70 49.57 41.80 66.67 55.54 48.08 13.29 L2P 0 62.98 36.23 35.79 36.49 49.00 41.93 66.98 55.77 48.15 13.89 DualPrompt 0 62.60 36.36 34.35 36.53 52.10 42.67 67.57 56.26 48.54 13.66 ER 50 65.08 38.72 39.07 36.48 55.90 43.53 74.31 57.29 51.30 10.72 DER++ 50 67.08 39.03 39.91 36.93 56.42 44.13 74.77 57.77 52.01 10.05 AFPER 50 68.14 40.79 40.16 38.89 55.08 46.60 75.33 56.52 52.69 9.28 Diana w/o M 0 65.51 37.78 37.35 37.41 54.14 46.27 68.50 57.41 50.30 12.68 Diana 50 74.44 42.91 43.16 40.05 59.08 48.47 78.44 60.92 55.93 **6.75** Multitask - 80.22 44.74 47.30 41.72 64.05 51.00 83.44 61.41 59.23 - | Unavailable | |---------------| Table 7: Model performance on seen QA tasks. Best results (except the upper bound Multitask) are bold. Our model Diana significantly outperforms other baselines on all metrics with p-value<0.05 (t-test). Methods Buffer RN,j AN′ Size CNN/DM QA-ZRE MNLI ProQA 0 13.25 37.58 39.42 30.08 ProQA+ER 50 14.18 38.42 40.17 30.92 Finetune 0 10.61 36.50 37.12 28.08 EWC 0 11.78 37.62 39.88 29.76 FLCB 0 12.98 40.02 40.52 31.17 AdapterCL 0 13.23 37.88 39.84 30.32 L2P 0 13.09 40.16 40.31 31.19 DualPrompt 0 12.92 37.04 39.18 29.71 ER 50 13.04 38.06 39.04 30.05 DER++ 50 14.67 39.74 39.32 31.24 AFPER 50 12.14 38.66 39.85 30.22 Diana w/o M 0 14.94 43.95 40.69 33.19 Diana 50 15.80 44.74 43.53 **34.69** Multitask - 15.98 42.12 40.07 32.72 Methods Buffer RN,j AN′ Size Quoref Drop Dream ProQA 0 33.40 18.29 55.85 35.85 ProQA+ER 50 35.87 19.78 58.35 38.00 Finetune 0 33.08. 
18.10 55.36 35.51 EWC 0 33.43 18.14 56.65 36.07 FLCB 0 34.85 18.31 56.88 36.68 AdapterCL 0 35.47 17.83 57.21 36.84 L2P 0 36.22 19.18 57.40 37.60 DualPrompt 0 35.22 18.52 56.25 36.66 ER 50 35.14 18.56 59.71 37.80 DER++ 50 36.15 19.08 60.17 38.47 AFPER 50 35.26 18.83 56.29 36.79 Diana w/o M 0 37.95 20.32 59.39 39.22 Diana 50 40.42 22.91 63.03 **42.12** Multitask - 36.27 22.99 62.60 40.62

Figure 4 when learning the last task TN in the input task sequence of the QA task set. We can observe that the task identity detection accuracy achieved by "w/o G.T. Identity" is extremely low in earlier iterations, which hinders task prompts from sharing task-specific knowledge in the early training stage. The scheduled sampling process introduced in Diana effectively compromises between detecting correct task identities and alleviating the train-test discrepancy, and thus it results in the best LL performance among these variants. Note that the task identity detection accuracy in "w/o Sched. Sampling" is almost zero in the first 1,000 iterations when learning task TN. This is because the task prompt keys for the previous N − 1 tasks are already well learned. The randomly initialized prompt key for task TN needs to be pulled to the query vector space before starting to be functional.

| Scenario | Methods | Seen F1 | Seen Accuracy | Unseen F1 | Unseen Accuracy | Overall F1 | Overall Accuracy |
|----------|---------|---------|---------------|-----------|-----------------|------------|------------------|
| Closed-world | Perplexity-based | 44.92 | 52.20 | - | - | 44.92 | 52.20 |
| Closed-world | Distance-based | 43.18 | 63.34 | - | - | 43.18 | 63.34 |
| Closed-world | Advanced distance-based | 54.37 | 75.35 | - | - | 54.37 | 75.15 |
| Open-world | Perplexity-based | 33.15 | 58.64 | 26.14 | 62.98 | 32.37 | 59.84 |
| Open-world | Distance-based | 38.51 | 50.53 | 21.98 | 58.48 | 36.67 | 52.72 |
| Open-world | Advanced distance-based | 44.12 | 64.86 | 24.17 | 59.67 | 41.90 | 63.43 |
| Open-world | Diana | 47.06 | 68.81 | 35.70 | 62.16 | 45.80 | 66.97 |

Table 10: Task identity detection performance (Macro F1 and accuracy) for seen and unseen samples.

## G More Analysis Of Computational Cost

We analyze the computational cost of Diana when learning the QA tasks, including the number of tunable parameters, the time used for training and testing, and the size of the memory retained from previous tasks. As indicated in Table 11, Diana does not introduce too much computation overhead.

| Methods | Tunable Parameters | Memory Size | Train Time Per Batch | Test Time All Tasks |
|---------|--------------------|-------------|----------------------|---------------------|
| Lower Bound | 222.90M | 0 | 0.55 | 523 |
| EWC | 222.90M | 0 | 0.93 | 596 |
| FLCB | 222.90M | 0 | 0.59 | 591 |
| AdapterCL | 262.25M | 0 | 0.73 | 5852 |
| L2P | 223.39M | 0 | 1.01 | 1013 |
| DualPrompt | 223.17M | 0 | 0.93 | 1147 |
| ER | 222.90M | 50 | 0.58 | 541 |
| DER++ | 222.90M | 50 | 0.68 | 604 |
| AFPER | 222.90M | 50 | 0.95 | 630 |
| ProQA | 223.43M | 0 | 0.86 | 863 |
| Diana | 223.84M | 50 | 1.05 | 1108 |
| Diana w/o M | 223.84M | 0 | 0.97 | 1123 |

Table 11: Computational cost of each method on the QA task set.

## H Effect Of PLM Size

| PLM Size | Method | AN | FN | AN′ |
|------------|----------|-------|-------|-------|
| T5-small | DER++ | 41.78 | 15.69 | 26.62 |
| T5-small | Diana | 46.50 | 10.42 | 31.95 |
| T5-base | DER++ | 52.01 | 10.05 | 38.47 |
| T5-base | Diana | 55.93 | 6.75 | 42.12 |
| T5-large | DER++ | 59.97 | 9.50 | 46.71 |
| T5-large | Diana | 64.19 | 6.85 | 51.28 |

Table 12: Performance with different sized PLMs on QA tasks.

We evaluate Diana and the best-performing baseline DER++ on different sized PLMs using the QA datasets. As shown in Table 12, Diana obtains better performance with larger PLM sizes and consistently outperforms the baseline.
## I Analysis Of Training Method During training, we follow a full tuning scheme that updates parameters of the backbone language models (T5) along with prompts. We also investigate the performance of prompt tuning, which fixes the backbone language model and only updates the prompts. As indicated in Table 13, prompt tuning dramatically degenerates the performance of Diana. ## J Cases We list some samples for tasks we modeled from the decaNLP task set and the QA task set respectively, shown in Table 14 and Table 15. ## K Training Process Details about the training process of Diana are shown in Algorithm 1. | Format | Dataset | Case Context: (Private_school) Private schooling in the United States has been... | |---------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------| | SQuAD | Question: In what year did Massachusetts first require children to be educated in schools? Answer: 1852 | | | Span Extraction | Context:the race is in mixed eights , and usually held in late february / early march. Question:when is something held ? | | | QA-SRL | Answer:in late february / early march Context:travis hamonic ( born august 16 , 1990 ) is a canadian professional ice hockey... | | | QA-ZRE | Question:what team does travis hamonic belong to ? Answer:new york islanders Context:( cnn ) governments around the world are using the threat of terrorism... | | | CNN/DM | Question:what is the summary ? Answer:amnesty ' s annual death penalty report catalogs encouraging signs... Context:what is the phone number and postcode of a cheap restaurant in the east part of town ?... | | | WOZ | Question:what is the change in state ? Answer:price range : cheap , area : east ; phone , postcode | | | Sequence Generation | Context:the table has columns player , no . , nationality , position , years in toronto... | | | WikiSQL | Question:what is the translation from english to sql ? Answer:select nationality from table where player = terrence ross Context:no movement , no yuks , not much of anything . Question:is this review negative or positive ? | | | SST | Answer: negative | | | Text Classification | Context:premise:yeah i i think my favorite restaurant is always been the one closest you... Question:hypothesis:i like him for the most part , but would still enjoy seeing someone beat him. | | | MNLI | - - entailment , neutral , or contradiction ? Answer: entailment | | | Format | Dataset | Case Context: (Private_school) Private schooling in the United States has been... | |--------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------| | SQuAD | Question: In what year did Massachusetts first require children to be educated in schools? Answer: 1852 Context:ABECHE, Chad (CNN) - Most of the 103 children that a French charity... | | | NewsQA | Question:WHO ARE UNDER ARREST IN CHAD? 
Answer:Three French journalists, a seven-member Spanish flight crew and one Belgian | | | Extractive | Context:(Blast of Silence) Frankie Bono, a mentally disturbed hitman from Cleveland... | | | Quoref | Question:What is the first name of the person who follows their target to select...? Answer:Frankie Context:The play begins with three pages disputing over the black cloak usually worn by the actor... Question:WHO NORMALLY DELIVERS THE OPENING PROLOGUE IN THE PLAY? | | | NarQA | Answer:THE ACTOR WEARING THE BLACK CLOAK Context:- cartilage - cartilage cartilage is a resilient and smooth elastic tissue , a rubber... | | | NQOpen | Question:where is each type of cartilage located in the body? Answer:many other body components | | | Abstractive | Context:Hoping to rebound from their loss to the Patriots, the Raiders stayed at home for a Week... | | | Drop | Question:How many field goals did both teams kick in the first half? Answer:2 Context:It's cool, and it's hot, and everyone is doing it. People talk about it often, and friends... Question:A blogger is a person _ . | | | RACE | (A) who teaches kids bad words (B) who posts songs from the latest bands (C) who got drunk last weekend (D) who writes diaries online Answer: who writes diaries online Context:Null Question:Frilled sharks and angler fish live far beneath the surface of the ocean, which is why they are | | | OBQA | known as (A) Deep sea animals (B) fish (C) Long Sea Fish (D) Far Sea Animals Deep sea animals Answer:Deep sea animals | | | Multiple-Choice | Context:It was Jessie Bear's birthday. She was having a party... Question:Who was having a birthday? | | | MCTest | (A) Jessie Bear (b) no one (C) Lion (D) Tiger Answer:Jessie Bear Context:Tracy didn't go home that evening and resisted Riley's attacks Question:What does Tracy need to do before this? | | | SIQA | (A) make a new plan (B) Go home and see Riley (C) Find somewhere to go Answer:Find somewhere to go Context:M: How long have you been teaching in this middle school? W: For ten years... Question:What's the woman probably going to do? | | | Dream | (A) To teach a different textbook. (B) To change her job. (C) To learn a different textbook. Answer:To change her job. | | | Table 15: Samples extracted from different QA tasks. Each task contains a context, a question and an answer. | | | | 3: | if M ̸= ∅ then | | | |------------------|------------------------------------------------------------------------------|------------------------------|------------------------------------------------------------------| | 4: | Calculate cluster centroids c1, · · · , cB of M | | | | 5: | end if | | | | 6: | for number of training epochs do | ni | | | 7: | for Each mini-batch I ∈ {(Cj , Qj , Aj )} j=1 ∪ M do | | | | 8: | Obtain ϵk by Eq. 
5 | | | | 9: | for (C, Q, A) ∈ I do | | | | 10: | Obtain format Fj of (C, Q, A) | | | | 11: | Sample ϵ, ζ from U(0, 1) | | | | 12: | if ζ < ω then | | | | 13: | Pt(C, Q) ← Pˆ t(Fj ) {Use task prompt Pˆ t(Fj ) for unseen tasks} | | | | 14: | else if ϵ < ϵk then | | | | 15: | Pt(C, Q) ← Pt(Ti) {Use the golden truth task identity to select task prompt} | | | | 16: | else | | | | 17: | Pt(C, Q) ← Pt( | argmin | (||q, kt(Tτ )||)) {Use the inferred task identity to select task | | Tτ ∈{T1,··· ,Ti} | | | | | prompt} | | | | | 18: | end if | | | | 19: | S(C, Q) ← indexes of M′ meta prompt keys that are closest to q | | | | 20: | Pm(C, Q) ← | {P j m} j∈S(C,Q) | | | 21: | P(C, Q)←[Pg; Pf (Fj ); Pt(C, Q); Pm(C, Q)] | | | | 22: | Calculate per sample loss LLM on gθ and P(C, Q) by Eq. 6 | | | | 23: | Obtain negative sample (Cn, Qn) from M by Eq. 2 | | | | 24: | Calculate per sample loss Lt on kt(Ti) by Eq. 1 | | | | 25: | Calculate per sample loss Lm on {k sj m}(sj ∈ S(C, Q)) by Eq. 3 | | | | 26: | if (C, Q, A) ∈ M then | sj m}(sj ∈ S(C, Q)) by Eq. 4 | | | 27: | Calculate per sample loss L ′m on {k | | | | 28: | end if | | | | 29: | end for | | | | 30: | Update gθ and prompts with accumulated LLM N | | | | 31: | Update task prompt keys {kt(Ti)} i=1 with accumulated Lt | | | | 32: | Update meta prompt keys {k i m}M i=1 with accumulated Lm and L ′m | | | | 33: | end for | | | | 34: | end for | ni | | | 35: | Update M with {(Cj , Qj , Aj )} j=1 according to details in Appendix B | | | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section Limitations ✓ A2. Did you discuss any potential risks of your work? Section Ethics Statement ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4 ✓ B1. Did you cite the creators of artifacts you used? Section 4, Appendix C B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix C ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. 
Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section4, Appendix A, Appendix G The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4, Appendix A ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4, Appendix A ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix A D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
qiao-etal-2023-improving
Improving Knowledge Graph Completion with Generative Hard Negative Mining
https://aclanthology.org/2023.findings-acl.362
Contrastive learning has recently shown great potential to improve text-based knowledge graph completion (KGC). In this paper, we propose to learn a more semantically structured entity representation space in text-based KGC via hard negatives mining. Specifically, we novelly leverage a sequence-to-sequence architecture to generate high-quality hard negatives. These negatives are sampled from the same decoding distributions as the anchor (or correct entity), inherently being semantically close to the anchor and thus enjoying good hardness. A self-information-enhanced contrasting strategy is further incorporated into the Seq2Seq generator to systematically diversify the produced negatives. Extensive experiments on three KGC benchmarks demonstrate the sound hardness and diversity of our generated negatives and the resulting performance superiority on KGC.
# Improving Knowledge Graph Completion With Generative Hard Negative Mining Zile Qiao1, Wei Ye2, Dingyao Yu2**, Tong Mo**1,† Weiping Li1**, Shikun Zhang**2 1 School of Software and Microelectronics, Peking University 2 National Engineering Research Center for Software Engineering, Peking University {zileq, wye, yudingyao, zhangsk}@pku.edu.cn {motong, wpli}@ss.pku.edu.cn ## Abstract Contrastive learning has recently shown great potential to improve text-based knowledge graph completion (KGC). In this paper, we propose to learn a more semantically structured entity representation space in text-based KGC via hard negatives mining. Specifically, we novelly leverage a sequence-to-sequence architecture to generate high-quality hard negatives. These negatives are sampled from the same decoding distributions as the anchor (or correct entity), inherently being semantically close to the anchor and thus enjoying good hardness. A self-information-enhanced contrasting strategy is further incorporated into the Seq2Seq generator to systematically diversify the produced negatives. Extensive experiments on three KGC benchmarks demonstrate the sound hardness and diversity of our generated negatives and the resulting performance superiority on KGC. ## 1 Introduction Knowledge Graph (KG) is an efficient method of representing global knowledge (Cui et al., 2019; Lv et al., 2022), playing a fundamental role in many Natural Language Processing (NLP) tasks like question answering (Sun et al., 2019a; Saxena et al., 2020), recommender systems (Huang et al., 2018), and web search (Ji et al., 2020), etc. The knowledge graph is composed of triples (*h, r, t*), where h, t, and r denote a head entity, a tail entity, and their relation, respectively. Modern public KGs such as Freebase, YAGO, and Wikidata, although covering massive knowledge with a large number of entities, are inevitably incomplete. Therefore, Knowledge graph completion (KGC) has become a popular area of research in recent years (Wang et al., 2017). Generally, two types of methods are applied for this task. **Embedding-based methods** assign each entity/relation with a dense vector trained †Corresponding authors. ![0_image_0.png](0_image_0.png) Figure 1: Conceptual illustration of our generative negative mining strategy. We feed the query (e.g., "<Chara, occupation, ?>") into a sequence-to-sequence model to generate multiple entities consisting of the correct answer and negatives. These entities are sampled from the same decoding distributions, making the negatives inherently semantically close to the target entity (as the yellow arrows show). A contrasting strategy regulated by token self-information is imposed on the Seq2Seq generator to make the negatives more diversified (or uniformly distributed, as the grey arrows depict). with structural information of graphs. Representative efforts include TransE (Bordes et al., 2013), TransH (Wang et al., 2014), Complex (Trouillon et al., 2016), and RotatE (Sun et al., 2019b), etc. Text-based methods (Wang et al., 2019; Yao et al., 2019; Lv et al., 2022; Saxena et al., 2022) exploit textual names or descriptions of entities and relations to facilitate representation learning. Among the two categories, the performance of text-based ones generally lags behind that of embeddingbased ones, due to inadequate entity representation optimization of the former (Wang et al., 2022). 
The recent robust text-based method, SimKGC (Wang et al., 2022), tackles the sub-optimal representation problem by contrasting carefully sampled in-batch, pre-batch and self negatives, inspiring us to leverage hard example mining (Kalantidis et al., 2020; Hu et al., 2020) to unlock the potential of contrastive learning for text-based KGC methods. Specifically, there are two essential properties for negative sample selection: - **Vicinity** (Robinson et al., 2021; Tabassum et al., 2022a) is a metric to measure the distance between the negatives and the anchor. Lower vicinity can push the model to learn better representation boundaries. - **Uniformity** (Wang and Isola, 2020; Tabassum et al., 2022a) is another one to measure the representativeness of the negatives. Higher uniformity means negative samples are uniformly distributed on the hypersphere and will make the produced representation more generalized. To improve vicinity, we pioneeringly introduce a generative method to produce negatives in KGC. Unlike previous efforts that use in-batch or prebatch entities, we apply a sequence-to-sequence model as a generator, which is trained to directly generate the tail entity t for a given *< h, r >* pair. With a sampling strategy, we can get multiple tail entities from the generator, and the incorrect ones among them constitute our negatives for efficient contrastive learning. As shown in Figure 1, since the generated negative samples are decoded from the same hidden state as the correct answer, they inherently have better anchor similarity in the semantic space. To improve uniformity, we further incorporate the generator itself with a novel self-informationenhanced contrastive learning strategy. In particular, we apply a prefix tree to obtain the conditional frequency of the tokens at each decoding step, which is converted to self-information (Shannon, 1948) and utilized as a token-wise weight for contrastive learning in the generator. Typically, the generator prefers to predict more frequent entity descriptions which consist of tokens of low selfinformation. Incorporating the weighting mechanism forces the generator to produce more distinctive representations for more informative tokens (e.g., in less frequent entity descriptions), yielding more diversified negatives for KGC. In summary, our generative negative mining strategy innovatively balances the vicinity and uniformity of negative samples. Meanwhile, the hard negatives produced by the generator are natural candidates for inference. Compared to other textbased methods that have to enumerate all entities for inference, we can pick a small number of generated entities as a high-quality candidate set, significantly accelerating inference speed for large-scale KGs. We evaluate our method on three popular KGC datasets (WN18RR (Dettmers et al., 2017), FB15K237 (Toutanova and Chen, 2015a), and Wikidata (Wang et al., 2019)), and the empirical results verify the superiority of our method. Our contributions are as follows: - We pioneeringly incorporate generative methods to produce hard negatives in text-based KGC for better contrasting effects. - We design a novel self-information-enhanced contrastive learning method for the negative generator, providing us with high-quality negative samples. - The negatives we generated are proven to systematically balance the hardness and diversity (or vicinity and uniformity, more formally), leading to competitive performances on three KGC benchmarks. 
## 2 Methodology

The link prediction task of KGC is to infer the missing triples given an incomplete knowledge graph G. In this section, we mathematically describe the proposed Generative Hard Negative Mining (GHN) method in detail. The core idea of GHN is to obtain better negatives in terms of vicinity and uniformity. We first introduce the overall process of GHN in Section 2.1, which includes how to generate these negative samples and how to use them to enhance KGC model training. Then, to further improve the vicinity and uniformity of the generated negatives, we propose a self-information-enhanced token-level contrastive learning method, which is discussed in Section 2.2. Finally, we introduce how to facilitate inference on KGC tasks in Section 2.3.

## 2.1 Model Architecture

Our GHN model mainly consists of two parts: a generator that aims to provide hard negatives to facilitate efficient contrastive learning, and a predictor that is supposed to predict the tail entity given a <head entity, relation> pair. We then present how to leverage the generated negatives for better training efficiency.

**Generator** Saxena et al. (2022) have shown that a simple encoder-decoder Transformer (Vaswani et al., 2017) can perform knowledge graph completion in the form of a sequence-to-sequence task. The performance of directly using a Transformer to predict the tail entity may not be appealing. However, using a sequence-to-sequence model to generate negatives has ideal properties, such as scalability (the inference speed is independent of the scale of the KG) and the better vicinity of the generated negatives, considering that the generation of the negatives and of the golden entity share the same hidden state. First, we construct a mapping between textual descriptions and entities/relations following Saxena et al. (2022); Wang et al. (2022). The textual mentions we use are provided by KG-BERT (Yao et al., 2019). We then formally describe how to convert link prediction queries to textual queries. Given a link prediction query (h, r, ?), where h and r represent the head entity and the relation, respectively, the textual query is the concatenation of the text mentions of h and r, separated by "|". For example, given the link prediction query (St. Louis, time zones location, ?), the converted text query t_hr is "St. Louis | time zones location". The text query is the input of our generator, which is supposed to output the correct answer "Central Time Zone". To obtain negatives that are diverse yet semantically similar to the golden entity, we use a sampling strategy to get multiple predictions for the same input query. Specifically, we get a probability distribution over tokens at each step of decoding, then sample a token from the distribution and decode autoregressively until the stop token. This procedure is repeated multiple times to produce a negative set N^q for each query. Finally, we drop the correct entity to keep only negatives.

**Predictor** Following SimKGC (Wang et al., 2022), our GHN adopts a bi-encoder architecture. The first encoder BERT_hr takes the text query t_hr as input and produces the relation-aware embedding e_hr, where t_hr is the same text query as the input of the generator. Similarly, the second encoder BERT_t takes the textual mention of the tail entity and produces its embedding e_t. We use mean pooling followed by L2 normalization to obtain these embeddings.
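As a concrete illustration of this embedding step, the sketch below mean-pools the final hidden states of a HuggingFace BERT encoder over non-padding tokens and L2-normalizes the result. It is a minimal sketch under our own naming; the paper uses two separate encoders (BERT_hr and BERT_t), while a single encoder is shown here for brevity.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Mean-pool BERT's last hidden states over non-padding tokens, then L2-normalize."""
    batch = tokenizer(texts, padding=True, truncation=True, max_length=50,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state              # (B, L, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()         # (B, L, 1)
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)        # masked mean pooling
    return F.normalize(pooled, p=2, dim=-1)                      # unit-length embeddings

# e_hr for the textual query "h | r" and e_t for a candidate tail-entity mention:
e_hr = embed(["St. Louis | time zones location"])
e_t = embed(["Central Time Zone"])
```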
Then we compute the cosine similarity between these embeddings and predict the entity with the largest score:

$$\phi(h,r,t_{i})=\cos\left(e_{hr},e_{t_{i}}\right),\tag{1}$$

$$\operatorname*{argmax}_{t_{i}}\ \phi(h,r,t_{i}),\ \ t_{i}\in\mathcal{E}.$$

**Training with Generated Negatives** We first briefly introduce the objective of the generator. The generator is optimized by minimizing a cross-entropy loss with the golden entity:

$$\mathcal{L}_{CE}=-\frac{1}{K}\sum_{k=1}^{K}\log p\left(y_{k}^{*}\mid\mathbf{y}_{<k},\mathbf{x}\right),\tag{2}$$

where K is the length of the target entity. The probability p(y*_k | y_{<k}, x) is calculated by multiplying the last-layer decoder hidden state of the generator s_k with the softmax embedding matrix W_s:

$$p\left(y_{k}\mid\mathbf{y}_{<k},\mathbf{x}\right)\propto\exp\left(\mathbf{W}_{s}\cdot s_{k}\right),\tag{3}$$

where x and y denote the input sequence and the output sequence, respectively. Unlike other negative mining methods (Xiong et al., 2020; Kalantidis et al., 2020), our method does not need to calculate weights for negatives. Instead, we directly mix the generated negatives and in-batch negatives to produce a negative set N = N_ib ∪ N_gen, where N_ib and N_gen denote the set of in-batch negatives and generated negatives, respectively. In training, the generated negative samples N^q_gen ⊆ N^q are sampled from the corresponding negative set N^q:

$$p(n\mid q)=\sum_{t=1}^{T}\log\left(p\left(n_{t}\mid\mathbf{y}_{<t},\mathbf{x}\right)\right),\ n\in\mathcal{N}^{q}.\tag{4}$$

After constructing the negative set, we use the InfoNCE (Chen et al., 2020) loss to train the predictor:

$$\mathcal{L}_{p}=-\log\frac{e^{\phi(h,r,t)/\tau}}{e^{\phi(h,r,t)/\tau}+\sum_{i=1}^{|\mathcal{N}|}e^{\phi\left(h,r,t_{i}^{\prime}\right)/\tau}},\tag{5}$$

where τ is a learnable parameter.

## 2.2 Self-Information Enhanced Training

We have used a sequence-to-sequence model to provide hard negatives for the predictor. Through further investigation, we noticed that the target of generative link prediction differs from traditional generative tasks such as machine translation. We call tasks like link prediction target-constrained generation tasks: one is able to obtain all available targets in link prediction, which is impossible for traditional generative tasks. Intuitively, more informative tokens are more critical than others. We hypothesize that informative tokens may impact the diversity of the generated results, since the generator typically prefers to predict more frequent entity descriptions, which consist of tokens of low self-information. Our investigation (see Section 4.3 for more details) verified this hypothesis. Therefore, we propose a self-information-enhanced contrastive learning method to further improve the uniformity of the generated negatives by producing more distinctive representations for more informative tokens. First, we introduce how to obtain the self-information (Shannon, 1948) of tokens. Then, we introduce how to facilitate the training of the GHN generator with self-information.

**Self-Information** is a measure of the information content associated with the outcome of a random variable (Shannon, 1948). Since all possible outputs are acquirable in KGC, we can easily calculate the self-information of all tokens. We first construct a prefix tree T whose nodes are annotated with tokens from the vocabulary of entity mentions.
The children of each node t ∈ T indicate all the available continuations of the prefix defined by traversing the prefix tree from the root to t. Then, given the previous sequence w_1, · · · , w_{n−1} generated by the sequence-to-sequence model and the current label token w_n, the self-information of the current label token I(w_n | w_{<n}) can be calculated as:

$$\mathrm{I}(w_{n}\mid\mathbf{w}_{<n})=-\log\left(\mathrm{P}\left(w_{n}\mid\mathbf{w}_{<n}\right)\right),\qquad\mathrm{P}\left(w_{n}\mid\mathbf{w}_{<n}\right):=\frac{\mathrm{Count}(w_{n}\mid\mathcal{A}_{w_{1},\cdots,w_{n-1}})}{\left|\mathcal{A}_{w_{1},\cdots,w_{n-1}}\right|},\tag{6}$$

where A_{w_1,···,w_{n−1}} is the set of all children nodes determined by the prefix w_1, · · · , w_{n−1} on the prefix tree. Then we average the self-information of a token over its occurrences at different positions to produce the self-information of each vocabulary token:

$$\mathrm{I}(w_{n})=\frac{1}{K}\sum_{k=1}^{K}\mathrm{I}(w_{n}^{k}),\tag{7}$$

where w_n^k denotes the k-th occurrence of token w_n and K is the total number of occurrences of w_n.

**Self-Information-aware Contrastive Learning** Token-level contrastive learning methods widen the representations of tokens that have different labels. Since we want the representations of informative tokens to be more distinctive, we assign a weight to each contrasting pair (a higher weight for pairs containing tokens of higher self-information) according to their self-information. As a result, the model can assign more distinctive representations to tokens with more self-information and facilitate the generation of these tokens. Formally, given a target token t_i and a negative sample t_j, we assign a soft weight w(i, j) to them. The weight is determined by the self-information of both t_i and t_j. We use the same strategy as token-level contrastive learning (Zhang et al., 2021) to build up the negative and positive samples. Given the sequence of former target tokens and the current target label t_i, we formulate the information-aware soft weight w(i, j) for t_i and the corresponding negative sample t_j as:

$$w(i,j)=\lambda\,\mathrm{I}(t_{i})\cdot\mathrm{I}(t_{j}),\tag{8}$$

where λ is a hyperparameter. In our implementation, the mean value of the information-aware weights over all negatives of each anchor is normalized to be 1. The contrastive learning objective is:

$$\mathcal{L}_{\mathrm{CL}}=-\log\frac{e^{\cos(s_{a},s_{p})}}{\sum_{i=1}^{N}w(a,i)\,e^{\cos(s_{a},s_{i})}},\tag{9}$$

where s_a, s_p, and s_i denote the representations of the anchor, the positive token, and the negative tokens, respectively. Finally, we weight the traditional generative objective L_CE and the self-information-aware contrastive objective L_CL by a hyperparameter γ to compute the final loss L_GEN for the generator:

$$\mathcal{L}_{\mathrm{GEN}}=(1-\gamma)\mathcal{L}_{\mathrm{CE}}+\gamma\mathcal{L}_{\mathrm{CL}}.\tag{10}$$

## 2.3 Inference

Most text-based methods have to produce a representation for each entity (Wang et al., 2022; Yao et al., 2019). Considering the large scale of modern KGs, the efficiency of link prediction is of concern. Therefore, we propose a simple generation-classification two-stage method to perform high-efficiency inference. Specifically, the generator we used to produce hard negatives is also able to provide candidates for inference. In the first stage, the generator produces a set of candidates N_c by the same sampling strategy as discussed in Section 2.1.
In the second stage, the predictor computes scores for the candidates and then predicts the one with the largest score:

$$\operatorname*{argmax}_{t_{i}}\phi(h,r,t_{i}),\;\;t_{i}\in\mathcal{N}_{c}.\tag{11}$$

The number of candidates |N_c| is empirically set to 50. Be aware that the two-stage inference is not enforced for our method; it speeds up inference only when the scale of the KG is relatively large.

## 3 Experiments Setup

## 3.1 Datasets

We evaluate our method on three widely-used link prediction datasets, **WN18RR** (Dettmers et al., 2017), **FB15k-237** (Toutanova and Chen, 2015b), and **Wikidata5M** (Wang et al., 2019). The statistics are shown in Table 1. **WN18RR** consists of about 41k synsets and 11 relations from WordNet (Miller, 1995). This dataset is constructed by removing the inverse relations from WN18 (Bordes et al., 2013), which suffers from test set leakage. **FB15k-237** is a subset of Freebase (Bollacker et al., 2008); it consists of about 15k entities and 237 relations. Similar to the WN18RR dataset, Dettmers et al. (2017) removed the inverse relations to address the test set leakage problem. **Wikidata5M** has a much larger scale, consisting of ∼4.6M entities, 822 relations, and ∼20 million triples. Following most KGC methods, we use the transductive version of Wikidata5M. All the textual descriptions for WN18RR and FB15k-237 are provided by Yao et al. (2019). The Wikidata5M dataset already provides descriptions for all entities and relations.

| dataset | #entity | #relation | #train | #valid | #test |
|------------|-----------|-------------|------------|----------|---------|
| WN18RR | 40,943 | 11 | 86,835 | 3,034 | 3,134 |
| FB15k-237 | 14,541 | 237 | 272,115 | 17,535 | 20,466 |
| Wikidata5M | 4,594,485 | 822 | 20,614,279 | 5,163 | 5,163 |

Table 1: Statistics of the datasets used in our experiments.

## 3.2 Baselines

TransE (Bordes et al., 2013) constructs a relation-specific translation from the head entity to the tail entity. Complex (Trouillon et al., 2016) introduces complex number embeddings. TuckER (Balažević et al., 2019) facilitates KGC based on the Tucker decomposition of the binary representation of triples. DistMult (Yang et al., 2014) models the three-way interactions in triples. RotatE (Sun et al., 2019b) models a relation as a rotation in complex space. DKRL (Xie et al., 2016) leverages a CNN network to obtain text representations. KEPLER (Wang et al., 2019) uses a Transformer-based encoder trained with the typical KGE objective and the masked language modeling objective. MTL-KGC (Kim et al., 2020) proposes a multi-task learning method that can learn more relational properties. KG-BERT (Yao et al., 2019) and StAR (Wang et al., 2021) both leverage pre-trained language models to produce the representations of entities. SimKGC (Wang et al., 2022) trains the model with many more negatives by incorporating three types of negatives.

## 3.3 Implementation Details

The generator is implemented using the HuggingFace library (Wolf et al., 2019) with the pre-trained weights of BART-base (Lewis et al., 2019). The predictor is initialized with BERT (Devlin et al., 2018). We first train the generator with L_GEN as the objective until the validation accuracy has not significantly increased for 5k steps. Most hyperparameters except the number of training epochs are shared across all datasets to avoid dataset-specific tuning. Following SimKGC (Wang et al., 2022), the entity descriptions are truncated to a maximum of 50 tokens for a fair comparison. The size of N^q_gen is set to 10.
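To make concrete how the generated negatives enter the predictor's InfoNCE loss (Eq. 5) alongside the in-batch negatives, here is a minimal sketch; the tensor names are ours and the real implementation may differ (e.g., τ is learnable in the paper but fixed here).

```python
import torch
import torch.nn.functional as F

def infonce_with_generated_negatives(e_hr, e_tail, e_gen, tau=0.05):
    """InfoNCE loss over the golden tail, in-batch negatives, and generated negatives.

    e_hr:   (B, H) L2-normalized query embeddings for (h, r).
    e_tail: (B, H) L2-normalized embeddings of the golden tail entities;
            for query i, rows j != i act as the in-batch negatives N_ib.
    e_gen:  (B, K, H) embeddings of the K generated negatives per query (N_gen).
    """
    B = e_hr.size(0)
    in_batch = e_hr @ e_tail.t() / tau                          # (B, B): diagonal = positives
    generated = torch.einsum("bh,bkh->bk", e_hr, e_gen) / tau   # (B, K)
    logits = torch.cat([in_batch, generated], dim=1)            # (B, B + K)
    labels = torch.arange(B, device=e_hr.device)                # positive sits at column i
    return F.cross_entropy(logits, labels)
```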
We use the AdamW optimizer with linear learning rate decay; the learning rate is initialized to 3 × 10−5. Models are trained with a batch size of 1024. All experiments were performed using 4 NVIDIA A100 GPUs. For the WN18RR, FB15k-237, and Wikidata5M datasets, we train for 50, 10, and 1 epochs, respectively.

## 4 Experiment Results

## 4.1 Main Results

Table 2 shows the performance of the baselines and our method variants on the WN18RR and FB15k-237 tasks (statistically significant with p < 0.05). Table 3 further demonstrates the performance and inference speed of the baselines and our methods on the Wikidata5M dataset, which has a much larger scale. We have the following observations.

| Method | WN18RR MRR | WN18RR Hits@1 | WN18RR Hits@3 | WN18RR Hits@10 | FB15k-237 MRR | FB15k-237 Hits@1 | FB15k-237 Hits@3 | FB15k-237 Hits@10 |
|---|---|---|---|---|---|---|---|---|
| *embedding-based methods* | | | | | | | | |
| TransE | 24.3 | 4.3 | 44.1 | 53.2 | 27.9 | 19.8 | 37.6 | 44.1 |
| DistMult | 44.4 | 41.2 | 47.0 | 50.4 | 28.1 | 19.9 | 30.1 | 44.6 |
| RotatE | 47.6 | 42.8 | 49.2 | 57.1 | 33.8 | 24.1 | 37.5 | 53.3 |
| TuckER | 47.0 | 44.3 | 48.2 | 52.6 | 35.8 | 26.6 | 39.4 | 54.4 |
| *text-based methods* | | | | | | | | |
| KG-BERT | 21.6 | 4.1 | 30.2 | 52.4 | - | - | - | 42.0 |
| MTL-KGC | 33.1 | 20.3 | 38.3 | 59.7 | 26.7 | 17.2 | 29.8 | 45.8 |
| StAR | 40.1 | 24.3 | 49.1 | 70.9 | 29.6 | 20.5 | 32.2 | 48.2 |
| SimKGC | 66.6 | 58.7 | 71.7 | 80.0 | 33.6 | 24.9 | 36.2 | 51.1 |
| GHN-SL | 67.3 | 59.3 | 71.5 | 80.9 | 33.8 | 25.1 | 36.3 | 51.5 |
| GHN | 67.8 | 59.6 | 71.9 | 82.1 | 33.9 | 25.1 | 36.4 | 51.8 |

Table 2: Main results on the WN18RR and FB15k-237 datasets.

First, the superiority of GHN is significant in terms of performance. Compared to previous text-based methods, GHN shows consistent performance improvements on all three tasks. Compared to embedding-based methods, GHN has a significant performance advantage on WN18RR and Wikidata5M, while marginally trailing on FB15k-237. All of the text-based methods lag behind embedding-based methods on the FB15k-237 dataset for now. A possible reason is that many links in the FB15k-237 dataset are not predictable from the available information (Cao et al., 2021), which may harm the training of text-based models.

Second, though GHN already surpasses most baselines without the self-information-enhanced training method, this training method further improves the performance on link prediction tasks, especially in the Hits@10 metric. This improvement suggests that the self-information-enhanced training method may lead to a more generalized KGC model by introducing more diverse negatives. Please refer to Section 4.3 for more details.

Third, GHN also significantly accelerates the inference speed on the Wikidata5M dataset. To perform link prediction on a test set, SimKGC needs 2 × |T| + |E| BERT forward passes, where |T| and |E| denote the size of the test set and the number of entities in the corresponding KG, respectively. In contrast, GHN needs at most 2 × |T| × (|N_c| + 1) BERT forward passes and ∼2 × |N_c| × |T| BART generation passes. Since in large-scale datasets such as Wikidata5M we have |T| ≪ |E| (5,163 test samples and ∼4.6M entities for Wikidata5M), the significant acceleration is as expected. Note that the two-stage inference is not enforced: when there are too many queries or the KG's scale is relatively small, GHN has the same inference speed as SimKGC.
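The two-stage inference discussed above can be sketched as follows. This is a minimal illustration assuming the HuggingFace generate API for sampling; the helper embed stands for any bi-encoder text embedder such as the one sketched in Section 2.1, and all names are ours rather than the authors' code.

```python
import torch

def two_stage_predict(query_text, generator, gen_tokenizer, embed, num_candidates=50):
    """Stage 1: sample candidate tail-entity strings from the seq2seq generator.
    Stage 2: re-rank the candidates with the bi-encoder and return the best one.

    `embed` is any callable mapping a list of strings to L2-normalized embeddings.
    """
    inputs = gen_tokenizer(query_text, return_tensors="pt")
    outputs = generator.generate(
        **inputs,
        do_sample=True,                    # sampling, not beam search, to diversify candidates
        num_return_sequences=num_candidates,
        max_new_tokens=32,                 # illustrative length limit (assumption)
    )
    candidates = list(dict.fromkeys(       # deduplicate while keeping order
        gen_tokenizer.batch_decode(outputs, skip_special_tokens=True)))

    e_hr = embed([query_text])                   # (1, H)
    e_cand = embed(candidates)                   # (C, H)
    scores = (e_cand @ e_hr.t()).squeeze(-1)     # cosine similarities (unit-length inputs)
    return candidates[int(torch.argmax(scores))]
```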
## 4.2 Overall Impacts Of Generative Hard Negative Mining

One of the key designs in GHN is leveraging a sequence-to-sequence model to generate hard negatives. To further investigate the impact of the generated negatives, we construct an approximate nearest neighbor negative sampling strategy (ANN) inspired by Xiong et al. (2020); Kalantidis et al. (2020) and replace the generative negative sampling strategy in GHN with it. Specifically, we use the current predictor to find entities that are close to the golden tail entity among all entities and use them as hard negatives. Since the computational cost of producing representations for all entities is unaffordable, we randomly sample 500 entities for each query and pick the top 10 entities by the cosine similarity of their representations with the representation of the golden tail entity.

Table 4 shows the performance comparison between GHN, ANN, and four variants of SimKGC with different negative sampling strategies. Intuitively, since ANN picks the nearest entities to the anchor as negatives, which are "harder" than the generated ones, it should further improve link prediction performance. However, it does not show the expected performance improvement. The possible reason is that the balance between vicinity and uniformity is critical, since these picked negatives apparently have better vicinity than the generated negatives but lead to worse performance (more details in Section 4.3). Besides, GHN also shows performance superiority over SimKGC with four different negative sampling strategies, which further demonstrates the effectiveness of our generative negative sampling strategy.

| Method | MRR | Hits@1 | Hits@3 | Hits@10 | Inference Speed |
|---|---|---|---|---|---|
| *embedding-based methods* | | | | | |
| TransE | 25.3 | 17.0 | 31.1 | 39.2 | - |
| RotatE | 29.0 | 23.4 | 32.2 | 39.0 | - |
| *text-based methods* | | | | | |
| DKRL | 16.0 | 12.0 | 18.1 | 22.9 | ∼ 3× |
| KEPLER | 21.0 | 17.3 | 22.4 | 27.7 | ∼ 5× |
| SimKGC | 35.8 | 31.3 | 37.6 | 44.1 | ∼ 4× |
| GHN-SL | 36.2 | 31.5 | 37.8 | 44.8 | ∼ 1× |
| GHN | 36.4 | 31.7 | 38.0 | 45.3 | ∼ 1× |

Table 3: Main results with inference speed comparison for **Wikidata5M** datasets.

| Method | MRR | H@1 | H@3 | H@10 |
|---|---|---|---|---|
| IB | 67.1 | 58.5 | **73.1** | 81.7 |
| IB+PB | 66.6 | 57.8 | 72.3 | 81.7 |
| IB+SN | 66.7 | 58.8 | 72.1 | 80.5 |
| IB+SN+PB | 66.6 | 58.7 | 71.7 | 80.0 |
| ANN | 66.7 | 58.9 | 71.4 | 79.2 |
| GHN | **67.8** | **59.6** | 71.9 | **82.1** |

Table 4: Comparison of different negative sampling strategies on WN18RR.

## 4.3 Effects On Representation Learning

To further examine what makes GHN excel and how the self-information-enhanced training method impacts the generated negatives, we conduct a further investigation from the perspective of representation learning by computing two metrics, Mv and Muni, to measure the vicinity and uniformity of the negatives produced by different sampling strategies, inspired by Wang and Isola (2020) and Tabassum et al. (2022b). Specifically, we use the average cosine distance between the negatives and the anchor to measure the vicinity Mv for each negative sampling strategy. To measure the uniformity Muni of the negatives' distribution on the hypersphere, we compute the logarithm of the average pairwise Gaussian potential between all negatives' embeddings. Here we choose two variants of GHN, ANN, and the IB negative sampling strategy to conduct the comparison.
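The two metrics can be computed as in the sketch below; the Gaussian-potential temperature t = 2 follows the common choice of Wang and Isola (2020) and is our assumption, since the exact value is not stated here.

```python
import torch
import torch.nn.functional as F

def vicinity(anchor, negatives):
    """M_v: average cosine distance between the negatives and the anchor."""
    cos = F.cosine_similarity(negatives, anchor.unsqueeze(0), dim=-1)  # (N,)
    return (1.0 - cos).mean()

def uniformity(negatives, t=2.0):
    """M_uni: log of the average pairwise Gaussian potential of the negatives.

    Lower values indicate a more uniform distribution on the hypersphere
    (Wang and Isola, 2020).
    """
    x = F.normalize(negatives, p=2, dim=-1)
    sq_dists = torch.pdist(x, p=2).pow(2)     # pairwise squared Euclidean distances
    return torch.log(torch.exp(-t * sq_dists).mean())
```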
The reason we did not add the PB and SN negative sampling strategies to the comparison is as follows: the PB sampling strategy shares almost the same Mv and Muni with IB, and the SN sampling strategy only picks the head entity of the query itself as the sole negative sample, so it is impossible to calculate Muni for this strategy. Figure 2 shows that our GHN is able to produce negatives with better uniformity but worse vicinity compared to the ANN sampling strategy, while the opposite is true when compared to the IB sampling strategy. Meanwhile, our generative sampling strategy has a better MRR score than both the ANN and IB strategies (67.8 vs. 66.7 and 67.1 MRR). Leveraging self-information-aware contrastive learning (SL) to enhance the training of the generator is another key design in GHN. Figure 2 also demonstrates that the SL training method significantly improves the uniformity of the generated negatives while the improvement in vicinity is also visible, which empirically verifies that the SL method can lead to more diverse generation.

## 4.4 Effects On Training Efficiency

An abundance of negative samples is critical for contrastive learning (Chen et al., 2020; Wang et al., 2022), while it places more demands on GPU memory. Figure 3 illustrates how the MRR on the WN18RR dataset varies for GHN and SimKGC under different numbers of negatives for each query. By introducing generated negatives, our GHN leads to much more efficient training: GHN only needs 511 negatives to reach a performance similar to what SimKGC achieves with 1023 negatives. Note that we do not count the pre-batch negatives used in SimKGC, for a fair comparison. This observation re-emphasizes the effectiveness of the proposed generative negative mining strategy.

## 5 Related Work

**Text-based KGC methods** aim to leverage text information in KGs to assist KGC. DKRL (Xie et al., 2016) obtained text representations with a CNN. KEPLER (Wang et al., 2019) combined the typical KGE training objective with masked language model training. MTL-KGC (Kim et al., 2020) proposed to learn more relational properties in KGs with a multi-task learning method. KG-BERT (Yao et al., 2019) and StAR (Wang et al., 2021) leveraged pre-trained language models to produce entity embeddings in the cross-encoder style and bi-encoder style, respectively. SimKGC (Wang et al., 2022) increases the number of negatives by incorporating three types of negatives and achieved notable performance improvements.

**Negative Mining** aims to find proper negatives to assist contrastive representation learning (Mao et al., 2021). The most effective way is to use the samples within the same mini-batch as negative candidates (Wang et al., 2022; Chen et al., 2020). Another effective and widely used method is to store negative samples with an asynchronous update mechanism (Zhang et al., 2018; He et al., 2019), which allows more negative candidates to be involved during training. To make contrastive pairs difficult to discriminate, Zhang et al. (2013); Chen et al. (2017); Xiong et al. (2020) compute a propensity score for each <query, sample candidate> pair. Ying et al. (2018) further utilizes the PageRank score to calculate weights for negative candidates. Motivated by generative adversarial networks (Goodfellow et al., 2014), Wang et al. (2020) proposed a sampling strategy that adaptively receives knowledge-aware rewards, and Hu et al. (2020) proposed to adversarially generate hard negative samples together with the representation network.
## 6 Conclusion We have presented our method for KGC tasks, which incorporate generative methods with a novel self-information-enhanced training strategy to produce high-quality negatives. And we further reveal that the proposed method systematically balances uniformity and vicinity, two essential properties for negative sample selection. Empirical results on three widely-used datasets (WN18RR, FB15k-237, Wikidata5M) have verified the superiority of our method. ## 7 Limitations For now, the superiority of the proposed two-stage inference speed-up method cannot adapt to inductive datasets since the generated sequences are difficult to map to unseen entities. Therefore, we will explore how to efficiently perform KGC under the inductive setting in the future. Like the other text-based KGC methods, our GHN lag behind embedding-based methods on FB15k-237 dataset. Cao et al. (2021) claims that many links in the FB15k-237 dataset are not predictable based on the information in the KG and we hypothesize this may harm the training of textbased models. In the future, we intend to examine this more thoroughly. ## Acknowledgements We thank anonymous reviewers for their valuable comments. This work was supported in part by the National Key R&D Program of China under Grants No.2022YFF0902703. ## References Ivana Balaževic, Carl Allen, and Timothy M ´ Hospedales. 2019. Tucker: Tensor factorization for knowledge graph completion. *arXiv preprint* arXiv:1901.09590. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In *Proceedings of the 2008 ACM SIGMOD international conference on Management of* data, pages 1247–1250. Antoine Bordes, Nicolas Usunier, Alberto GarcíaDurán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In *NIPS*. Yixin Cao, Xiang Ji, Xin Lv, Juanzi Li, Yonggang Wen, and Hanwang Zhang. 2021. Are missing links predictable? an inferential benchmark for knowledge graph completion. *arXiv preprint arXiv:2108.01387*. Long Chen, Fajie Yuan, Joemon M. Jose, and Weinan Zhang. 2017. Improving negative sampling for word representation using self-embedded features. *Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining*. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pages 1597–1607. PMLR. Wanyun Cui, Yanghua Xiao, Haixun Wang, Yangqiu Song, Seung won Hwang, and Wei Wang. 2019. Kbqa: Learning question answering over qa corpora and knowledge bases. *ArXiv*, abs/1903.02419. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2017. Convolutional 2d knowledge graph embeddings. In *AAAI Conference on* Artificial Intelligence. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In *NIPS*. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. 2019. Momentum contrast for unsupervised visual representation learning. *2020* IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9726–9735. 
Qianjiang Hu, Xiao Wang, Wei Hu, and Guo-Jun Qi. 2020. Adco: Adversarial contrast for efficient learning of unsupervised representations from self-trained negative adversaries. *2021 IEEE/CVF Conference on* Computer Vision and Pattern Recognition (CVPR), pages 1074–1083. Jin Huang, Wayne Xin Zhao, Hongjian Dou, Ji rong Wen, and Edward Y. Chang. 2018. Improving sequential recommendation with knowledge-enhanced memory networks. *The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval*. Shaoxiong Ji, Shirui Pan, E. Cambria, Pekka Marttinen, and Philip S. Yu. 2020. A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Transactions on Neural Networks and Learning Systems, 33:494–514. Yannis Kalantidis, Mert Bulent Sariyildiz, No'e Pion, Philippe Weinzaepfel, and Diane Larlus. 2020. Hard negative mixing for contrastive learning. *ArXiv*, abs/2010.01028. Bosung Kim, Taesuk Hong, Youngjoong Ko, and Jungyun Seo. 2020. Multi-task learning for knowledge graph completion with pre-trained language models. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1737–1743, Barcelona, Spain (Online). International Committee on Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Xin Lv, Yankai Lin, Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, and Jie Zhou. 2022. Do pretrained models benefit knowledge graph completion? a reliable evaluation and a reasonable approach. In Findings. Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In *International conference on machine learning*, pages 2071– 2080. PMLR. Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, and Xiuqiang He. 2021. Simplex: A simple and strong baseline for collaborative filtering. *Proceedings of the 30th ACM International Conference on Information & Knowledge* Management. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30. Joshua David Robinson, Ching-Yao Chuang, Suvrit Sra, and Stefanie Jegelka. 2021. Contrastive learning with hard negative samples. In International Conference on Learning Representations. Liang Wang, Wei Zhao, Zhuoyu Wei, and Jingming Liu. 2022. Simkgc: Simple contrastive knowledge graph completion with pre-trained language models. *ArXiv*, abs/2203.02167. Apoorv Saxena, Adrian Kochsiek, and Rainer Gemulla. 2022. Sequence-to-sequence knowledge graph completion and question answering. In Annual Meeting of the Association for Computational Linguistics. Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29:2724–2743. Apoorv Saxena, Aditay Tripathi, and Partha Pratim Talukdar. 2020. Improving multi-hop question answering over knowledge graphs using knowledge base embeddings. In *Annual Meeting of the Association for Computational Linguistics*. Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. 
In International Conference on Machine Learning, pages 9929–9939. PMLR. Haitian Sun, Tania Bedrax-Weiss, and William W. Cohen. 2019a. Pullnet: Open domain question answering with iterative retrieval on knowledge bases and text. *ArXiv*, abs/1904.09537. Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019b. Rotate: Knowledge graph embedding by relational rotation in complex space. arXiv preprint arXiv:1902.10197. Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhiyuan Liu, Juan-Zi Li, and Jian Tang. 2019. Kepler: A unified model for knowledge embedding and pretrained language representation. *Transactions of the* Association for Computational Linguistics, 9:176– 194. Afrina Tabassum, Muntasir Wahed, Hoda Eldardiry, and Ismini Lourentzou. 2022a. Hard negative sampling strategies for contrastive representation learning. *ArXiv*, abs/2206.01197. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In *AAAI Conference on Artificial Intelligence*. Afrina Tabassum, Muntasir Wahed, Hoda Eldardiry, and Ismini Lourentzou. 2022b. Hard negative sampling strategies for contrastive representation learning. *arXiv preprint arXiv:2206.01197*. Kristina Toutanova and Danqi Chen. 2015a. Observed versus latent features for knowledge base and text inference. In *Workshop on Continuous Vector Space* Models and their Compositionality. Kristina Toutanova and Danqi Chen. 2015b. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57–66, Beijing, China. Association for Computational Linguistics. Ruobing Xie, Zhiyuan Liu, Jia Jia, Huanbo Luan, and Maosong Sun. 2016. Representation learning of knowledge graphs with entity descriptions. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 30. Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, Ying Wang, and Yi Chang. 2021. Structure-augmented text representation learning for efficient knowledge graph completion. In *Proceedings of the Web Conference 2021*, pages 1737–1748. George A Miller. 1995. Wordnet: a lexical database for english. *Communications of the ACM*, 38(11):39–41. Claude E. Shannon. 1948. A mathematical theory of communication. *Bell Syst. Tech. J.*, 27:623–656. Xiang Wang, Yaokun Xu, Xiangnan He, Yixin Cao, Meng Wang, and Tat-Seng Chua. 2020. Reinforced negative sampling over knowledge graph for recommendation. Proceedings of The Web Conference 2020. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-ofthe-art natural language processing. *arXiv preprint* arXiv:1910.03771. Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. ArXiv, abs/2007.00808. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. *arXiv* preprint arXiv:1412.6575. Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Kgbert: Bert for knowledge graph completion. *ArXiv*, abs/1909.03193. Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L. Hamilton, and Jure Leskovec. 2018. Graph convolutional neural networks for web-scale recommender systems. 
Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. Tong Zhang, Wei Ye, Baosong Yang, Long Zhang, Xingzhang Ren, Dayiheng Liu, Jinan Sun, Shikun Zhang, Haibo Zhang, and Wen Zhao. 2021. Frequency-aware contrastive learning for neural machine translation. *ArXiv*, abs/2112.14484. Weinan Zhang, Tianqi Chen, Jun Wang, and Yong Yu. 2013. Optimizing top-n collaborative filtering via dynamic negative item sampling. Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval. Yongqi Zhang, Quanming Yao, Yingxia Shao, and Lei Chen. 2018. Nscaching: Simple and efficient negative sampling for knowledge graph embedding. 2019 IEEE 35th International Conference on Data Engineering (ICDE), pages 614–625. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 7 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 1 3 4 6 ✓ B1. Did you cite the creators of artifacts you used? 1 3 4 6 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 3 4 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 3 ## C ✓ **Did You Run Computational Experiments?** 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 3 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 ✓ C4. 
If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 3 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
hsu-etal-2023-visually
Visually-Enhanced Phrase Understanding
https://aclanthology.org/2023.findings-acl.363
Large-scale vision-language pre-training has exhibited strong performance in various visual and textual understanding tasks. Recently, the textual encoders of multi-modal pre-trained models have been shown to generate high-quality textual representations, which often outperform models that are purely text-based, such as BERT. In this study, our objective is to utilize both textual and visual encoders of multi-modal pre-trained models to enhance language understanding tasks. We achieve this by generating an image associated with a textual prompt, thus enriching the representation of a phrase for downstream tasks. Results from experiments conducted on four benchmark datasets demonstrate that our proposed method, which leverages visually-enhanced text representations, significantly improves performance in the entity clustering task.
# Visually-Enhanced Phrase Understanding Tsu-Yuan Hsu∗ Chen-An Li∗ **Chao-Wei Huang Yun-Nung Chen** National Taiwan University, Taipei, Taiwan {b08201047,b08902123,f07922069}@csie.ntu.edu.tw [email protected] ## Abstract Large-scale vision-language pre-training has exhibited strong performance in various visual and textual understanding tasks. Recently, the textual encoders of multi-modal pre-trained models have been shown to generate highquality textual representations, which often outperform models that are purely text-based, such as BERT. In this study, our objective is to utilize both textual and visual encoders of multimodal pre-trained models to enhance language understanding tasks. We achieve this by generating an image associated with a textual prompt, thus enriching the representation of a phrase for downstream tasks. Results from experiments conducted on four benchmark datasets demonstrate that our proposed method, which leverages visually-enhanced text representations, significantly improves performance in the entity clustering task.1 ## 1 Introduction Recent advances in vision-language pre-training have seen the successful alignment of visual and linguistic inputs through the implementation of cross-modal pre-training objectives, such as language modeling and contrastive learning (Lu et al., 2019; Radford et al., 2021). These pre-trained models have shown impressive performance on downstream vision-language tasks, validating their crossmodal capabilities (Su et al., 2019). While most previous studies focused on multimodal tasks, researchers have shown that pretrained cross-modal encoders are equally proficient at uni-modal language understanding, matching the performance of pre-trained text encoders. Lu et al. (2022) were the pioneers in utilizing machine abstract imagination from pre-trained crossmodal encoders, demonstrating improvement on general NLU tasks. Yan et al. (2022) established that the text encoder of CLIP (Radford et al., 2021) surpasses models designed for producing phrase representations, including Phrase-BERT (Wang et al., 2021) and UCTopic (Li et al., 2022a). They hypothesized that the visual supervision during pre-training empowers CLIP to produce visuallygrounded phrase representations, beneficial for language-only tasks. Such a phenomenon aligns with neuroscience studies, demonstrating that visual and linguistic semantic representations are coordinated in the human brain (Popham et al., 2021). Despite the strong performance of the previous method, it only utilized the text encoder of a crossmodal pre-trained model. In contrast, our study aims to exploit its multi-modal representation capacity, incorporating both text and image encoders. We introduce a **visually-enhanced phrase understanding** framework to exploit multiple modalities for uni-modal tasks. Our framework comprises a text-to-image generator and a text-image crossmodal encoder. We employ a text-to-image generator to produce visual cues for a textual candidate. Subsequently, the generated image and the textual prompt are processed by the cross-modal encoder to create visually-enhanced phrase embeddings. Unlike Lu et al. (2022), our method does not require supervised data for downstream tasks, making it more scalable. Our approach also differs from VOKEN (Tan and Bansal, 2020), as they generated visual cues in tokens and processed the signal solely on the language side, whereas we employ representations directly from different modalities. 
Therefore, our model can capture more abstract concepts from images, enhancing generalizability. We evaluate our approach on four benchmark phrase understanding datasets. The experiments demonstrate that our proposed visual enhancement significantly outperforms all text-only baselines, demonstrating that abstract visual concepts can provide complementary cues for text understanding. ![1_image_0.png](1_image_0.png) ## 2 Method Our proposed method is illustrated in Figure 1, where we first generate images associated with phrases using a text-to-image diffusion model. Following this, we utilize pre-trained text and image encoders to construct visually-enhanced phrase embeddings for downstream understanding tasks. ## 2.1 Text-To-Image Model Recently, text-to-image models have attracted significant interest. Among these, diffusion models have played an important role in text-to-image generation, showing impressive performance. To more effectively generate visual cues associated with texts, this study adopts stable diffusion (Rombach et al., 2022) as our image generation model. During the training phase, an image autoencoder is trained using an extensive image database. A time-conditional U-Net (Long et al., 2015) forms the core of the diffusion model, learning to denoise image latent representations incrementally. In the sampling procedure, we first obtain a text prompt and derive a text embedding from the text encoder. Subsequently, we use Gaussian noise as the latent representation, and progressively denoise the latent representation via the diffusion model and a scheduler algorithm. Ultimately, an image is generated by reconstructing the latent representation through the image decoder. ## 2.2 Clip (Contrastive Language-Image Pretraining) CLIP (Radford et al., 2021) is a large-scale visionlanguage pre-training model using contrastive learning, which achieves remarkable performance in zero-shot image classification tasks. Given a batch of data D, CLIP jointly trains an image encoder and a text encoder to maximize the similarities of |D| paired text-image representations while minimizing the similarities of other(|D| 2−|D|) unpaired text-image representations. Given the weak alignment between texts and images, this study employs the pre-trained CLIP text encoder E*text* and image encoder E*image* to extract meaningful cues from different modalities. Our experiments focus on showing that the pre-trained CLIP encoders provide superior visual enhancement for texts, compared to separately pre-trained text and image encoders. ## 2.3 Visually-Enhanced Multimodal Representation Given a text sequence with an entity candidate phrase p, we design our text prompt as "A photo of <p>", a proven effective default template that delivers robust zero-shot classification performance (Radford et al., 2021). As depicted in Figure 1, we initially use the text prompt to generate a text-associated image with the text-to-image model G. Following this, we employ the pre-trained text and image encoders of CLIP to extract corresponding representations ri(p) and rt(p) as follows. $$\begin{array}{l l l}{r_{t}(p)}&{=}&{E_{t e x t}(\text{``A photo of}p\text{''})}\\ {r_{i}(p)}&{=}&{E_{i m a g e}(G(\text{``A photo of}p\text{''}))}\end{array}$$ Lastly, we concatenate the two embeddings originating from different modalities to create visuallyenhanced phrase embeddings, which potentially capture richer and more comprehensive information and thus benefit downstream tasks. 
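A minimal sketch of this pipeline with the Hugging Face diffusers and transformers libraries is shown below. The specific checkpoints ("stabilityai/stable-diffusion-2-base", "openai/clip-vit-base-patch32"), the use of pooler_output, and the example phrase are illustrative assumptions drawn from the appendix details rather than the exact experimental configuration.

```python
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPProcessor, CLIPTextModel, CLIPVisionModel

device = "cuda" if torch.cuda.is_available() else "cpu"

# Text-to-image generator G and the two CLIP (ViT-B/32) encoders.
generator = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base").to(device)
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
image_encoder = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32").to(device)

@torch.no_grad()
def visually_enhanced_embedding(phrase: str) -> torch.Tensor:
    prompt = f"A photo of {phrase}"
    # r_t(p): pooled CLIP text representation of the textual prompt.
    text_inputs = processor(text=[prompt], return_tensors="pt", padding=True).to(device)
    r_t = text_encoder(**text_inputs).pooler_output            # shape (1, d_text)
    # r_i(p): pooled CLIP image representation of the generated image G(prompt).
    image = generator(prompt).images[0]
    image_inputs = processor(images=image, return_tensors="pt").to(device)
    r_i = image_encoder(**image_inputs).pooler_output           # shape (1, d_image)
    # Concatenate the two modalities into one visually-enhanced phrase embedding.
    return torch.cat([r_t, r_i], dim=-1).squeeze(0)

embedding = visually_enhanced_embedding("Golan Heights")
```

In this sketch, the concatenated embedding would then be passed to the downstream entity clustering step evaluated in the next section.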
| CoNLL2003 | BC5CDR | W-NUT 2017 | MIT-Movie | Average | | | | | | | | |---------------------|----------------|--------------|-------------|-----------|------|------|------|------|------|------|------| | ACC | NMI | ACC | NMI | ACC | NMI | ACC | NMI | ACC | NMI | | | | BERT-base | .394 | .021 | .711 | .201 | .252 | .026 | .589 | .014 | .486 | .065 | | | BERT-large | .415 | .020 | .551 | .005 | .318 | .025 | .680 | .013 | .490 | .016 | | | RoBERTa-base | .633 | .362 | .519 | .001 | .425 | .211 | .697 | .227 | .568 | .200 | | | RoBERTa-large | .601 | .241 | .744 | .294 | .379 | .057 | .541 | .005 | .566 | .149 | | | LUKE-base | .653 | .281 | .519 | .006 | .301 | .199 | .843 | .343 | .570 | .207 | | | LUKE-large | .688 | .348 | .756 | .340 | .324 | .208 | .734 | .271 | .625 | .292 | | | Phrase-BERT (2021) | .619 | .339 | .597 | .061 | .423 | .246 | .914 | .559 | .638 | .301 | | | UCTopic (2022a) | .682 | .335 | .933 | .677 | .287 | .140 | .807 | .307 | .677 | .365 | | | + Contextual Prompt | .759 | .425 | .946 | .710 | .391 | .387 | .601 | .107 | .674 | .407 | | | CLIP Text (2022) | .728 | .392 | .521 | .003 | .464 | .320 | .784 | .358 | .624 | .268 | | | + Contextual Prompt | .743 | .460 | .831 | .430 | .420 | .260 | .773 | .340 | .692 | .373 | | | Baselines Ours | Proposed Image | .738 | .414 | .734 | .197 | .432 | .293 | .895 | .525 | .698 | .357 | | Proposed Text-Image | .775 | .457 | .800 | .325 | .446 | .338 | .937 | .647 | .740 | .442 | | OursProposed Image .738 .414 .734 .197 .432 .293 .895 .525 .698 .357 Proposed Text-Image **.775** .457 .800 .325 .446 .338 .937 .647 **.740 .442** Table 1: Entity clustering results on four datasets. Proposed Image uses image representation. Proposed Text-Image uses both text and image representations. The best scores are marked in bold and the second-best ones are underlined. ## 3 Experiments To evaluate whether our visually-enhanced phrase embeddings provide improved semantic cues, we conduct a series of experiments focused on entity clustering, as our primary task is to categorize entity candidates with similar concepts only based on phrase representations in an unsupervised fashion. ## 3.1 Setup Our experiments are conducted on four diverse datasets, each with annotated entities from various domains: - CoNLL2003 (Sang and De Meulder, 2003) comprises 20,744 sentences, incorporating four types of entities: persons (PER), organizations (ORG), locations (LOC), and miscellaneous names (MISC). - BC5CDR (Li et al., 2016) is formed from 1,500 PubMed articles and contains chemical and disease entities. - W-NUT 2017 (Derczynski et al., 2017) is collected from public platforms, including YouTube and Twitter, with a focus on identifying previously unseen entities in emerging discussions. It includes six types of entities. - MIT-Movie (Liu et al., 2013) contains 12,218 sentences featuring title and person entities. Following previous research (Xu et al., 2017; Li et al., 2022b; Yan et al., 2022), we implement K-means clustering on the cross-modal representations to perform unsupervised phrase understanding tasks. In this setup, the number of clusters is set to the number of classes present in the dataset. The Hungarian algorithm (Papadimitriou and Steiglitz, 1998) is employed to optimally allocate each cluster to a class. To evaluate the quality of the representations and compare them fairly with the previous work, we employ accuracy (ACC) and normalized mutual information (NMI) as our evaluation metrics. 
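A hedged sketch of this evaluation protocol (K-means on the phrase embeddings, Hungarian matching of clusters to gold classes, then ACC and NMI) is given below; the use of scikit-learn and SciPy and the 0-indexed integer label encoding are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(labels: np.ndarray, clusters: np.ndarray) -> float:
    """Map each cluster to a class with the Hungarian algorithm, then score accuracy."""
    n = max(labels.max(), clusters.max()) + 1
    # counts[i, j] = number of points in cluster i whose gold class is j.
    counts = np.zeros((n, n), dtype=np.int64)
    for c, y in zip(clusters, labels):
        counts[c, y] += 1
    row_ind, col_ind = linear_sum_assignment(-counts)   # negate to maximize matches
    return counts[row_ind, col_ind].sum() / labels.size

def evaluate(embeddings: np.ndarray, labels: np.ndarray, n_classes: int, seed: int = 0):
    clusters = KMeans(n_clusters=n_classes, random_state=seed, n_init=10).fit_predict(embeddings)
    return clustering_accuracy(labels, clusters), normalized_mutual_info_score(labels, clusters)
```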
The results reported are averages over five separate clustering runs. For our proposed image and text-image approaches, we conduct runs over three seeds for diffusion models to generate images. ## 3.2 Baselines We position our model in comparison to various language models and phrase understanding models to validate the effectiveness of our cross-modal framework. The used representations are the same as described in the prior work. - **BERT/RoBERTa** are well-established pretrained language models (Devlin et al., 2019; Liu et al., 2019) capable of distilling intrinsic patterns from input texts into meaningful representations.2 - **LUKE** (Yamada et al., 2020) enhances RoBERTa by introducing entity embeddings to the input, as well as an entity-aware attention mechanism.3 - **Phrase-BERT** (Wang et al., 2021) refines BERT using a contrastive objective to gen- | Proposed | CoNLL2003 | BC5CDR | W-NUT 2017 | MIT-Movie | | | | | |------------|-------------|-----------|--------------|-------------|-----------|-----------|-----------|-----------| | ACC | NMI | ACC | NMI | ACC | NMI | ACC | NMI | | | Image | .738±.025 | .414±.028 | .734±.033 | .197±.069 | .432±.024 | .293±.035 | .895±.034 | .525±.056 | | Text-Image | .775±.009 | .457±.016 | .800±.031 | .325±.070 | .446±.015 | .338±.015 | .937±.001 | .647±.013 | Text Encoder Image Encoder CoNLL2003 BC5CDR W-NUT 2017 MIT-Movie **Average** ACC NMI ACC NMI ACC NMI ACC NMI **ACC NMI** RoBERTa-base - .633 .362 .519 .001 .425 .211 .697 .227 .568 .200 - ViT-B/32 .629 .343 .668 .109 .380 .238 .895 .523 .643 .303 RoBERTa-base ViT-B/32 .656 .361 .668 .109 .386 .237 .894 .521 .651 .307 CLIP Text - .728 .392 .521 .003 **.464** .320 .784 .358 .624 .268 - CLIP ViT-B/32 .749 .423 .757 .197 .426 .279 .928 .600 .710 .375 CLIP Text CLIP ViT-B/32 .771 .451 **.844 .406** .434 .332 .935 .641 **.746 .458** Table 3: Comparison of the separately pre-trained encoders and CLIP over one diffusion model run. CLIP ViT-B/32 is the image encoder of CLIP where the architecture is the same as ViT-B/32. Best results are marked in bold. erate more powerful phrase representations.4 - **UCTopic** (Li et al., 2022a) employs an unsupervised contrastive learning strategy, with LUKE serving as the foundational model, to create robust and context-aware embeddings.5 - **CLIP Text** (Yan et al., 2022) leverages the text encoder of CLIP for understanding.6 ## 3.3 Results The evaluation results are presented in Table 1. Our proposed visually-enhanced representations outperform all baselines on the CoNLL2003 and MIT-Movie datasets, while achieving competitive performance on the BC5CDR and W-NUT 2017 datasets. Moreover, solely utilizing image representations encoded from generated images yields a higher average ACC than all the baselines. This suggests that the visual signal offers valuable cues for enhanced phrase understanding. Hence, we conclude that integrating different modalities can effectively augment phrase representations. For a more granular understanding, we provide detailed scores across multiple turns in Table 2. The lower standard deviation of our proposed text-image approach indicates superior stability. ## 3.4 Analysis Of Different Encoders To further investigate whether the CLIP encoders, pre-trained jointly, are more effective for visual enhancement, we compare them with image and text encoders that have been pre-trained individually. Table 3 presents the experimental results, where we substitute the text and image encoders of CLIP with RoBERTa-base and ViT-B/32 respectively. 
We notice that phrase representations augmented by ViT-B/32 outperform textual representations, which suggests the richness of information drawn from multiple modalities. It is evident that CLIP encoders surpass individually pre-trained encoders, implying that text and image encoders, when pre-trained together, can more effectively enrich phrase representations by integrating text and image at the representation level. ## 3.5 Contextual Prompt Previous work (Yan et al., 2022) demonstrated that enriching phrase candidates with a large pre-trained language model can yield more domain-specific keywords for textual prompts. Specifically, given a phrase p, the prompt "p is a [MASK]" is fed into a language model, which in turn returns the top K predictions {m1, m2*,...,m*K} for the [MASK] token. Subsequently, we formulate the contextual prompt as "A photo of p, a m1, m2*,...,m*K." In this paper, we set K to 3 for the contextual prompts. Table 1 shows that the addition of such contextual prompts enhances the performance of text-only baselines. We further probe into whether a contextual prompt can boost our performance and present the results in Table 4. Our observation is that utilizing contextual prompts for text embeddings yields comparable performance, indicating that our visual cues already encompass the domain-specific signal. We hypothesize that generating images from | Approach | Text Input | Image Input | CoNLL2003 | BC5CDR | W-NUT 2017 | MIT-Movie | Average | | | | | | |---------------------|--------------|---------------|-------------|----------|--------------|-------------|-----------|------|------|------|------|------| | ACC | NMI | ACC | NMI | ACC | NMI | ACC | NMI | ACC | NMI | | | | | Proposed Text-Image | Vanilla | G(Vanilla) | .771 | .451 | .844 | .406 | .434 | .332 | .935 | .641 | .746 | .458 | | Proposed Text-Image | Contextual | G(Vanilla) | .766 | .445 | .853 | .429 | .424 | .308 | .937 | .643 | .745 | .456 | | Proposed Text-Image | Contextual | G(Contextual) | .742 | .406 | .872 | .503 | .409 | .236 | .888 | .487 | .728 | .408 | | CLIP Text | Contextual | - | .743 | .460 | .831 | .430 | .420 | .260 | .773 | .340 | .692 | .373 | Proposed Text-Image Vanilla G(*Vanilla*) .771 .451 .844 .406 .434 .332 .935 .641 .746 .458 Proposed Text-Image Contextual G(*Vanilla*) .766 .445 .853 .429 .424 .308 .937 .643 .745 .456 Proposed Text-Image Contextual G(*Contextual*) .742 .406 .872 .503 .409 .236 .888 .487 .728 .408 CLIP Text *Contextual* - .743 .460 .831 .430 .420 .260 .773 .340 .692 .373 Table 4: The utility of contextual prompt. *Vanilla*: "A photo of p."; *Contextual*: "A photo of p, a m1, m2, m3." (p is the entity and m1, m2, m3 are the keywords of p.) ![4_image_0.png](4_image_0.png) ![4_image_1.png](4_image_1.png) → location) Figure 2: Our generated images with the associated phrases. contextual prompts may introduce more noise, resulting in difficulty encoding effective visual representations for phrase understanding. Notably, our baseline setting already achieves significantly improved performance compared with earlier work utilizing additional keywords, demonstrating the informativeness of our cross-modal representations. ## 3.6 Qualitative Analysis To further examine how our visual cues enhance text understanding, we present several generated images along with their understanding results in Figure 2. Previous work, CLIP Text, incorrectly classifies "Mpumulanga" and "Golan" as PER (persons). 
However, with the visual cues generated in our model, shown in Figure 2(a-b), we can correctly classify them as LOC (locations). The images generated by our model, displayed in Figure 2(c-f), further enrich the phrase representations and better understand the concepts. This demonstrates the effectiveness of our multi-modal framework. However, there are cases where the generated image may lead to incorrect categorization, as is the case with "BAYERISCHE VEREINSBANK" in Figure 2(g). The image misled the categorization process, changing the cluster from the correct classification (ORG, or organization) to an incorrect one (LOC, or location). Figure 2(h) displays an instance where the generated image does not provide useful visual information for an unusual entity, and the incorrect classification (group) persists. Therefore, there is still room for enhancement in future work. ## 4 Conclusion This work presents a multi-modal framework that leverages a text-to-image model to bridge between language and visual modalities for enhancing text comprehension. The model effectively transforms text inputs into coherent images, enriching phrase representations by merging outputs from different modalities. Experimental results show our framework surpassing robust phrase understanding models across diverse domains. ## Limitations Due to the maximum input length constraint of both the CLIP text encoder and the text-to-image model, we are unable to process long texts. We are interested in exploring alternative prompt configurations to circumvent this limitation. Our methodology is readily extendable to these settings, making it an intriguing area of study. ## Ethics Statement Our approach leverages a pre-trained text-to-image model to visually enhance representations. However, the text-to-image model may carry over biases and improper content from its training data. This necessitates additional analyses to safeguard against any undue influence of these biases on our method. ## Acknowledgements We thank the reviewers for their insightful comments. This work was financially supported by the Young Scholar Fellowship Program by the National Science and Technology Council (NSTC) in Taiwan, under Grants 111-2222-E-002-013-MY3 and 111-2628-E-002-016. ## References Leon Derczynski, Eric Nichols, Marieke van Erp, and Nut Limsopatham. 2017. Results of the WNUT2017 shared task on novel and emerging entity recognition. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 140–147, Copenhagen, Denmark. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Jiacheng Li, Jingbo Shang, and Julian McAuley. 2022a. UCTopic: Unsupervised contrastive learning for phrase representations and topic mining. In *Proceedings of the 60th Annual Meeting of the Association for* Computational Linguistics (Volume 1: Long Papers), pages 6159–6169. Jiacheng Li, Jingbo Shang, and Julian McAuley. 2022b. UCTopic: Unsupervised contrastive learning for phrase representations and topic mining. 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6159–6169, Dublin, Ireland. Association for Computational Linguistics. Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database, 2016. Jingjing Liu, Panupong Pasupat, Yining Wang, Scott Cyphers, and Jim Glass. 2013. Query understanding enhanced by hierarchical parsing structures. In 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, pages 72–77. IEEE. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Jonathan Long, Evan Shelhamer, and Trevor Darrell. 2015. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431–3440. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. *Advances in neural information processing systems*, 32. Yujie Lu, Wanrong Zhu, Xin Wang, Miguel Eckstein, and William Yang Wang. 2022. Imaginationaugmented natural language understanding. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 4392–4402. Christos H Papadimitriou and Kenneth Steiglitz. 1998. Combinatorial optimization: algorithms and complexity. Courier Corporation. Sara F Popham, Alexander G Huth, Natalia Y Bilenko, Fatma Deniz, James S Gao, Anwar O Nunez-Elizalde, and Jack L Gallant. 2021. Visual and linguistic semantic representations are aligned at the border of human visual cortex. *Nature neuroscience*, 24(11):1628–1636. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. Highresolution image synthesis with latent diffusion models. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pages 10684–10695. Erik Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Languageindependent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–147. Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade W Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. 2022. LAION-5B: An open large-scale dataset for training next generation image-text models. In *Thirty-sixth Conference on* Neural Information Processing Systems Datasets and Benchmarks Track. Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2019. VL-BERT: Pretraining of generic visual-linguistic representations. In *International Conference on Learning Representations*. Hao Tan and Mohit Bansal. 2020. Vokenization: Improving language understanding with contextualized, visual-grounded supervision. 
In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2066–2080. Shufan Wang, Laure Thompson, and Mohit Iyyer. 2021. Phrase-BERT: Improved phrase embeddings from bert with an application to corpus exploration. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 10837– 10851. Jiaming Xu, Bo Xu, Peng Wang, Suncong Zheng, Guanhua Tian, and Jun Zhao. 2017. Self-taught convolutional neural networks for short text clustering. *Neural Networks*, 88:22–31. Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entityaware self-attention. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6442–6454, Online. Association for Computational Linguistics. An Yan, Jiacheng Li, Wanrong Zhu, Yujie Lu, William Yang Wang, and Julian McAuley. 2022. CLIP also understands text: Prompting clip for phrase understanding. *arXiv preprint* arXiv:2210.05836. ## A Datasets - **CoNLL2003**: This dataset comprises 20,744 sentences with four distinct types of entities - persons (PER), organizations (ORG), locations (LOC), and miscellaneous names (MISC). Our experiments utilize 30,027 entities that are labeled as PER, ORG, or LOC. - **BC5CDR**: This dataset features 1,500 PubMed articles that are populated with chemical and disease entities, adding up to a total of 28,354 entities. - **W-NUT 2017**: This dataset is an accumulation of data collected from public platforms like YouTube and Twitter, with a focus on distinguishing previously unseen entities within emerging discussions. It includes six types of entities: person, location, group, corporation, creative_work, and product. The dataset contains a total of 3,890 entities. - **MIT-Movie**: This dataset includes 12,218 sentences populated with title and person entities, accounting for a total of 9,920 entities. ## B Implementation Details In our work, we use the Huggingface models to generate all the representations: - **BERT/RoBERTa**: We take pooler_output as the representations, where pooler_output is the classification token after processing through a linear layer and an activation function.7 The linear layer weights are learned by next sentence prediction during pre-training. - **LUKE**: entity_last_hidden_states is used as the representation, which is the last hidden states of the input entity.8 - **Phrase-BERT**: Phrase representations can be easily acquired by calling model.encode(). 9 - **UCTopic**: We obtain the phrase representations with the released source code.10 - **CLIP**: pooler_output is taken as the representation for both the text encoder11 and the image encoder.12 ## C Pre-Trained Models For the pre-trained CLIP model, we adopt the version ViT-B/32, which consists of a ViT-B/32 image encoder and a 12-layer Transformer text encoder. 
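As a rough illustration of the representation extraction described in Appendix B, the pooled baseline embeddings can be obtained roughly as follows; the checkpoints, the example phrase, and the character-level entity span are placeholders rather than the exact settings used in the experiments.

```python
import torch
from transformers import AutoTokenizer, AutoModel, LukeTokenizer, LukeModel

@torch.no_grad()
def pooled_embedding(model_name: str, phrase: str) -> torch.Tensor:
    """pooler_output: the classification token passed through a linear layer + activation."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    inputs = tokenizer(phrase, return_tensors="pt")
    return model(**inputs).pooler_output.squeeze(0)

@torch.no_grad()
def luke_entity_embedding(sentence: str, span) -> torch.Tensor:
    """Last hidden state of the marked entity span (entity_last_hidden_state)."""
    tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
    model = LukeModel.from_pretrained("studio-ousia/luke-base")
    inputs = tokenizer(sentence, entity_spans=[span], return_tensors="pt")
    return model(**inputs).entity_last_hidden_state.squeeze(0)[0]

bert_emb = pooled_embedding("bert-base-uncased", "Golan Heights")
luke_emb = luke_entity_embedding("Golan Heights is a region.", (0, 13))
```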
7https://huggingface.co/docs/transformers/ main_classes/output\#transformers.modeling_ outputs.BaseModelOutputWithPooling.pooler_output 8https://huggingface.co/docs/transformers/ model_doc/luke\#transformers.LukeModel 9https://huggingface.co/whaleloops/ phrase-bert 10https://github.com/JiachengLi1995/UCTopic/ blob/main/clustering.py\#L43 11https://huggingface.co/docs/transformers/ model_doc/clip\#transformers.CLIPTextModel 12https://huggingface.co/docs/transformers/ model_doc/clip\#transformers.CLIPVisionModel | Approach | Inference steps | CoNLL2003 | BC5CDR | W-NUT 2017 | MIT-Movie | Average | | | | | | |---------------------|-------------------|-------------|----------|--------------|-------------|-----------|------|------|------|------|------| | ACC | NMI | ACC | NMI | ACC | NMI | ACC | NMI | ACC | NMI | | | | 10 | .746 | .420 | .722 | .153 | .446 | .308 | .932 | .615 | .712 | .374 | | | Proposed Image | 30 | .745 | .420 | .759 | .198 | .435 | .285 | .929 | .604 | .717 | .377 | | 50 | .749 | .423 | .757 | .197 | .426 | .279 | .928 | .600 | .715 | .375 | | | 10 | .783 | .474 | .824 | .348 | .427 | .315 | .940 | .652 | .744 | .447 | | | Proposed Text-Image | 30 | .771 | .450 | .849 | .430 | .443 | .341 | .937 | .646 | .750 | .467 | | 50 | .772 | .451 | .844 | .406 | .434 | .332 | .935 | .641 | .746 | .458 | | Table 5: Comparison on different inference steps of stable diffusion. The reported numbers are run over one Stable Diffusion seed. For the text-to-image diffusion model, we use stable diffusion v2-base13 trained on the subset of LAION-5B (Schuhmann et al., 2022) in our experiments. ## D Inference Details We conduct our experiments on single V100 GPU. - Generation time of stable diffusion v2-base with respect to inference steps is elaborated in Appendix E. - Each clustering experiment takes no more than 10 minutes to run. ## D.1 Licenses - BERT (Apache License Version 2.0) - RoBERTa (MIT License) - LUKE (Apache License Version 2.0) - Phrase-BERT (T License) - UCTopic (MIT License) - CLIP (MIT License) - vit-base-patch32-224-in21k (Apache License Version 2.0) - stable-diffusion-2 (CreativeML Open RAIL++-M License) ## E Efficiency Vs. Efficacy Results over different inference steps of stable diffusion v2-base are shown in Table 5. It took 0.84 seconds per image for inference step 10, 2.02 seconds per image for inference step 30, and 3.24 seconds per image for inference step 50. The balance between efficiency and efficacy depends on application usage. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? section: Limitations (between Conclusion and Reference) ✓ A2. Did you discuss any potential risks of your work? Ethnics Statement ✓ A3. Do the abstract and introduction summarize the paper's main claims? section: Abstract, 1. Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section: 3.1 3.2 Appendix ✓ B1. Did you cite the creators of artifacts you used? section: 3.1 3.2 Appendix Reference ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? section: Appendix ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? 
For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? We have followed the licenses and don't have a conflict with the artifacts' intended use. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We have checked that we don't use images related to people in our paper. ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? We have referenced the datasets by citing them and attaching the URLs in the paper. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. section: Appendix ## C ✓ **Did You Run Computational Experiments?** Section: 3. Experiments ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section: 3. Experiments ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? section: 3. Experiments ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? section: Appendix D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
gaur-saunshi-2023-reasoning
Reasoning in Large Language Models Through Symbolic Math Word Problems
https://aclanthology.org/2023.findings-acl.364
Large language models (LLMs) have revolutionized NLP by solving downstream tasks with little to no labeled data. Despite their versatile abilities, the larger question of their ability to reason remains ill-understood. This paper addresses reasoning in math word problems (MWPs) by studying symbolic versions of the numeric problems, since a symbolic expression is a "concise explanation" of the numeric answer. We create and use a symbolic version of the SVAMP dataset and find that GPT-3's davinci-002 model also has good zero-shot accuracy on symbolic MWPs. To evaluate the faithfulness of the model's reasoning, we go beyond accuracy and additionally evaluate the alignment between the final answer and the outputted reasoning, which correspond to numeric and symbolic answers respectively for MWPs. We explore a self-prompting approach to encourage the symbolic reasoning to align with the numeric answer, thus equipping the LLM with the ability to provide a concise and verifiable reasoning and making it more interpretable. Surprisingly, self-prompting also improves the symbolic accuracy to be higher than both the numeric and symbolic accuracies, thus providing an ensembling effect. The SVAMP-Sym dataset will be released for future research on symbolic math problems.
# Reasoning In Large Language Models Through Symbolic Math Word Problems Vedant Gaur∗ Aragon High School [email protected] Nikunj Saunshi† Google Research, New York [email protected] ## Abstract Large language models (LLMs) have revolutionized NLP by solving downstream tasks with little to no labeled data. Despite their versatile abilities, the larger question of their ability to reason remains ill-understood. This paper addresses reasoning in math word problems (MWPs) by studying symbolic versions of the numeric problems, since a symbolic expression is a "concise explanation" of the numeric answer. We create and use a symbolic version of the SVAMP dataset and find that GPT-3's davinci-002 model also has good zeroshot accuracy on symbolic MWPs. To evaluate the faithfulness of the model's reasoning, we go beyond accuracy and additionally evaluate the *alignment* between the final answer and the outputted reasoning, which correspond to numeric and symbolic answers respectively for MWPs. We explore a *self-prompting* approach to encourage the symbolic reasoning to align with the numeric answer, thus equipping the LLM with the ability to provide a concise and verifiable reasoning and making it more interpretable. Surprisingly, self-prompting also improves the symbolic accuracy to be higher than both the numeric and symbolic accuracies, thus providing an ensembling effect. The SVAMP-Sym dataset will be released for future research on symbolic math problems. ## 1 Introduction Large language models (LLMs), with hundreds of billions of parameters, can solve a wide range of NLP tasks such as machine translation, question-answering, etc., taking us closer to general-purpose intelligent agents. The initial success of GPT-3 (Brown et al., 2020) has led to many other LLMs (Rae et al., 2021; Smith et al., 2022; Chowdhery et al., 2022) which have, perhaps surprisingly, taken big strides in solving ∗Some clarification on affiliation. † Most of the work was performed while at Princeton University and after graduating, but before joining Google. difficult tasks like common sense reasoning, math and science problems (Lewkowycz et al., 2022), and writing code (Li et al., 2022). Despite the incredible successes, we have little understanding of why LLMs are effective at problems that require reasoning. In fact we have limited techniques to quantifiably study the question of reasoning beyond just evaluating accuracy. Recent ideas like Chain-of-Thought prompting (CoT) (Wei et al., 2022b; Kojima et al., 2022) encourage the model to "think step by step" and output a verbose reasoning in text. However, verifying such reasoning at scale will incur the infeasible cost of manually going over the text outputs. Furthermore, we would like the model's reasoning to be consistent with its outputted answer, in order to trust the presented reasoning. For these considerations, we would like our models to output a *concise* reasoning or explanation for its answer that can be *automatically verified*. In particular, we desire reasoning in the form of explanations that are - Verifiable: For ease of evaluating correctness of the outputted reasoning, and - Concise: For scalability of verification. Manually going through text reasoning can quickly get cumbersome For instance, instead of a text description of an algorithm to solve a problem, a Python implementation of the algorithm would be a more concise explanation for the reasoning behind the algorithm1. 
Similarly, a simple linear model or decision tree explaining the answers of a black-box neural network also achieves the same goal (Ribeiro et al., 2016). Concise explanations can provide clearer insights into the reasoning abilities of models, and verifiable explanations aid interpretability and help foster trust in models, in 1We can automatically verify the answer not just for one problem, but for all instance of that problem line with explainable AI (Samek et al., 2019). In this work we use concise and verifiable explanations to study reasoning abilities of LLMs in math word problems (MWPs). LLMs have shown to achieve good zero-shot accuracy on many numeric MWP benchmarks (Kojima et al., 2022). Chain-of-thought like ideas encourage LLMs to first general a step-by-step explanation (in text) before generating the answer. However, this does not satisfy the criteria of being concise or easily verifiable2. We address reasoning by considering symbolic versions of numeric MWPs, because a symbolic expression can be viewed as a concise explanation for a numeric answer and can also be automatically evaluated. Thus in this reasoning framework for MWPs, we require an LLM to output both, a numeric answer and a concise symbolic expression, such that we have: (1) high accuracy for the predicted numeric answer, (2) high alignment of the symbolic expression with the predicted numeric answer. While most prior studies focus on goal (1), we argue that goal (2) is equally important for interpretability of these models and to trust the its reasoning. Our main finding is that LLMs can also do reasonably well on goal (2), either by generating a numeric answer and symbolic explanation together, or by generating the answer first and then a post-hoc symbolic explanation. In this context, we make the following contributions: Symbolic evaluation. We construct a symbolic version of the SVAMP dataset (Patel et al., 2021) called SVAMP-Sym to evaluate LLMs. Firstly we find, perhaps surprisingly, that GPT-3's davinci-002 model already achieves good zero-shot accuracy on symbolic problems (64.2%), comparable to the numeric accuracy of 68.9%. Secondly, this observation provides a simple way to get good accuracy and alignment for numeric problems by first solving symbolic versions and then substituting back the values for variables. This approach generates the numeric answer and a symbolic explanation in one go, thus trivially achieving3an accuracy of 64.2% and alignment of 100%. Self-prompting. There are two key drawbacks with the above approach: (a) symbolic accuracy of 64.2% is lower than the numeric accuracy (68.9%), (b) alignment of symbolic expressions, as post-hoc explanation to the original numeric answers, is very low (∼ 50%). To get a better post-hoc explanation, we propose a novel *self-prompting* approach that first prompts the LLM with the numeric problem and its response to the problem, and then asks it to solve the symbolic problem; see Figure 1. Self-prompting significantly improves alignment with numeric answers to 74% (a 24% absolute improvement). Surprisingly, self-prompting also improves the symbolic accuracy to 71.7%, higher than both the raw numeric and symbolic accuracies of 68.9% and 64.2% respectively. This suggests that self-prompting has an ensembling effect. We perform further ablation studies and analyses and hope that these insights will aid future work on using LLMs for reasoning problems. 
## 1.1 Related Work Language models like GPT-3 (Brown et al., 2020) and MLMs like BERT (Devlin et al., 2019) have demonstrated impressive emergent behaviors (Wei et al., 2022a) at scale. For math problems, Minerva (Lewkowycz et al., 2022) was fine-tuned from PaLM (Chowdhery et al., 2022) to do well on many MWP benchmarks. Instead of fine-tuning, Wei et al. (2022b) uses in-context learning and finds that asking the model to "think step by step" (CoT prompting) improves few-shot accuracy on MWPs; Kojima et al. (2022) verify this for zero-shot setting as well, which is the focus of our work. There is limited theoretical work for the downstream success of LMs (Saunshi et al., 2021; Xie et al., 2022) and the emergent behaviors of LLMs through scaling laws (Kaplan et al., 2020). Our idea of self-prompting is motivated by the efficacy of in-context learning (Brown et al., 2020) and prompting (Liu et al., 2023) in LMs. The ensembling effect of self-prompting idea could be related to self-calibration abilities of LMs (Kadavath et al., 2022). Finally, Ho et al. (2022) survey the progress of LMs on various notions of reasoning; we consider a weaker notion of "concise post-hoc explanations" here. ## 2 Math Word Problems With Llms 2.1 Svamp-Sym Dataset We choose the SVAMP dataset (Patel et al., 2021) for testing LMs on MWPs because it provides numeric answers in the form of numeric expressions (rather than just numeric values). This ![2_image_0.png](2_image_0.png) lets us automatically convert the dataset into a symbolized version, without manual annotation. The main idea is to replace all occurrences of numbers in the problem statement with newly introduced variables, e.g. (w,x,y,z). Appendix A provides more details on the dataset construction. The dataset is released in https://github.com/ vedantgaur/Symbolic-MWP-Reasoning. ## 2.2 Querying And Evaluating Lms Broadly, our evaluation pipeline has four phases: (1) get a verbose response from the LLM for the math problem, (2) prompt the LLM to extract just the answer (number or symbolic expression) from its initial response, (3) refine the extracted answer using a novel *filtering* step, (4) compare the filtered answer to the ground-truth answer. Initial response. We query the LM with the problem statement and an optional CoT prompt, i.e. "Q: <Problem> A:" or "Q: <Problem> A: Let's think step by step.". <Problem> could either be a numeric or symbolic problem. Table 3 summarizes the prompts used for various settings. Answer extraction. Since the LLM outputs a long text response (Figure 1), we use an extraction prompt to isolate the answer, similar to Kojima et al. (2022). We query the LM with the transcript so far, followed by the question and the prompt "The final answer (only the number) is:" to isolate the numeric answer. Table 3 has the similar prompt for symbolic problems. Answer filtering. The extraction prompt does not always isolate the final answer and sometimes outputs a sentence, especially for symbolic problems. Thus we add a LM-independent filtering step which includes stripping escape sequences, removing commas, de-latexifying equations, picking the longest symbolic expression, among others; more details in Appendix C.2. Answer evaluation. We compare the filtered answer to the ground-truth answer (symbolized expression or numeric value). Since there are multiple ways to express the same symbolic expression (e.g. "w + (y + x)" and "w + x + y"), we compare two expressions through their evaluations on 20 random variable assignments. 
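A minimal sketch of this equivalence check is shown below; parsing with SymPy, sampling integer assignments in [1, 100], and the numeric tolerance are assumptions made for illustration.

```python
import random
from sympy import symbols, sympify

def expressions_equivalent(expr_a: str, expr_b: str,
                           var_names=("w", "x", "y", "z"),
                           n_trials: int = 20, tol: float = 1e-6) -> bool:
    """Adjudge two symbolic answers equivalent if they agree on n_trials random assignments."""
    variables = symbols(var_names)
    try:
        a, b = sympify(expr_a), sympify(expr_b)
    except Exception:
        return False
    for _ in range(n_trials):
        assignment = {v: random.randint(1, 100) for v in variables}
        try:
            va, vb = float(a.subs(assignment)), float(b.subs(assignment))
        except (TypeError, ZeroDivisionError):
            return False
        if abs(va - vb) > tol:
            return False
    return True

assert expressions_equivalent("w + (y + x)", "w + x + y")
```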
If they match on all 20 assignments, we adjudge them to be equivalent, making a (reasonable) assumption that 20 random assignments will avoid false positives. ## 3 Experimental Results We pick 150/1000 examples from the SVAMP dataset (due to budget constraints) and run each examples 5 times. We use GPT-3's davinci-002 model with temperature 0.0 for (mostly) deterministic outputs, with a max token length of 256. ## 3.1 Numeric And Symbolic Evaluations We discuss the accuracies for solving numeric and symbolic math problems from SVAMP and SVAMP-Sym respectively. Numeric accuracy. The zero-shot numeric accuracy both with chain-of-thought (CoT) prompt and without (vanilla) are presented in Table 1; they are 68.9% and 65.6% respectively. This good performance is unsurprising given prior work (Kojima et al., 2022). Our accuracies are ∼ 5-7% higher than Kojima et al. (2022), due in part to better answer extraction and filtering. Symbolic accuracy. We also evaluate raw symbolic problems from SVAMP-Sym in the vanilla and CoT settings with 3 natural choices for variables: (w,x,y,z), (i,j,k,l) and (p,q,r,s). Firstly we observe, in Table 1, that GPT-3 can | Numeric | Symbolic | | | | | | | |---------------|-------------|-------------|-------------|-----------|------|------|------| | (w,x,y,z) | (p,q,r,s) | (i,j,k,l) | | | | | | | Evaluation | Raw (-F) | Raw (-F) | SP (-F) | SP + AP | Raw | Raw | | | Accuracy | Vanilla | 65.6 (61.6) | 59.7 (47.6) | 61.9 (40) | 68.3 | 62.3 | 53.5 | | CoT | 68.9 (65.9) | 64.2 (48.8) | 67.9 (48.6) | 71.7 | 64.4 | 58.4 | | | Alignment | Vanilla | - | 52.9 (40.7) | 60.3 (40) | 64.9 | 56.3 | 44.7 | | CoT | - | 51.2 (39.1) | 63.1 (44.9) | 74 | 51.9 | 47.1 | | | Similarity | Vanilla | - | 27.8 | 44.2 | 49.8 | 27.1 | 26.8 | | (BLEU) | CoT | - | 21.3 | 53.9 | 57.6 | 22.7 | 21.4 | | Similarity | Vanilla | - | 56.5 | 65.2 | 71.3 | 56.8 | 55.4 | | (Levenshtein) | CoT | - | 44.9 | 75.6 | 79.8 | 45.4 | 43.9 | achieve pretty high symbolic accuracies with variables (w,x,y,z): vanilla and CoT settings achieve 59.7% and 64.2% respectively, which is just 4-5% lower than numeric accuracy. Furthermore, we notice that variables (i,j,k,l) have slightly worse accuracy than other variable settings, possibly because (w,x,y,z) and (p,q,r,s) are more popular choice for variables in the training data for language models. Effect of filtering. We report the accuracies without the filtering step in Table 1; these are the (-F) entries. While there is a 4-5% drop in the numeric accuracy without filtering, the drop is 12-14% for symbolic problems, suggesting that filtering is much more crucial for symbolic problems4. Our extraction and filtering steps still have issues and there is scope for improvement. ## 3.2 Reasoning And Alignment While prior work only cares about the accuracy on MWPs, we also study of reasoning abilities of LLMs by requiring them to generate a concise explanation for numeric answers in the form of a symbolic expressions. We evaluate "reasoning ability" through an alignment metric that checks if the outputted numeric answer and symbolic expression compute to the same value. In general there is no consistent zero-shot method to return a perfectly aligned symbolic expression. A natural 4Intuitively it makes sense that extracting an expression/equation is harder than extracting a single number attempt to generate such an expression is to directly solve the symbolic versions of numeric problem. However this approach has very low alignment, i.e. 
the symbolic output does not reflect the way in which the model solved the numeric problem. Specifically in Table 1, the average alignment score for raw symbolic outputs is only 52.9% and 51.2% for Vanilla and CoT respectively. This motivates self-prompting. ## 3.3 Self-Prompting In order to improve alignment, we propose a twostep procedure that first inputs the numeric MWP and the LM's response to it, followed by the symbolic version of the MWP. In particular the prompt looks like "Q: <Numeric Question> A: <Model Response> Q: <Symbolic Question> A:". Given the in-context tendencies of LMs, we hope that this encourages the symbolic response to imitate the numeric response and thus return a well aligned expression. We find in Table 1 that this approach (termed SP) indeed improves the alignment by ∼ 10% over the naive approach. We take this one step further: whenever the numeric and symbolic answers do not align, we add another "alignment prompt" before the symbolic problem that explicitly asks the model to copy the numeric answer; see Table 3 for the exact format. Results in the **SP+AP** column of Table 1 verify that this leads to another 11% improvement over SP and ∼ 22% improvement 5892 over raw symbolic. Surprisingly we find that SP+AP has higher accuracy than raw numeric and raw symbolic, suggesting a "best of both worlds" or ensembling phenomenon in action. Further analysis in Figure 7 reveals how self-prompting combines the benefits of numeric and symbolic. We also compute the similarity between the full numeric and symbolic responses. Table 1 reveals that the average similarity is significantly higher for SP and **SP+AP** compared to raw symbolic. So not only do the answers align more but also the full text responses are very similar. Histograms of similarity scores can be found in Appendix B.1. Additional analyses and results can be found in Appendix B. ## 4 Conclusions And Future Work This paper studies reasoning in LLMs for MWPs and results suggest that LMs are good at zero-shot solving of symbolic MWPs, and that this ability can lead to concise explanations. Self-prompting emerges as a promising idea to generate better explanations and the ensembling effect demonstrated by it can potentially have other applications (left for future work). Alignment with self-prompting, while significantly better than with raw symbolic outputs, still has a lot of scope for improvement. Aspects that are not considered are few-shot learning of explanations and the role of temperature, which could improve accuracy and alignment. Finally the notion of "concise explanation" to study reasoning can have implications beyond MWPs. Broader Impact Statement. Given the incredible successes of LLMs, it is becoming increasingly important to study why they work and how to debug them when they are wrong. There are ongoing debates and discussions about whether LMs are simply "stochastic parrots" (Bender et al., 2021) or they actually "understand" language. Besides there are also privacy concerns (Carlini et al., 2021) associated with LLMs trained on extremely large corpora. Our work attempts to formalize a weak notion of "reasoning" in math problems that could help with improving the intepretability, and thus trustworthiness, of such models. This is extremely important if LLMs are to be deployed in real-life applications. That said, any preliminary notion or definition of "reasoning in LLMs", including the one in this paper, should be taken with a healthy dose of skepticism. 
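As a concrete illustration of the self-prompting format described in Section 3.3, the following minimal sketch shows how such a two-step query could be assembled. The prompt strings follow Table 3, but the exact layout and the placement of the optional alignment prompt are our own reading, not the authors' code.

```python
COT_PROMPT = "Let's think step by step."
ALIGN_PROMPT = ("Copy the above numeric response word to word but replace "
                "numbers with the right symbolic expression.")

def build_self_prompt(numeric_q: str, numeric_response: str,
                      symbolic_q: str, use_align_prompt: bool = False) -> str:
    """Two-step self-prompting query: the numeric problem and the model's own
    response are shown first, then the symbolic problem is asked."""
    parts = [f"Q: {numeric_q}", f"A: {COT_PROMPT} {numeric_response}"]
    if use_align_prompt:  # only added when the first attempt does not align
        parts.append(ALIGN_PROMPT)
    parts += [f"Q: {symbolic_q}", f"A: {COT_PROMPT}"]
    return "\n".join(parts)

print(build_self_prompt("Adam had 5 apples. He ate 2. How many are left?",
                        "He has 5 - 2 = 3 apples left. The answer is 3.",
                        "Adam had w apples. He ate x. How many are left?"))
```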
## References Emily M Bender, Timnit Gebru, Angelina McMillanMajor, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing* Systems. Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. 2021. Extracting training data from large language models. In *30th USENIX Security Symposium* (USENIX Security 21). Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. *arXiv preprint arXiv:2204.02311*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics. Vedant Gaur and Nikunj Saunshi. 2022. Symbolic math reasoning with language models. In *2022 IEEE MIT Undergraduate Research Technology Conference (URTC)*. Namgyu Ho, Laura Schmid, and Se-Young Yun. 2022. Large language models are reasoning teachers. arXiv preprint arXiv:2212.10071. Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022. Language models (mostly) know what they know. *arXiv preprint arXiv:2207.05221*. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. *arXiv preprint* arXiv:2001.08361. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. *arXiv preprint* arXiv:2205.11916. Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. 2022. Solving quantitative reasoning problems with language models. *arXiv preprint* arXiv:2206.14858. Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. 2022. Competition-level code generation with alphacode. *arXiv preprint arXiv:2203.07814*. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. *ACM Computing Surveys*. Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are NLP models really able to solve simple math word problems? 
In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models: Methods, analysis & insights from training gopher. *arXiv preprint* arXiv:2112.11446. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. " why should i trust you?" explaining the predictions of any classifier. In *Proceedings of the* 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144. Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert Müller. 2019. Explainable AI: interpreting, explaining and visualizing deep learning, volume 11700. Springer Nature. Nikunj Saunshi, Sadhika Malladi, and Sanjeev Arora. 2021. A mathematical exploration of why language models help solve downstream tasks. In International Conference on Learning Representations. Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, et al. 2022. Using deepspeed and megatron to train megatron-turing nlg 530b, a large-scale generative language model. *arXiv preprint arXiv:2201.11990*. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022a. Emergent abilities of large language models. *arXiv* preprint arXiv:2206.07682. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022b. Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*. Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. 2022. An explanation of in-context learning as implicit bayesian inference. In *International* Conference on Learning Representations. ## A Symbolized Dataset We employ a multi-step process to convert the original SVAMP (Patel et al., 2021) prompts into its symbolic version SVAMP-Sym, motivated by the symbolic construction in Gaur and Saunshi (2022). The SVAMP dataset is under the MIT License. Our SVAMP-Sym dataset has exactly the same set of 1000 problems as SVAMP. Given a common math word problem (MWP) from SVAMP, we parse through the given text for all numbers, which are stored in a list. Using regex, the index of the numbers can be found and replaced with keys for future replacement with variables. We use <i> (where i ∈ [1, 4] as there are at most four numbers in each problem definition) as keys. As shown in Figure 4, by generalizing the converted prompt, we allow for easy manipulation of prompts to whatever variable a user wants to use and test for downstream tasks. We then convert the keys to their respective variables. For our tests we primarily use the variables (w,x,y,z) for a few main reasons: 1. This set of variables is the most common in general mathematical word problems and thus makes the most sense to use as variables as opposed to an arbitrary sequence of random, or even consecutive letters. 2. We find that the use of variables such as x1, x2, ..., xn (x1, x2, ..., xn when inputted into the model) many times confuses the model into conflating the simulated subscript as a coefficient. 3. 
We are able to see that the model achieves similar, if not greater accuracies with the use of (w,x,y,z) as opposed to other sequences of variables, see Table 1. Moreover, the use of a predetermined length of variables is also possible due to the aforementioned maximum number of four numbers for each prompt in the SVAMP dataset. See Figure 4 for an example problem, its answer, and our symbolized version of it. ## B Ablations And Analysis B.1 Response Similarity To find the syntactical similarity between the numeric and symbolic responses, we employ two main metrics: BLEU Scores and Levenshtein Distances. BLEU score is a standard metric used to judge similarity between sentences based on the n-grams they share. Levenshtein distance (also known as edit distance) is a standard metric to distance between two strings: the minimum of swap/deletion/insertion operations that are needed to convert one string to another. To measure similarity between s1 and s2, we use (maxlen(s1, s2)− Levenshtein(s1, s2))/maxlen(s1, s2)/ Using the nltk.translate.bleu_score module, we define the average of BLEU-1, BLEU-2 and BLEU-3 metrics by passing weights=[1/3, 1/3, 1/3] in the sentence_bleu function. For computing Levenshtein Distances, we utilize the python-Levenshtein package's distance function. As described in the histograms presented in Figure 5 and Figure 6, we find much higher similarity scores when employing self-prompting. This logically follows the higher alignment values of such runs. More specifically, however, the similarity of the two scores is ultimately more contingent on the verbiage of the output. As indicated in Figure 1, the SP often closely tracks the exact output of the numeric response and simply replaces the numbers with the respective variables/symbolic expressions, and outputs an expression instead of a final number. While metrically evident in the provided plots, we see that this "mirroring" phenomenon occurs frequently with the use of SP, evident through the high density of similarity scores close to 1 in Figure 5. ## B.2 More On Alignment While we find that the use of the alignment prompt is effective in raising both the accuracy and alignment of a symbolic problem, we run a few supplementary experiments to investigate this behavior even further. When giving the model the alignment prompt (see Table 3) from the beginning, not simply when the numeric and symbolic outputs do not align, we actually find a decrease in accuracy from the self-prompting + alignment prompt run. CoT accuracy is 62% and vanilla accuracy is 60.9%. Similarly, alignment accuracies are 61.5% and 60.4% for CoT and vanilla, respectively. When evaluating alignment for the base self-prompting run, we find that the model aligns 83.9% when the numeric output is correct, and 29.7% when it is wrong. Such numbers possibly suggest the model's cognizance of whether or not the numeric evaluation was performed correctly; an implicit understanding of mathematical problem solving. ## B.3 Difficulty Of Problems We highlight a metric for the difficulty of a problem with respect to the primary operation performed in the answer to the prompt. The SVAMP dataset stores a "Type" key that denotes the primary elementary mathematical operation performed to get to the answer (primary in cases where there is more than one operation). We see that when graphing the accuracies of various evaluation methods while isolating the operation of the problem that the numeric and symbolic runs exhibit a somewhat complementary behavior. 
While numeric does on average better on problems with division, symbolic runs have higher accuracy on multiplication, see Figure 7. Table 2 has breakdowns of the exact accuracies per each tag. Interestingly, the self-prompting approach seems to do well on both multiplication and division, and its performance is close to the max of the numeric and symbolic performance for each category, thus hinting to a "best of both worlds" phenomenon. ## C Additional Details C.1 Prompt Formats In the SVAMP dataset, each problem contains a problem statement and a question. For both raw numeric and symbolic evaluations, we input the problem into the model with the CoT prompt if appropriate. For self-prompting, however, in order to increase alignment between the numeric and symbolic outputs, we add the entire transcript of the numeric evaluation (problem, answer prompting, symbolic problem). A detailed transcript of each of the different prompts and use cases can be found in Table 3. ## C.2 Filtering Since there is high variability in the LM's outputs, due to the necessity to reason when solving a MWP, we employ several filtering techniques in a filter() function that cleans up the extracted numeric or symbolic output. A few main steps in the filtering pipeline are as follows: - Character replacing | Accuracy (%) | | | | | | |----------------|----------|-------------|----------------|----------|------| | Evaluation | Addition | Subtraction | Multiplication | Division | | | Numeric | CoT | 64.7 | 64.3 | 68 | 88.1 | | Vanilla | 54.1 | 62.8 | 68 | 87.4 | | | Symbolic | CoT | 64.1 | 58.8 | 90 | 70.4 | | {w, x, y, z} | Vanilla | 41.2 | 63 | 90 | 62.2 | | Self-prompting | CoT | 67.6 | 66.1 | 94 | 85.2 | | {w, x, y, z} | Vanilla | 60 | 61.3 | 80 | 73.3 | Table 2: While the accuracies presented are fairly consistent within each separate evaluation run, we see that there are clear biases in which the model is able to perform certain types of problems better depending on the context of the run. Significantly, it should be noted that the self-prompting is able to employ both the efficiencies of numeric, and symbolic runs with the increased alignment. | Example | <Numeric Setup> = "Adam had 5 apples. He ate 2 of them for breakfast." <Numeric Question> = "How many apples will he have left if he eats 1 more?" <Symbolic Setup> = "Adam had w apples. He ate x of them for breakfast." <Symbolic Question> = "How many apples will he have left if he eats y more?" | |-------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Prompts | <CoT Prompt> = "Let's think step by step." <Numeric Extract Prompt> = "The final answer (only the number) is:" <Symbolic Extract Prompt> = "The final answer (only the expression in terms xxxxxxxxxxxxxxxxxxxxxxxxxxxx of given variables) is:" <Align Prompt> = "Copy the above numeric response word to word but xxxxxxxxxxxxxxxxx replace numbers with the right symbolic expression." 
| | Numeric | Q: <Numeric Setup> <Numeric Question> A: <CoT Prompt> <Numeric Response> // language model's verbose response <Numeric Question> <Numeric Extract Prompt> <Numeric Extracted> | | Symbolic | Q: <Symbolic Setup> <Symbolic Question> A: <CoT Prompt> <Symbolic Response> // language model's verbose response <Symbolic Question> <Symbolic Extract Prompt> <Symbolic Extracted> | | Self-prompt | Q: <Numeric Setup> <Numeric Question> A: <CoT Prompt> <Numeric Response> <Align Prompt> // [optional] only if alignment fails without it Q: <Symbolic Setup> <Symbolic Question> A: <CoT Prompt> <Symbolic Response> <Symbolic Question> <Symbolic Extract Prompt> <Symbolic Extracted> | Table 3: We present the prompting pipeline for various methods. Prompts in blue are the ones we pass to the model, while the text in green are the output of the language model. In each of these methods, we include a final filtering step on top of the extracted answers. - Dollar signs - Percentages - Cleaning up the output by removing all words besides the expression and/or final number - Addressing cases of outputs such as code or LATEX - Isolating the outputted/final expression if the answer is given in terms of an equation (say "z = w + x") The detailed (pseudo) code of the function can be found at the end. ![8_image_0.png](8_image_0.png) ![9_image_0.png](9_image_0.png) ![9_image_1.png](9_image_1.png) ![9_image_2.png](9_image_2.png) ![10_image_0.png](10_image_0.png) ![10_image_1.png](10_image_1.png) ![11_image_0.png](11_image_0.png) ## Def Filter_Symbolic ( Response ): response = response . lower () response = response . strip ('\n') print ( f" Original Output : { response }") \# De - latexifying response = LatexNodes2Text () . latex_to_text ( response ) response = response . replace ("$", "") \# Using * as multiplication operator response = response . replace ('.', '*') \# Handling the division symbol response = response . replace ("%", "") response = response . replace ('\ u00F7 ', '/') \# Remove spaces and construct a boolean array denoting whether \# the character is in the set {'w ', 'x ', 'y ', 'z ', '/', '*', '+', '-', '(', ')'} math_sym_set = set(['w', 'x', 'y', 'z', '/', '*', '+', '-', '(', ')'] +\ [str ( a ) for a in **range** ( 10 )]) \# Check for " words " that only contain chars from math_sym_set response = response . replace ("=", " = ") words = response . lower () . split () is_math_sym = np . array ([np . all ([c in math_sym_set for c in word ])* len ( word ) for word in words ]) \# Pick the substring with non - zero entries that has the largest sum , \# i.e. the largest substring of the original string that is an equation / expression idx , len_ = longest_sum ( is_math_sym ) response = ''. join ( words [idx : idx + len_ ]) print ( response ) \# Add multiplication operator * if needed . \# Logic : If neither of two consecutive characters is an operator \# then likely a multiplication operator needs to be added between them . \# Some edges cases like '(p' or 'q)' are handled op_set = set (['/', '*', '+', '-']) digit_set = set ([str ( a ) for a in **range** ( 10 )]) new_response = [] for i in **range** ( len( response ) ): new_response . append ( response [i]) \# Check if '*' needs to be added if i < len ( response )-1 and response [i] not in op_set and response [i+1] not in op_set : \# No need to add '*' if the consecutive chars of the type '(p' or 'q)' of '25 ' if ( response [i] != '(' and response [i+1] != ')') and ( response [i] not in digit_set or response [i+1] not in digit_set ): new_response . 
append ('*') print ( f" Final Output : { new_response }") return ''. join ( new_response ) return output def filter_numeric ( response ): output = str ( response ) . replace (",", "") output = output . replace ("$", "") output = output . strip ('\n') try : output = int( re . findall ('\d+', output )[0]) except : output = output return output ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 4 A2. Did you discuss any potential risks of your work? Not applicable. The paper mostly deals with fundamental understanding of LLMs, which can help mitigate potential risks of LLMs ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2.1 ✓ B1. Did you cite the creators of artifacts you used? Section 2.1 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section A ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section A ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We directly adapted an existing dataset and replaced numbers with variables ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? The dataset is a derivate of another dataset, and thus imports all of its properties ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section A ## C ✓ **Did You Run Computational Experiments?** Left Blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Not applicable. Left blank. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section B.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. 
Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
george-surdeanu-2023-sexually
It's not Sexually Suggestive; It's Educative | Separating Sex Education from Suggestive Content on TikTok videos
https://aclanthology.org/2023.findings-acl.365
We introduce SexTok, a multi-modal dataset composed of TikTok videos labeled as sexually suggestive (from the annotator's point of view), sex-educational content, or neither. Such a dataset is necessary to address the challenge of distinguishing between sexually suggestive content and virtual sex education videos on TikTok. Children's exposure to sexually suggestive videos has been shown to have adversarial effects on their development (Collins et al. 2017). Meanwhile, virtual sex education, especially on subjects that are more relevant to the LGBTQIA+ community, is very valuable (Mitchell et al. 2014). The platform's current system removes/punishes some of both types of videos, even though they serve different purposes. Our dataset contains video URLs, and it is also audio transcribed. To validate its importance, we explore two transformer-based models for classifying the videos. Our preliminary results suggest that the task of distinguishing between these types of videos is learnable but challenging. These experiments suggest that this dataset is meaningful and invites further study on the subject.
# It's Not Sexually Suggestive; It's Educative | Separating Sex Education From Suggestive Content On TikTok Videos

Enfa George Mihai Surdeanu ![0_image_0.png](0_image_0.png) University Of Arizona {enfageorge,msurdeanu}@arizona.edu

## Abstract

We introduce SexTok, a multi-modal dataset composed of TikTok videos labeled as sexually suggestive (from the annotator's point of view), sex-educational content, or neither. Such a dataset is necessary to address the challenge of distinguishing between sexually suggestive content and virtual sex education videos on TikTok. Children's exposure to sexually suggestive videos has been shown to have adversarial effects on their development (Collins et al., 2017). Meanwhile, virtual sex education, especially on subjects that are more relevant to the LGBTQIA+ community, is very valuable (Mitchell et al., 2014). The platform's current system removes/punishes some of both types of videos, even though they serve different purposes. Our dataset contains video URLs, and it is also audio transcribed. To validate its importance, we explore two transformer-based models for classifying the videos. Our preliminary results suggest that the task of distinguishing between these types of videos is learnable but challenging. These experiments suggest that this dataset is meaningful and invites further study on the subject.

## 1 Introduction

In short-form videos such as those on TikTok, accurately identifying sexually suggestive and sex education content amidst a sea of diverse video types poses a significant challenge. In this paper, we delve into this problem, focusing specifically on TikTok, the most downloaded app in 2022, which has a substantial user base of early adolescents and young individuals (10-19: 32.5%, 20-29: 29.5%).1

The distinction between suggestive videos and virtual sex education holds crucial significance on multiple fronts. Adolescent sex education in the United States is delivered in a fragmented and often inadequate system, which has long been the subject of intense criticism and is vulnerable to political influence (Fowler et al., 2021). In this context, TikTok presents a novel and promising avenue for conveying comprehensive and accessible sexual health information to adolescents, offering a convenient, private, and inclusive space for learning and discussion (Fowler et al., 2022). At the same time, children's exposure to sexual media content has been found to influence attitudes and contribute to the formation of adversarial sexual beliefs (Collins et al., 2017).

1https://wallaroomedia.com/blog/social-media/tiktok-statistics/

Figure 1: Two screenshots from videos in the dataset. On the left, Nyko (@kingnyko2022) addresses a question about his gender transition. The right is from a sexually suggestive video.

(1) **Educative** *(Description)*: Video featuring a man discussing a topic while a prominent illustration of a p*n*s with pearly penile papules serves as the background. (2) **Suggestive** *(Description)*: Video shows a man holding a pumpkin over his torso while a woman enthusiastically moves her hand inside, exclaiming, "There is so much in there." (3) **Educative** *(Transcript)*: The average banana in the United States is about 5.5 inches long. That's the perfect size for baking banana bread most of the time because ... (4) **Suggestive** *(Transcript)*: You are such a good boy. Daddy's so proud of you.

Table 1: Examples from the dataset, the first two are descriptions, and the latter are video transcripts.
Unfortunately, efforts to moderate explicit content have had unintended consequences, as studies have demonstrated the misidentification of non-explicit content due to flawed algorithms and filtering techniques (Peters, 2020). In addition to the above issue, videos/video creators (referred to as creators from now on) may also be susceptible to mass reporting. Creators from marginalized communities, particularly those within the LGBTQIA+ community, face heightened risks of having their educational content wrongfully flagged or removed2.

The classification of sexually suggestive and sex education videos presents a complex task, as demonstrated by the examples shown in Table 1. In example 1, we see that a p*n*s illustration is not suggestive, while the video with a man holding a pumpkin in example 2 is suggestive. When we look at the transcripts, we see that in example 3, the creator is talking about myths around p*n*s sizes for pleasurable sex, and in example 4, the audio is suggestive. Considering these complexities, accurately categorizing sexually suggestive and sex education videos necessitates a nuanced understanding of contextual cues, subjectivity, evolving language, and robust algorithmic solutions.

The contributions of the paper are as follows:

1. **Introduction of SexTok:** A collection of 1000 TikTok videos labeled as Sexually Suggestive, Sex Education, or Others, along with perceived gender expression and transcription.
2. **Baselines Evaluation:** We evaluate two transformer-based classifiers as baselines for the task of classifying these videos. Our results indicate that accurately distinguishing between these video types is a learnable yet challenging task.

## Trigger Warning: Sexual Content And Explicit Language

Please be advised that this research paper and its associated content discuss and analyze sexually suggestive and sex education videos. The examples and discussions within this paper may contain explicit or implicit references to sexual acts, body parts, and related topics. The language used may sometimes be explicit. This material is intended for academic and research purposes and is presented to address challenges in content identification and classification.

2https://mashable.com/article/tiktok-sex-educationcontent-removal

## 2 Related Work

Automatic detection of sexually explicit videos is an area of active study. In a recent survey, Cifuentes et al. (2022) classified the methods into four broad strategies: nudity detection, analysis of image descriptors (such as Bag of Visual Words), motion analysis, and other deep learning techniques. Most works around nudity detection focus on skin-colored region segmentation to identify nudity. This methodology has been extensively explored in the image domain (Fleck et al., 1996), (Wang et al., 2005), (Platzer et al., 2014), (Garcia et al., 2018), (Lee et al., 2006). (Ganguly et al., 2017)'s work, apart from focusing on the percentage of skin exposure, also gave attention to the body posture of the human in the image and the person's gestures and facial expressions. An alternative strategy is the Bag of Visual Words model, in which the idea is to minimize the existing semantic gap between the low-level visual features and the high-level concepts about pornography (Deselaers et al., 2008), (Lopes et al., 2009), (Ulges and Stahl, 2011), (Zhang et al., 2013). Approaches based on motion analysis capture motion in addition to other features, for example by exploiting periodicity in motion (Rea et al., 2006).
(Zuo et al., 2008) uses a Gaussian mixture model (GMM) to recognize porno-sounds, a contour-based image recognition algorithm to detect pornographic imagery, and are combined for the final decision. Yet still, sexual activity where the human is mostly clothed or has minimal movement is still challenging. Peters, 2020 studied issues surrounding publicly deployed moderation techniques and called for reconsidering how platforms approach this area, especially due to it's high false positive rates and/or low precision rates for certain types of actions. ## 3 Sextok Dataset This section presents the SexTok dataset 3, a collection of 1000 TikTok video links accompanied by three key features: Class Label, Gender Expression, and Audio Transcriptions. 3Data and the experiment codebase will be shared at github.com/enfageorge/SexTok. Videos are shared as links to avoid any potential licensing issues. ## 3.1 Terminology And Definitions 3.1.1 Class Label The first feature, Class Label, is a categorical variable with three possible values: *Sexually Suggestive, Sex Education*, and *Others*: Sexually Suggestive: This category encompasses videos that purposefully intend to elicit a sexual response from viewers. Determining the presence of sexually suggestive content is subjective. Sex Education: This category encompasses videos aimed at enhancing viewers' knowledge, skills, and attitudes concerning sexual and reproductive health. It covers various topics, including but not limited to sexual orientation, gender, and gender-affirming care. Others: This category encompasses videos that do not fall within the aforementioned sexually suggestive or sex education categories. ## 3.1.2 Gender Expression Gender expression is a form of self-expression that refers to how people may express their gender identity (Summers, 2016). In this paper, we focus solely on the physical visual cues associated with gender expression. We provide five gender expression labels in the dataset: *Feminine, Masculine, Nonconforming, Diverse, and None*. Feminine and Masculine represent predominantly feminine or masculine expressions, while Non-conforming refers to expressions that deviate from traditional norms. Diverse applies to videos with varying gender expressions among multiple individuals. The None label is for videos without people or only limited visual cues like hands. The information for the vast majority is not selfreported. When available through the video itself, profile descriptions, or hashtags, we incorporate that information. Otherwise, the annotation is based on the perception of the annotator. This feature is provided only to serve the purpose of evaluating bias in models built on the dataset. ## 3.2 Dataset Construction Data Collection The data collection process involved the primary annotator creating a new TikTok account and interacting with the platform in various ways to collect the video links. They carefully watched and hand-selected videos. Two important considerations were taken into account during the dataset | Label | Train | Val | Test | Total | |-----------|---------|-------|--------|---------| | Sugg | 140 | 20 | 40 | 200 | | Educative | 140 | 20 | 40 | 200 | | Others | 420 | 60 | 120 | 600 | | Total | 700 | 100 | 200 | 1000 | construction process: (a) Limit a maximum of five videos per creator in the dataset. (b) Creators appearing in one split of the dataset (train, validation, or test) were excluded from all other splits to ensure independence and prevent data leakage. 
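The paper does not describe how the splits themselves were produced (the videos were hand-curated), but constraint (b) can be enforced or at least verified automatically. A minimal sketch follows, assuming a hypothetical table with a creator identifier column; the column names and split ratios are assumptions on our part.

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical columns: "url", "label", "creator_id" (not the released field names).
df = pd.read_csv("sextok.csv")

# Hold out ~20% of videos for test, grouped by creator, then split the rest
# into train/validation so that no creator appears in more than one split.
outer = GroupShuffleSplit(n_splits=1, test_size=0.20, random_state=0)
trainval_idx, test_idx = next(outer.split(df, groups=df["creator_id"]))
trainval, test = df.iloc[trainval_idx], df.iloc[test_idx]

inner = GroupShuffleSplit(n_splits=1, test_size=0.125, random_state=0)
train_idx, val_idx = next(inner.split(trainval, groups=trainval["creator_id"]))
train, val = trainval.iloc[train_idx], trainval.iloc[val_idx]

# Sanity checks mirroring constraint (b): creator sets must be disjoint.
assert not set(train["creator_id"]) & set(val["creator_id"])
assert not set(train["creator_id"]) & set(test["creator_id"])
assert not set(val["creator_id"]) & set(test["creator_id"])
```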
Detailed information regarding the specific methods used, as well as limitations and ethical considerations, can be found in Appendix A. ## Annotator Agreement A 10% sample of the dataset was independently annotated by a second author to ensure reliability. Cohen's Kappa scores (Cohen, 1960) were used to assess annotator agreement. For Gender Expression, the Kappa score was 0.89, indicating substantial agreement. For Class Label, the Kappa score was 0.93, indicating high agreement. These scores validate the consistency and quality of the dataset's annotations. ## Data Processing: Video Download And Audio Transcription The videos were downloaded without the TikTok watermark using a TikTok downloader.4. The watermark was removed to reduce unnecessary noise in the data. A smaller sample of videos was first transcribed using OpenAI's whisper (medium) (Radford et al., 2022) and was manually checked for accuracy. The transcriptions were mostly perfect, with a word error rate of 1.79%. After this, all the videos were automatically transcribed using Open AI's Whisper (medium). ## 3.3 Dataset Properties In this section, we provide some general statistics about the SexTok dataset. The dataset comprises 1000 TikTok video links with three features: Class 4https://github.com/anga83/tiktok-downloader | Label | Fem | Masc | NC | D | None | |---------|-------|--------|------|-----|--------| | Sugg | 115 | 84 | 0 | 1 | 0 | | Edu | 85 | 84 | 6 | 8 | 17 | | Others | 164 | 170 | 12 | 113 | 141 | | Total | 364 | 338 | 18 | 122 | 158 | Label, Gender Expression, and Audio Transcriptions. A breakdown by label and dataset split is given in Table 1. A separate breakdown by Gender Expression and dataset split is given in Table 2. When the audio was transcribed, a percentage of videos were found not to have any text in the audio transcription, specifically → Suggestive - 15.85%, Educative - 3.97%, Others - 8.4%. We also observe that suggestive videos tend to be shorter (median duration: 7.86 secs), and have shorter audio transcriptions (median number of words: 14 words), compared to educative videos that are longer (median duration: 50.80 secs) and have longer audio transcriptions (median number of words: 171.5 words). Detailed dataset video length and transcription length are given in Appendix A.) ## 4 Experimental Setups In this section, we evaluate the performance of pretrained transformer-based models on the SexTok dataset to assess its significance. The experiments are divided into two subsections: text classification using video transcripts and video classification. For both transformer-based setups, we utilized models downloaded from Hugging Face Transformers (Wolf et al., 2020), initializing them with three random numbers. Details on hyperparameters are in Appendix C. The reported results are the average of three runs. To assess the performance, we employed four sets of metrics: (1) accuracy, (2) micro precision, recall, and F1 (excluding Others as a negative class from the scores), (3) macro precision, recall, and F1, and (4) overall F1 for each class. ## Text Classification Using Video Transcript We fine-tuned bert-base-multilingual-cased (Devlin et al., 2018) to perform text classification on the video transcripts. Since we observed that a small percentage of videos do not yield any text in their transcription, we experimented with two setups. One with all video transcriptions and the other with non-empty transcriptions. 
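The fine-tuning recipe for this text classifier is standard; a minimal sketch is given below (an illustrative reconstruction, not the released training code), using the hyperparameters reported in Appendix C, Table 8. The placeholder data, the label mapping, and the number of epochs are our own assumptions.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

label2id = {"Suggestive": 0, "Educative": 1, "Others": 2}  # hypothetical label order

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=3)

# Placeholders standing in for the real transcript splits.
train_ds = Dataset.from_dict({"text": ["The average banana is about 5.5 inches long ..."],
                              "label": [label2id["Educative"]]})
val_ds = Dataset.from_dict({"text": ["You are such a good boy."],
                            "label": [label2id["Suggestive"]]})

def encode(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_ds, val_ds = train_ds.map(encode, batched=True), val_ds.map(encode, batched=True)

# Hyperparameters from Appendix C, Table 8; the number of epochs is not reported.
args = TrainingArguments(output_dir="sextok-text", learning_rate=1e-5,
                         per_device_train_batch_size=16, weight_decay=0.01,
                         warmup_ratio=0.1, num_train_epochs=3)

Trainer(model=model, args=args, train_dataset=train_ds,
        eval_dataset=val_ds, tokenizer=tokenizer).train()
```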
## Video Classification We fine-tuned MCG-NJU/videomae-base, a VideoMAE base model (Tong et al., 2022) for video classification. The image clips were randomly sampled and preprocessed to align with the default configurations of the model. ## 5 Results And Error Analysis The average performance and standard deviation of the models are presented in Tables 4 and 5. Based on these results, we draw the following observations: - The most accurate model is the text classifier that evaluated videos with a transcription (75%). It demonstrates relatively better performance in identifying educative content but often struggles to differentiate between suggestive content and others, and vice versa. However, it should be noted that this implementation is not realistic in a real-world scenario, as TikTok videos can vary in terms of sound presence and spoken language. - Both text-based classifiers exhibit higher F1 scores than the video classifier for the Educative and Others classes. But their performance in detecting suggestive content is is comparatively lower than that of the video classifier. - Notably, neither of the text-based classifiers misclassifies suggestive content as educative, or vice versa, as evident from the confusion matrices in Appendix C. - The video classifier achieves the highest F1 score for the Suggestive class. However, it frequently confuses Educative and Other videos with each other. To further understand the hard examples for the model, we manually categorized the errors in both text and video classification experiment setups. We analysed 54 errors in text classification model. If more than one option was applicable, the video was counted in both: (a) *Audio unrelated* to class label (50.00%): The audio in these videos | Group | Acc | Micro | Macro | | | | | |----------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------| | P | R | F1 | P | R | F1 | | | | Majority | 0.60 | 0.00 | 0.00 | 0.00 | 0.20 | 0.33 | 0.25 | | All Text | 0.68 ± 0.06 | 0.76 ± 0.06 | 0.50 ± 0.06 | 0.60 ± 0.04 | 0.71 ± 0.06 | 0.63 ± 0.03 | 0.64 ± 0.04 | | Non-empty Text | 0.75 ± 0.02 | 0.78 ± 0.07 | 0.54 ± 0.02 | 0.64 ± 0.02 | 0.74 ± 0.04 | 0.65 ± 0.01 | 0.68 ± 0.00 | | Video | 0.70 ± 0.04 | 0.61 ± 0.11 | 0.51 ± 0.07 | 0.55 ± 0.05 | 0.68 ± 0.06 | 0.57± 0.07 | 0.61 ± 0.01 | Group Suggestive Educative Others Majority 0.00 0.00 0.60 All Text 0.30 ± 0.14 0.83 ± 0.01 0.80 ± 0.02 Non-empty Text 0.38 ± 0.03 **0.84** ± 0.01 **0.81** ± 0.02 Video **0.55** ± 0.02 0.63 ± 0.13 0.72 ± 0.15 consisted of popular songs or speeches that did not contain any words typically associated with the class label. (b) Context clues and Euphemism (25.07%) : These videos relied on context clues or employed euphemistic language (9.26%) or required audio analysis considering the tone and intonation to predict the class label (14.81%). (c) No or partial transcription (14.81%): Approximately 9.26% of the videos had no audio that could be transcribed, while 5.56% had only partial transcriptions available. We analyzed 52 errors in video classification. All educative videos that were classified as others, and vice versa, had the same format that both classes do, i.e., a person looking at the camera speaking. Of the 11 suggestive videos that were not classified correctly, in 63% of videos, some or all of the video frames had fully or mostly clothed people featured in the video. 
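For reference, the video classification setup described in Section 4 could be sketched as follows. This is an illustrative, untrained forward pass only; the frame-sampling strategy, the fine-tuning loop, and the label order are assumptions on our part.

```python
import numpy as np
import torch
from transformers import VideoMAEForVideoClassification, VideoMAEImageProcessor

processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-base")
model = VideoMAEForVideoClassification.from_pretrained(
    "MCG-NJU/videomae-base", num_labels=3)  # 3 classes; the classification head is untrained here

# Stand-in for 16 RGB frames randomly sampled from one clip
# (real frame decoding/sampling is omitted).
frames = [np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8) for _ in range(16)]

inputs = processor(frames, return_tensors="pt")  # pixel_values: (1, 16, 3, 224, 224)
with torch.no_grad():
    logits = model(**inputs).logits              # shape (1, 3)
pred = logits.argmax(-1).item()                  # index into a hypothetical label order
```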
A detailed analysis using Transformers-interpret C (Pierse, 2021) also shows that the text classification shows some signs of overfitting to text. ## 6 Discussion The results highlight the complexity of accurately identifying sexually suggestive and educative videos on platforms like TikTok. While the results indicate that text analysis can contribute to detecting educative videos, music clips unrelated to the video topic are commonly used, making reliance on transcription alone insufficient. While existing work in pornographic content detection primarily focuses on visual analysis, our results indicate the need for a multi-modal approach since detecting sexual content requires a more comprehensive understanding encompassing multiple senses, including audio, speech, and text. Addressing these challenges is crucial for developing effective content moderation systems, ensuring appropriate access to sex education, and creating a safer and more inclusive online environment. It is also crucial to be mindful of potential gender expression bias commonly found in visual datasets (Meister et al., 2022). Moreover, for tasks like this, developing scalable solutions suitable for large-scale systems with millions of users is crucial for effective implementation. Further exploration and investigation of these aspects are left for future research and development. ## 7 Conclusion This paper introduces a novel task of identifying sexually suggestive and sex-educative videos and presents SexTok, a multi-modal dataset for this purpose. The dataset includes video links labeled for sexual suggestiveness, sex-educational content, and an other category, along with gender expression and audio transcription. The results highlight the challenging and multi-modal nature of the task and suggest that while the dataset is meaningful and the task is learnable, it remains a challenging problem that deserves future research. This work contributes to promoting online safety and a balanced digital environment. ## 8 Acknowledgement This work was partially funded by the LGBTQ+ Grad Student Research Funds by The Institute for LGBTQ Studies at the University Of Arizona. We deeply appreciate the invaluable contributions of Shreya Nupur Shakya throughout this work. ## Limitations We address the limitations of the SexTok dataset and the accompanying experiments here. ## Sextok Dataset - The TikTok account was created and used from a specific geographic location (which will be disclosed in the final version if accepted). This is important to note since the content recommendation of TikTok is influenced by geographic location,5among other things; hence a geographic bias may be expected, i.e., certain demographics may be more represented than others, especially in terms of languages used, race, ethnicity, etc. - The data gathered only represents a small sample of the content available on TikTok and may not represent the entire population of TikTok users or videos. - Sexual suggestiveness is treated as a discrete class label in the project, whereas in the real world, it has two important properties. 1) The perception of what is sexually suggestive may vary depending on the individual's sexual orientation, worldview, culture, location, and experiences and is highly subjective. 2) Some are more suggestive than others, and we do not account for the variation in the strength of suggestiveness here. - The dataset is a small snapshot of the TikTok videos from October 2022 to January 2023. 
Patterns, slang, and other cues may change over time. - Gender expression has many variations but is referred to as discrete labels here, but in real life, it is not. Additionally, this is as perceived by one annotator and, for the majority, not self-reported by the person in the video. Additional expert annotators may be needed to strengthen the confidence in the label. - Despite best efforts, it may be possible that the same creator appears more than five times. This is because creators often create multiple accounts to serve as a backup in case TikTok takes down the original account. This is Figure 2: This is a partial screenshot from an audio ![5_image_0.png](5_image_0.png) profile page on Tiktok. Each rectangle is a cover image of a video that uses the same audio. The text on the bottom left of each video is the username of the creator of that video. We can see that the same person has multiple accounts posting the same video. observed to be increasingly common in the sexually-suggestive and sex-ed domains. We show an example in Figure 2 ## Other Details : The audio content of the TikTok videos comprises various elements, including background music, spoken dialogue (not necessarily from the video creator), or a combination of both. Notably, TikTok provides voice effects that enable users to modify their voices using predefined options. ## Experiments - The audio transcription of the videos was created automatically using Open AI's Whispermedium (Radford et al., 2022). Hence this is subject to errors, which may impact the performance of the models. - For training the models, GPU computing power was used. ## Ethics Statement We address the ethical considerations and consequences of the SexTok dataset and the accompanying experiments here. - The study's focus is on the technical aspects of the problem. It does not address the broader societal and ethical implications of censorship and of regulating sexually suggestive content on social media platforms. The work only aims to detect sexually suggestive content and sex education content against other video topics but makes no stand on censorship or content regulation of sexually suggestive videos. - Sexual suggestiveness, as well as perceived gender expression, is a subjective matter and is hence susceptible to annotators' bias. - Gender expression, specifically visual cues only, was annotated and offered only to evaluate bias based on visual cues since such biases are known to exist within large-scale visual datasets (Meister et al., 2022). The authors do not condone the practice of assigning gender identity based on a person's external appearance since gender is an internal sense of identity (Association, 2015). This dataset is not intended to be used for any such practices. - Due to the nature of the problem, and potential licensing issues, the publicly-collected data is not anonymized. ## References American Psychological Association. 2015. Guidelines for psychological practice with transgender and gender nonconforming people. *American psychologist*, 70(9):832–864. Jenny Cifuentes, Ana Lucila Sandoval Orozco, and Luis Javier García Villalba. 2022. A survey of artificial intelligence strategies for automatic detection of sexually explicit videos. Multimedia Tools and Applications, 81(3):3205–3222. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. *Educational and psychological measurement*, 20(1):37–46. Rebecca L Collins, Victor C Strasburger, Jane D Brown, Edward Donnerstein, Amanda Lenhart, and L Monique Ward. 2017. 
Sexual media and childhood well-being and health. *Pediatrics*, 140(Supplement_2):S162–S166. Thomas Deselaers, Lexi Pimenidis, and Hermann Ney. 2008. Bag-of-visual-words models for adult image classification and filtering. In *2008 19th International Conference on Pattern Recognition*, pages 1–4. IEEE. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. *CoRR*, abs/1810.04805. Margaret M Fleck, David A Forsyth, and Chris Bregler. 1996. Finding naked people. In *Computer Vision—ECCV'96: 4th European Conference on Computer Vision Cambridge, UK, April 15–18, 1996 Proceedings Volume II 4*, pages 593–602. Springer. Leah R Fowler, Lauren Schoen, and Stephanie R Morain. 2021. Let's tok about sex. *Journal of Adolescent* Health, 69(5):687–688. Leah R Fowler, Lauren Schön, Hadley Stevens Smith, and Stephanie R Morain. 2022. Sex education on tiktok: a content analysis of themes. *Health promotion* practice, 23(5):739–742. Debashis Ganguly, Mohammad H Mofrad, and Adriana Kovashka. 2017. Detecting sexually provocative images. In *2017 IEEE Winter Conference on Applications of Computer Vision (WACV)*, pages 660–668. IEEE. Manuel B Garcia, Teodoro F Revano, Beau Gray M Habal, Jennifer O Contreras, and John Benedic R Enriquez. 2018. A pornographic image and video filtering application using optimized nudity recognition and detection algorithm. In *2018 IEEE 10th* International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM), pages 1–5. IEEE. Hogyun Lee, Seungmin Lee, and Taekyong Nam. 2006. Implementation of high performance objectionable video classification system. In *2006 8th International* Conference Advanced Communication Technology, volume 2, pages 4–pp. IEEE. Ana PB Lopes, Sandra EF de Avila, Anderson NA Peixoto, Rodrigo S Oliveira, and Arnaldo de A Araújo. 2009. A bag-of-features approach based on hue-sift descriptor for nude detection. In 2009 17th European Signal Processing Conference, pages 1552–1556. IEEE. Nicole Meister, Dora Zhao, Angelina Wang, Vikram V Ramaswamy, Ruth Fong, and Olga Russakovsky. 2022. Gender artifacts in visual datasets. arXiv preprint arXiv:2206.09191. Kimberly J Mitchell, Michele L Ybarra, Josephine D Korchmaros, and Joseph G Kosciw. 2014. Accessing sexual health information online: use, motivations and consequences for youth with different sexual orientations. *Health education research*, 29(1):147– 157. Jonathan Peters. 2020. Sexual content and social media moderation. *Washburn LJ*, 59:469. Charles Pierse. 2021. Transformers Interpret. Christian Platzer, Martin Stuetz, and Martina Lindorfer. 2014. Skin sheriff: a machine learning solution for detecting explicit images. In *Proceedings of the 2nd* international workshop on Security and forensics in communication systems, pages 45–56. Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2022. Robust speech recognition via large-scale weak supervision. *arXiv preprint arXiv:2212.04356*. N. Rea, G. Lacey, R. Dahyotit, and R. Dahyot. 2006. Multimodal periodicity analysis for illicit content detection in videos. In The 3rd European Conference on Visual Media Production (CVMP 2006) - Part of the 2nd Multimedia Conference 2006, pages 106– 114. Randal W Summers. 2016. Social Psychology: How Other People Influence Our Thoughts and Actions [2 volumes]. ABC-CLIO. Zhan Tong, Yibing Song, Jue Wang, and Limin Wang. 
2022. Videomae: Masked autoencoders are data-efficient learners for self-supervised video pretraining. *arXiv preprint arXiv:2203.12602*. Adrian Ulges and Armin Stahl. 2011. Automatic detection of child pornography using color visual words. In *2011 IEEE international conference on multimedia and expo*, pages 1–6. IEEE. Donghui Wang, Miaoliang Zhu, Xin Yuan, and Hui Qian. 2005. Identification and annotation of erotic film based on content analysis. In Electronic Imaging and Multimedia Technology IV, volume 5637, pages 88–94. SPIE. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Jing Zhang, Lei Sui, Li Zhuo, Zhenwei Li, and Yuncong Yang. 2013. An approach of bag-of-words based on visual attention model for pornographic images recognition in compressed domain. *Neurocomputing*, 110:145–152. Haiqiang Zuo, Ou Wu, Weiming Hu, and Bo Xu. 2008. Recognition of blue movies by fusion of audio and video. In 2008 IEEE International Conference on Multimedia and Expo, pages 37–40. IEEE. ## A Details Of Methods Used To Collect Videos For sexually suggestive and sex education videos, the annotator interacted with the platform to collect the data in many ways, including search (hashtags, names of people), people reusing the same audio, stitches, duets, the public liked videos of certain profiles pages and the "For you" page. Any video that did not appear to belong to either sexually suggestive or sex education was collected and labeled as Others. A.1 Sexually Suggestive and Sex ed Videos Videos - **Search :** Hashtags ( including slang usages like \#spicyaccountant), Phrases, and Names of popular creators in a domain (discovered through blogs that talk on the subject). - **Audio Sharing:** TikTok offers multiple people to share and reuse the same audio. So, when a video is found to be, say, sexually suggestive, new creators were discovered by looking into who else used this audio for their video. - Stitches and Duets: A **Duet** allows one creator to post their video side-by-side with a video from another creator on TikTok. A duet contains two videos on a split screen that play at the same time.A **Stitch** is a creation tool on Tiktok that allows a creator to combine another video on TikTok with the one they are creating. Certain videos added in the dataset were discovered as stitches or duets with another creator. - **Public liked videos:** It is possible to see all videos a certain profile likes by visiting that tab on their profile. By default, this is private but can be set to public. Some profiles share videos of a topic by redirecting visitors to their liked videos. Many videos were found and added to the dataset through this method. - **"For you" Page:** It's a recommended feed of videos from creators the user might not follow. The annotator liked and saved videos of sexually suggestive nature, so some similar videos were recommended on the For you Page. ## A.2 Other Videos There are three main strategies for collecting these videos. 
- Videos that appeared on the TikTok home page when no user was logged in - Videos shared with \#learnontiktok hashtag - Videos that reused audio that was also used in a sexually suggestive video. Each makes up one-third of the total videos collected. ## B Detailed Stats For Transcript Length And Video Length | Parameter | Sugg | Edu | Others | Total | |-------------|--------|--------|----------|---------| | Mean | 16.46 | 231.18 | 82.18 | 98.83 | | Median | 14.00 | 171.50 | 31.00 | 33.00 | | Std | 14.33 | 220.81 | 126.37 | 156.08 | Table 6: Mean, Median, and Standard Deviation of words present in video transcripts. Words were tokenized using the NLTK package. Sugg stands for Suggestive, and Edu stands for educative. Suggestive videos tend to be significantly shorter than the other classes. Table 7: Mean, Median, and Standard Deviation of videos in the dataset in seconds. Sugg stands for Suggestive, and Edu stands for educative. Suggestive videos tend to be significantly shorter than the other classes. | Parameter | Sugg | Edu | Others | Total | |-------------|--------|-------|----------|---------| | Mean | 8.96 | 66.41 | 39.99 | 39.06 | | Median | 7.86 | 50.80 | 28.30 | 23.16 | | Std | 3.82 | 56.92 | 37.88 | 42.90 | ## C Hyperparameters Hyperparameters not mentioned below, are default values from Huggingface. Table 8: Hyperparameters used for the Text Classification Task Table 9: Hyperparameters used for the Video Classification Task | Parameter | Value | |-------------------------|---------| | Batch size | 16 | | Initial Learning Rate | 1e-5 | | Weight Decay | 0.01 | | Warmup Ratio | 0.1 | | Learning Rate Optimiser | AdamW | ## D Transformer Interpret | Parameter | Value | |-------------------------|---------| | Batch size | 8 | | Initial Learning Rate | 5e-5 | | Warmup Ratio | 0.1 | | Learning Rate Optimiser | AdamW | Refer to Figure 3 on the next page. Legend: - Negative □ Neutral - Positive The True Label Predicted Label Attribution Label Attribution Score ![9_image_0.png](9_image_0.png) ![9_image_1.png](9_image_1.png) Word Importance [CLS] thanks for watching ! [SEP] ] _ ![9_image_2.png](9_image_2.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Yes. We have. It's not a numbered section and comes right after the conclusion. ✓ A2. Did you discuss any potential risks of your work? Yes, these are discussed in Ethics on Page 5. Unnumbered section, but immediately follows Limitations ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 A4. Have you used AI writing assistants when working on this paper? Not applicable. Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 Introduces The Dataset. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 1, page one, footnotes ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 3 introduces and details the dataset. Ethics discussed ethical uses of the dataset. ✓ B4. 
Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We state that due to the nature of the task and licensing, it is not possible to anonymize people in the dataset. But the data collected are public information. The dataset contains sexually suggestive content, and this has been repeated throughout the paper. ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Such information is not available/not collected. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Tables 1 and 2 ## C ✓ **Did You Run Computational Experiments?** Section 4 ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? We note that we needed GPU for computation, but the number of hours was not recorded. The paper's focus is on the dataset itself, and details of baselines used were described in detail. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? We discuss the experimental setup, including model sources, in Section 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Results are reported in Tables 3 and 4 and are made clear how we reached them. ( average of three random runs are reported with the standard deviation. The codebase will be shared if the paper is accepted - for reproducibility and testing. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Sections 3 and 4. ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 3 D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? The data collected is public. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. ✗ D5. 
Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? This may de-anonymise the paper; it will be shared once the paper is accepted.
miyamoto-etal-2023-dynamic
Dynamic Structured Neural Topic Model with Self-Attention Mechanism
https://aclanthology.org/2023.findings-acl.366
This study presents a dynamic structured neural topic model, which can handle the time-series development of topics while capturing their dependencies. Our model captures the topic branching and merging processes by modeling topic dependencies based on a self-attention mechanism. Additionally, we introduce citation regularization, which induces attention weights to represent citation relations by modeling text and citations jointly. Our model outperforms a prior dynamic embedded topic model regarding perplexity and coherence, while maintaining sufficient diversity across topics. Furthermore, we confirm that our model can potentially predict emerging topics from academic literature.
## Dynamic Structured Neural Topic Model With Self-Attention Mechanism Nozomu Miyamoto1 Masaru Isonuma1 **Sho Takase**2 Junichiro Mori1,3**Ichiro Sakata**1 1 The University of Tokyo 2 Tokyo Institute of Technology 3 RIKEN Center for Advanced Intelligence Project {nmiyamoto, isonuma, isakata}@ipr-ctr.t.u-tokyo.ac.jp [email protected] [email protected] ## Abstract This study presents a *dynamic structured neural topic model*, which can handle the timeseries development of topics while capturing their dependencies. Our model captures the topic branching and merging processes by modeling topic dependencies based on a selfattention mechanism. Additionally, we introduce citation regularization, which induces attention weights to represent citation relations by modeling text and citations jointly. Our model outperforms a prior dynamic embedded topic model (Dieng et al., 2019) regarding perplexity and coherence, while maintaining sufficient diversity across topics. Furthermore, we confirm that our model can potentially predict emerging topics from academic literature. ## 1 Introduction Topic models are dominant tools for discovering the underlying semantic structure in a collection of documents. As a part of such topic models that can capture the chronological transition of topics have been intensively studied in recent years. The dynamic topic model (DTM; Blei and Lafferty, 2006) is a pioneering work that captures the time-series evolution of topics. It successfully visualizes the changes in the topic proportion and the word distributions of each topic over time. Recently, neural networks have empowered topic models to handle a significant collection of documents. The dynamic embedded topic model (**D-ETM**; Dieng et al., 2019) introduces word embeddings and amortized variational inference into DTM, which significantly improves topic quality while reducing computational time. D-ETM is widely applied to large-scale time-series documents, such as scientific papers and social media (Churchill and Singh, 2022; Murakami et al., 2021). However, DTM and D-ETM assume that topics evolve independently without interaction. This assumption is inappropriate, particularly for mod5916 ![0_image_0.png](0_image_0.png) eling scientific papers, where documents are dependent on each other through citation relations. For example, the recent *text-to-image* techniques are evolved from multiple topics such as image processing, natural language processing, and deep learning. Conventional dynamic topic models cannot capture how past topics contributed to the emergence of new topics. Furthermore, these models cannot predict emerging topics because the posterior word distribution of topics is parameterized for each time step, as explained later in detail. To overcome these challenges, we propose a dynamic structured neural topic model (**DSNTM**), which captures the dependencies among topics over time (Fig. 1). Specifically, DSNTM models topic dependencies based on a self-attention mechanism (Vaswani et al., 2017; Lin et al., 2017), which reveals how past topics branches or merges into new topics. We can quantitatively evaluate which past topics contributes to the emergence of new topic by observing attention weights. In addition, the selfattention mechanism shares the parameters used for inferring topics at each time step, enabling the model to predict emerging topics. Additionally, we introduce citation regularization, which induces attention weights to reflect the citation relations among documents. 
Citation regularization enables DSNTM to model text and citation jointly, improving the inferred topics' quality and accurately capturing their transitions. Due to the high expressive power of the self-attention mechanism and additional citation information, our model can capture the complex branching and merging processes of topics over time. In the experiment, we used datasets consisting of over 20,000 scientific papers on computer science and natural language processing retrieved from the Semantic Scholar Open Research Corpus (**S2ORC**; Lo et al., 2020). Experimental results show that DSNTM outperforms recent neural topic models (Dieng et al., 2019, 2020) regarding perplexity and coherence, while maintaining sufficient diversity across topics. We also confirmed that DSNTM accurately captures the topic branching and merging processes and can potentially predict emerging topics in the academic literature. ## 2 Related Work Extending the dynamic topic model (Blei and Lafferty, 2006), several variants have been proposed, such as the dependent Dirichlet processes mixture model (Lin et al., 2010), infinite dynamic topic model (Ahmed and Xing, 2010), and D-ETM (Dieng et al., 2019). However, these studies treat timeseries changes in topics independently and cannot capture dependencies among topics. Relating to structured topic models, several models have been proposed such as tree-structured topic model (Griffiths et al., 2003; Isonuma et al., 2020; Chen et al., 2021) and pachinko allocation models (Li and McCallum, 2006; Mimno et al., 2007). In addition, several studies have modeled the structure among documents by jointly modeling citation network and text (Nallapati et al., 2008; Tu et al., 2010; Chang and Blei, 2010; Lim and Buntine, 2015). However, these studies were not intended to track the time series transitions of topics. The dynamic and static topic model (**DSTM**; Hida et al., 2018) extends the pachinko allocation model to capture the dynamic structure over time and the static structure among topics at each time step. DSTM has several drawbacks against our model. It cannot capture topic dependencies across multiple time steps, and thus, incorporating citation information is challenging. In addition, it cannot predict emerging topics as topic transitions are parameterized for each time step. Moreover, collapsed Gibbs sampling is used to infer posteriors, which is not scalable for large datasets. The dynamic topic model on networked documents (**NetDTM**; Zhang and Lauw, 2022) models time-series documents and citation networks simultaneously. However, NetDTM does not capture the relations between topics nor predict emerging topics, which significantly differ from ours. Contrary to the aforementioned studies, our work introduces amortized variational inference using the self-attention mechanism. This inference technique enables us to capture the topic branching and merging process across multiple time steps with significant scalability. Furthermore, our model can predict emerging topics from past topics. ## 3 Background We first review the embedded topic model (ETM; Dieng et al., 2020), and then explain D-ETM, which combines ETM with DTM before introducing our DSNTM. ## 3.1 Embedded Topic Model (Etm) ETM (Dieng et al., 2020) is a topic model that introduces word embeddings into LDA. The generative process of documents is the following: 1. For each document index $d\in\{1,\ldots,D\}$: Draw topic proportion: $\mathbf{\theta}_{d}\sim\mathcal{LN}(\mathbf{0},\mathbf{I})$ (1) 2. 
For each word index $n\in\{1,\ldots,N_{d}\}$ in $d$: Draw topic assignment: $z_{d,n}\sim\text{Cat}(\mathbf{\theta}_{d})$ (2) Draw word: $w_{d,n}\sim\text{Cat}(\mathbf{\beta}_{z_{d,n}})$ (3) Here, Cat(·) and LN (·, ·) denote the categorical distribution and logistic-normal distribution (Atchison and Shen, 1980), respectively. βk ∈ R Vrepresents the word distribution of the k th topic computed as follows: $$\beta_{k}=\mathrm{softmax}(\rho^{\top}\alpha_{k}).$$ $$(4)$$ ⊤αk). (4) where ρ ∈ R L×V denotes the L-dimensional word embeddings of the entire vocabulary. The ρv ∈ R L corresponds to the v th word embedding. αk ∈ R L denotes the embedding representation of the k th topic in the semantic space of words, which is called topic embedding. ## 3.2 Dynamic Embedded Topic Model (D-Etm) D-ETM (Dieng et al., 2019) analyzes time-series documents by changing the topics over time. Contrary to ETM, D-ETM assumes a discrete-time Markov chain for the topic embedding in Eq. (5) and the topic proportion mean in Eq. (7). The generative process of documents is described as follows: 1. For each time step t ∈ {1*, . . . , T*}: Draw word distribution for each topic k: α (t) k ∼ N (α (t−1) k, σ2I) (5) β (t) k = softmax(ρ ⊤α (t) k ) (6) Draw topic proportion mean: ηt ∼ N (ηt−1, δ2I) (7) 2. For each document index d∈ {1, . . . , D}: Draw topic proportion: θd ∼LN (ηtd , γ2I) (8) 3. For each word index n∈ {1, . . . , Nd} in d: Draw topic assignment: zd,n ∼ Cat(θd) (9) Draw word: wd,n ∼ Cat(β (td) zd,n ) (10) where α (t) kand β (t) kare the topic embedding and word distribution assigned to the k th topic in the t th time step, respectively. σ, δ , and γ are model hyperparameters, which control the variance of normal distributions. Dieng et al. (2019) approximate the posterior distribution of α, η and θ with amortized variational inference (Kingma and Welling, 2014; Rezende et al., 2014). Particularly, for the topic embeddings α, the mean-field family is used for the approximation as follows: $$\begin{array}{l c r}{{q(\mathbf{\alpha}_{k}^{(t)})={\mathcal{N}}(\mathbf{\mu}_{k}^{(t)},\mathbf{\sigma}_{k}^{(t)})}}&{{}}&{{(11)}}\\ {{}}&{{q(\mathbf{\alpha})=\prod_{k}\prod_{t}q(\mathbf{\alpha}_{k}^{(t)})}}&{{}}&{{(12)}}\end{array}$$ where µ (t) k ∈ R L and σ (t) k ∈ R L are learnable vectors representing the mean and variance of α (t) k , respectively. However, this mean-field approximation has two limitations. Dependencies among topics cannot be modeled Eq. (12) assumes that topics are independent of each other. This assumption is typically inappropriate for time-series documents. For instance, academic topics sometimes emerge from interactions among several past topics. Topic dependencies must be modeled to consider such interactions. Emerging topics cannot be predicted D-ETM infers a topic by parameterizing µ (t) kand σ (t) kfor each time step t. As documents in the t th time step are used to infer these parameters, topics cannot be inferred for the time steps that are not contained in the dataset. The parameters must be shared across all time steps to predict emerging topics. To overcome these limitations, our DSNTM introduces the self-attention mechanism to infer those parameters. The self-attention mechanism enables DSNTM to capture the topic dependencies, while sharing the parameters across all time steps. $$\mathrm{{\boldmath~\cal~T\;e a c h\;t o p i c i c\;}}k\mathrm{{:}}$$ ## 4 Dynamic Structured Neural Topic Model (Dsntm) This section describes the proposed DSNTM. 
The generative process of documents is the same as that of D-ETM. ## 4.1 Inference Of Topic Embeddings Contrary to D-ETM, we use structured variational inference to infer the topic embeddings. We compute a topic embedding from all previous topic embeddings using the self-attention mechanism. $$\tilde{\mathbf{\alpha}}_{k}^{(t)}=\mathrm{self-attention}(\tilde{\mathbf{\alpha}}_{1:K}^{(1:t-1)})\tag{13}$$ $$q(\mathbf{\alpha}_{k}^{(t)}|\tilde{\mathbf{\alpha}}_{1:K}^{(1:t-1)})=\mathcal{N}(f_{\mu}(\tilde{\mathbf{\alpha}}_{k}^{(t)}),f_{\sigma}(\tilde{\mathbf{\alpha}}_{k}^{(t)}))\tag{14}$$ where α˜ (t) k ∈ R L denotes the transformed topic embedding. fµ and fσ are multi-layer perceptrons (MLP) that convert α˜ (t) kto a variational normal distribution. Computation of Self-attention We present an outline of the self-attention mechanism in Fig. 2. To calculate Eq. (13), we obtain the key K (1:t−1) 1:K ∈ R K(t−1)×L and *value* V (1:t−1) 1:K ∈ R K(t−1)×L from all previous transformed topic embeddings α˜ (1:t−1) 1:K . On the other hand, the *query* q (t−1) k ∈R L is obtained from the k th transformed topic embedding α˜ (t−1) kat time step t−1. $$\begin{array}{c}{{K_{1:K}^{(1:t-1)}=f_{k}(\tilde{\alpha}_{1:K}^{(1:t-1)})}}\\ {{V_{1:K}^{(1:t-1)}=f_{v}(\tilde{\alpha}_{1:K}^{(1:t-1)})}}\\ {{q_{k}^{(t-1)}=f_{q}(\tilde{\alpha}_{k}^{(t-1)})}}\end{array}$$ k) (17) where fk, fv and fq denote MLPs. Subsequently, we compute the attention weight a (t) k ∈ R K(t−1) between each past topic and the k th topic at time step t, similar to Vaswani et al. (2017). $$\mathbf{a}_{k}^{(t)}=\operatorname{softmax}({\frac{\mathbf{q}_{k}^{(t-1)}\mathbf{K}_{1:K}^{(1:t-1)\top}}{\sqrt{L}}})$$ $$(18)$$ ) (18) ![3_image_0.png](3_image_0.png) Then, we obtain α˜ (t) kby calculating the sum of V (1:t−1) 1:K weighted by the attention weights. We use a residual connection to obtain α˜ (t) kfor preventing gradient explosion and disappearance (Simonyan and Zisserman, 2014). $$\begin{array}{c}{{\Delta\tilde{\alpha}_{k}^{(t)}=a_{k}^{(t)}V_{1:K}^{(1:t-1)}}}\\ {{\tilde{\alpha}_{k}^{(t)}=\mathrm{LayerNorm}(\Delta\tilde{\alpha}_{k}^{(t)}+\tilde{\alpha}_{k}^{(t-1)})}}\end{array}$$ 1:K (19) k) (20) Here, we use the layer normalization (Ba et al., 2016) to compute the transformed topic embeddings. At the time step t = 1, we initialize the transformed topic embeddings α˜ (1) kfrom a normal distribution. $$\tilde{\alpha}_{k}^{(1)}\sim{\cal N}({\bf0},\sigma^{2}I)$$ k ∼ N (0, σ2I) (21) Note that the prior distribution assumes that α (t) k is drawn from a normal distribution as follows: α (t) k ∼ N (α (t−1) k, σ2I). This assumption regularizes α (t) kto be close to its previous topic α (t−1) k. Motivations behind Self-attention We use the self-attention mechanism for the following reasons: (1) Instead of the self-attention mechanism, we can also capture the dependencies among topics by simply parameterizing the topic embeddings as follows: $$\tilde{\alpha}_{1:K}^{(t)}=W\tilde{\alpha}_{1:K}^{(t-1)}$$ 1:K (22) Here, W ∈ R K×K is a learnable weight matrix, where wi,j represents the dependency between topic i and j. However, this parameterization cannot capture the dependencies across multiple time steps, and a method that allows an arbitrary number of inputs is required. The self-attention mechanism, which handles an arbitrary-length sequence, satisfies this requirement and captures the topic dependencies across multiple time steps. 
(2) The self-attention mechanism parameterizes MLP fq, fk, and fv to compute the embeddings of subsequent topics from previous topics. As the parameters of MLPs are shared over time, the selfattention mechanism allows DSNTM to predict the emerging topic embeddings. $$\begin{array}{l}{(19)}\\ {\quad(20)}\end{array}$$ ## 4.2 Overall Inference And Elbo Under our proposed probabilistic model, the likelihood of documents is given by $$(21)^{\frac{1}{2}}$$ p(w1:D|σ, δ, γ) = Z nY d Y n X zd,n p(wd,n|β (td) zd,n )p(zd,n|θd)p(θd|ηtd ) o nY t Y k p(ηt|ηt−1)p(α t k|α t−1 k) odθdηdα = Z nY d Y n (β (td)· θd)wd,n p(θd|ηtd ) o nY t Y k p(ηt|ηt−1)p(α t k|α t−1 k) odθdηdα (23) Subsequently, let q(θ, η, α) be the variational distribution of the posterior distribution p(θ, η, α|w1:D). Following D-ETM, q(θ, η, α) is computed as follows: $$(22)^{\frac{1}{2}}$$ q(θ, η, α) $$\begin{array}{l}\mbox{$\langle\theta,\eta,\alpha\rangle$}\\ \mbox{$=\prod_{d}q(\theta_{d}|\eta_{t_{d}},\mathbf{w}_{d})\times\prod_{t}q(\eta_{t}|\eta_{1:t-1},\tilde{\mathbf{w}}_{t})$}\\ \mbox{$\times\prod_{t}\prod_{k}q(\alpha_{k}^{(t)}|\tilde{\mathbf{\alpha}}_{1:K}^{(1:t-1)})$}\end{array}\tag{24}$$ $$\begin{array}{l}{{q(\theta_{d}|\eta_{t_{d}},w_{d})={\mathcal{L}}{\mathcal{N}}(f_{\mu}(\tilde{\theta}_{t_{d}}),f_{\sigma}(\tilde{\theta}_{t_{d}}))}}\\ {{\tilde{\theta}_{t_{d}}=f_{\theta}([\eta_{t_{d}};w_{d}])}}\\ {{q(\eta_{t}|\eta_{1:t-1},\tilde{w}_{t})={\mathcal{N}}(f_{\mu}(\tilde{\eta}_{t}),f_{\sigma}(\tilde{\eta}_{t}))}}\\ {{\tilde{\eta}_{t}=[h_{t};\eta_{t-1}]}}\end{array}$$ $$(25)$$ $$(26)$$ where wd is the bag-of-words (BoW) representation of document d. htis the hidden state of a longshort term memory network (**LSTM**; Hochreiter and Schmidhuber, 1997) that uses the normalized BoW representation w˜t of all documents at time t as input. [·; ·] denotes the concatenation of vectors. The evidence lower bound (**ELBO**) for the document log-likelihood is derived as follows: Ldoc = X d Eq(θd)q(ηt)q(α (t) k ) hw⊤ d log(β (td)· θd) i − X t X k DKLhq(α (t) k|α (1:t−1) 1:K )||p(α (t) k|α (t−1) k) i − X d DKLhq(θd|ηtd , wd)||p(θd|ηtd ) i − X t DKLhq(ηt|η1:t−1, w˜t)||p(ηt|ηt−1) i(27) ## 5 Citation Regularization DSNTM has difficulty interpreting the attention weights among topics. Ideally, the attention should model the dependency among topics, representing citation relations between documents. Therefore, we let the attention weights to be interpretable by regularizing them to correspond with citation relations. Citation regularization also improves the quality of the inferred topics by jointly modeling the text and citations. To regularize the attention weights, we model the citations between documents based on the topic proportion θ, the attention weights a, and the paper proportion ϕ as shown in Fig. 3. Formally, for each document pair (i, j) ∈ {1, . . . , D} × {1*, . . . , D*}, the citation is modeled as follows: 1. Draw citing topic assignment: $z_{i}\sim\text{Cat}(\mathbf{\theta}_{i})$ (28) $\mathbf{z}^{(t_i)}_{z_i}$ (29) (30) ... 2. Draw cited topic assignment: $z_{j}\!\sim\!\mbox{Cat}(\mathbf{a}_{z_{i}}^{(t_{i})})$ (29) 3. Draw cited document: $d_{j}\sim\mbox{Cat}(\mathbf{\phi}_{z_{j}})$ (30) where a (ti) k ∈ R K(ti−1) is the attention weight, which denotes the probability distribution across all previous topics zj ∈ {1, . . . , K}×{1*, . . . , t*i− 1}. The paper proportion ϕk ∈ R D denotes the probability distribution of cited documents where a topic k is assigned, as explained in next section. 
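As a rough, self-contained illustration of this three-step generative story (a sketch only: the array names, shapes, and toy dimensions below are our assumptions, not the authors' released code), the draws can be simulated as follows.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_citation(theta_i, attn_i, phi):
    """One forward sample of the citation generative story above.

    theta_i : (K,)   topic proportions of the citing document d_i
    attn_i  : (K, P) attention weights of each current topic over all P past topics
    phi     : (P, D) paper proportion: distribution over D documents for each past topic
    """
    z_i = rng.choice(theta_i.shape[0], p=theta_i)     # 1. citing topic   z_i ~ Cat(theta_i)
    z_j = rng.choice(attn_i.shape[1], p=attn_i[z_i])  # 2. cited topic    z_j ~ Cat(a^{(t_i)}_{z_i})
    d_j = rng.choice(phi.shape[1], p=phi[z_j])        # 3. cited document d_j ~ Cat(phi_{z_j})
    return z_i, z_j, d_j

# Toy shapes: K = 4 current topics, P = 8 past topics, D = 100 candidate documents.
normalize = lambda x: x / x.sum(axis=-1, keepdims=True)
theta_i = normalize(rng.random(4))
attn_i = normalize(rng.random((4, 8)))
phi = normalize(rng.random((8, 100)))
print(sample_citation(theta_i, attn_i, phi))

# Marginalizing the two topic draws instead of sampling them gives the citation
# probability of every candidate document; the vector sums to 1 over documents.
print((theta_i @ attn_i @ phi).sum())
```

Multiplying the same quantities out, as in the last line, yields the marginal citation probability that the regularizer described in Section 5.2 matches against the observed citations.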
![4_image_0.png](4_image_0.png) ## 5.1 Obtaining Paper Proportion From Bayes' theorem, we calculate the probability where a paper dj ∈ {1*, . . . , D*} is cited according to a topic zj as follows: $$\begin{array}{c c}{{p(d_{j}|z_{j})=\frac{p(z_{j}|d_{j})p(d_{j})}{p(z_{j})}}}&{{}}\\ {{}}&{{\propto p(z_{j}|d_{j})}}\end{array}\qquad\qquad(31)$$ Here, we assume that the prior p(dj ) is uniformly distributed, and p(zj ) can be ignored because it is constant regardless of dj . As p(dj |zj )=ϕ (dj ) zjand p(zj |dj )=θ (zj ) dj, ϕ (dj ) zjcan be simply computed by normalizing θ (zj ) dj: $$\phi_{z_{j}}^{(d_{j})}=\frac{\theta_{d_{j}}^{(z_{j})}}{\sum_{d_{j}}\theta_{d_{j}}^{(z_{j})}}$$ $$(32)$$ ## 5.2 Overall Inference And Elbo Hence, under our modeling assumption, the likelihood of a citation ci,j ∈ {0, 1} is given by $$p(c_{i,j}=1|\theta,\alpha)$$ $$=\sum_{k}^{K}\sum_{k^{\prime}}^{K}p(d_{j}|\phi_{k^{\prime}})p(z_{j}=k^{\prime}|\mathbf{a}_{k}^{(t_{i})})p(z_{i}=k|\theta_{i})\tag{33}$$ where ci,j = 1 indicates that the document di cites the document dj . Finally, the likelihood of both documents and citations can be described as follows: $$p(\mathbf{w}_{1:D},c_{1,1},\ldots,c_{D,D}|\sigma,\delta,\gamma)$$ $$=\int\biggl{\{}\prod_{i}\prod_{n}(\mathbf{\beta}^{(t_{i})}\cdot\mathbf{\theta}_{i})_{w_{i,n}}p(\mathbf{\theta}_{i}|\mathbf{\eta}_{t_{i}})\biggr{\}}$$ $$\biggl{\{}\prod_{t}\prod_{k}p(\mathbf{\eta}_{t}|\mathbf{\eta}_{t-1})p(\mathbf{\alpha}_{k}^{t}|\mathbf{\alpha}_{k}^{t-1})\biggr{\}}$$ $$\biggl{\{}\prod_{i}\prod_{j}p(c_{i,j}|\mathbf{\theta},\mathbf{\alpha})\biggr{\}}d\mathbf{\theta}d\mathbf{\eta}d\mathbf{\alpha}\tag{34}$$ The ELBO for both document and citation loglikelihood is derived as follows: i Eq(θi) hw⊤ ilog(β (ti)· θi) i L= X − X t X k DKLhq(α (t) k|α (1:t−1) 1:K )||p(α (t) k|α (t−1) k) i − X i DKLhq(θi|ηti , wi)||p(θi|ηti ) i − X t DKLhq(ηt|ηt−1, w˜t)||p(ηt|ηt−1) i j Eq(θi)q(ηt)q(α (t) k ) hlog p(ci,j |θ, α) i + X X i = Ldoc + Lcit (35) Here, Ldoc is defined in Eq. (27), and Lcit is defined as the following equation: $$L_{c i t}=\sum_{i}^{D}\sum_{j}^{D}\mathrm{BCE}\left[p(c_{i,j}|\mathbf{\theta},\mathbf{\alpha}),c_{i,j}\right]\quad\mathrm{(36)}$$ where BCE denotes the binary cross entropy. ## 6 Experiment 6.1 Experimental Setup1 Dataset In our experiments, we used ACL and CS dataset, which were based on the Semantic Scholar Open Research Corpus (S2ORC; Lo et al., 2020) 2. S2ORC contains over 136 million academic papers, each of which contains publication year, abstract text, cited paper's data, ACL ID, and field of study. We used the abstracts of papers that are published at *ACL conferences (ACL ID is not "None") from 2006 to 2019 for ACL dataset. For CS dataset, we used the abstracts of papers where the field of study includes "Computer Science" and are published from 2006 to 2019. We used the top 40,000 papers w.r.t. the number of citations for CS dataset. 1https://github.com/miyamotononno/DSNTM 2https://github.com/allenai/s2orc | Dataset | ACL | CS | |--------------------------|--------|--------| | # of time steps | 7 | 7 | | # of words in vocabulary | 5,540 | 10,449 | | # of docs for training | 14,110 | 23,991 | | # of docs for validation | 4,704 | 7,997 | | # of docs for evaluation | 4,704 | 7,998 | Table 1: Summary statistics of the datasets. The papers were randomly splitted into 3:1:1 ratio for training, validation, and evaluation. 
We also filtered out stop words, i.e., words with a document frequency of 70% or above, words appearing in less than ten documents, numbers, punctuation marks, and stop words used in Dieng et al. (2019). The papers published in two consecutive years were grouped into a single time step so that each time step contained a sufficient number of papers. For example, papers published between 2006 and 2007 were grouped into t= 1. The statistics of the datasets are summarized in Table 1. Baseline Methods As baseline methods, we measured the performance of ETM (Dieng et al., 2020) and D-ETM (Dieng et al., 2019) by using the published code of ETM3and D-ETM4. To evaluate the effectiveness of the self-attention mechanism, we also compared DSNTM that adopts a linear layer instead of the self-attention as shown in Eq. (22). We denote it by "DSNTM w/o self-attention." Implementation Details The number of topics was set to 20 for all models and kept constant over time for fair comparison with the baseline models. Hyperparameters of each model were tuned based on the validation perplexity in ACL dataset. Further details are provided in Appendix A. ## 6.2 Experimental Results We quantitatively evaluated the performance of topic models using the following three criteria. We run each model eight times and show the average performance and its 95% confidence interval in Table 2 and 3. Lower is better for perplexity, while higher is better for coherence and diversity. Perplexity We used perplexity (Rosen-Zvi et al., 2004) to evaluate the generalization ability of topic models as a generative model. It measures the 3https://github.com/adjidieng/ETM 4https://github.com/adjidieng/DETM | Perplexity | Coherence | Diversity | | |----------------------------------|--------------|-------------|-------------| | ETM (Dieng et al., 2020) | 1,590.6± 2.4 | 0.023±0.003 | 0.911±0.010 | | D-ETM (Dieng et al., 2019) | 1,187.8± 7.4 | 0.091±0.003 | 0.788±0.016 | | DSNTM w/o self-attention | 1,260.5±37.8 | 0.054±0.005 | 0.631±0.043 | | DSNTM | 1,079.9± 8.9 | 0.084±0.006 | 0.851±0.009 | | DSNTM w/ citation regularization | 1,054.0± 7.4 | 0.101±0.006 | 0.895±0.014 | Table 2: Evaluation of each model for ACL dataset. ETM (Dieng et al., 2020) 3,011.8± 4.4 0.022±0.002 0.956±**0.006** D-ETM (Dieng et al., 2019) 2,519.0±41.8 0.078±0.004 0.954±0.010 DSNTM w/o self-attention 2,195.2±25.2 0.078±0.004 0.904±0.019 DSNTM 2,185.0±19.2 0.079±0.009 0.929±0.009 DSNTM w/ citation regularization 2,156.8±30.5 0.105±**0.005** 0.948±0.006 Perplexity Coherence Diversity Table 3: Evaluation of each model for CS dataset. ability to predict words in unseen documents. Perplexity is computed as follows: $$\mbox{Perplexity}=\exp\biggl{(}-\frac{\sum_{d=1}^{D}\log\,p(\mathbf{w}_{d})}{\sum_{d=1}^{D}N_{d}}\biggr{)}\tag{37}$$ where Nd is the number of words in the test document d. As computing p(wd) is intractable, we calculated perplexity using ELBO following Miao et al. (2017); Srivastava and Sutton (2017). Across the two datasets, our DSNTM achieved a lower perplexity than the baseline models. DSNTM outperformed D-ETM by a large margin specifically for CS dataset. In addition, DSNTM with citation regularization outperformed DSNTM, indicating that citation information contributed to the generalization ability of the topic model. Coherence We measured topic coherence by calculating the average pointwise mutual information (Mimno et al., 2011) to assess the interpretability of topics. 
Specifically, we used the normalized pointwise mutual information (**NPMI**; Lau et al., 2014) of the two words included in the top 10 most likely words of the topic k. $${\mathrm{Coherence}}$$ Conference $ =\frac{1}{K}\sum_{k}^{K}\frac{1}{45}\sum_{i=1}^{10}\sum_{j=i+1}^{10}\text{NPMI}(w_i^{(k)},w_j^{(k)})$ (2) . j) (38) NPMI is calculated using the following formula: $$\text{NPMI}(w_{i},w_{j})=\frac{\log\frac{P(w_{i},w_{j})}{P(w_{i})P(w_{j})}}{\log P(w_{i},w_{j})}\tag{39}$$ where P(wi, wj ) is the probability where words wi and wj co-occurs in a document, and P(wi) is the marginal probability of word wi. Our DSNTM significantly outperformed ETM and DSNTM without self-attention, while achieving a slightly lower score than D-ETM. However, the citation regularization let DSNTM outperform D-ETM. This result demonstrates that the topic interpretability is sufficiently ensured by the citation regularization. Diversity We calculated the percentage of unique words in the top 25 frequent words of all topics to measure the diversity of topics following Dieng et al. (2020, 2019). $$\mathrm{Diversity}={\frac{N_{u}}{25K}}$$ i.e., the number of ways. $$(40)$$ 25K(40) where Nu denotes the number of unique words that appear in all topics. Both models achieved competitive scores with the baseline models. This result indicates that our models improves the topic quality, while ensuring sufficient topic diversity. ## 7 Discussion 7.1 Visualization Of Topic Transition $${\mathfrak{i}}{\mathfrak{s}})$$ We discuss that the attention weights capture academic topics merging and branching processes. Fig. 4 presents an example of topic merging on CS dataset. We show a topic about *motion tracking* in 2018-2019 (i.e., citing topic) and the two most influential topics on its emergence with respect to the attention weights in 2016-2017 (i.e., ![7_image_0.png](7_image_0.png) cited topic). Each cited topic represents *motion* tracking and *reinforcement learning*. To investigate the validity of this topic merging, we checked the citation relations between the top 50 papers w.r.t. the citing topic's paper proportion and the top 50 papers w.r.t. the cited topic's paper proportion using the test dataset. While many papers on the citing topic refer to papers on motion tracking in previous years, some papers refer to papers on reinforcement learning. As reinforcement learning is used as the learning method of a tracker to achieve a light computation and satisfactory tracking accuracy for object tracking, the topic of reinforcement learning greatly influences the topic of object tracking. The attention weights reveal the merging process of academic topics. Subsequently, we present an example of topic branching using ACL dataset (Fig. 5). We show a topic about *machine translation* in 2014-2015 (i.e., cited topic) and three subsequent topics that are most highly influenced by the cited topic (i.e., citing topic). Each citing topic describes *machine* translation (2016-2017) and *neural network* (20162017 and 2018-2019). We assessed the validity of this branching in the same manner as topic merging. As of 2014-2015, statistical machine translation (SMT) was predominant, whereas neural machine translation (NMT) was a nascent area in machine translation research. After 2016, NMT was intensively studied by incorporating SMT knowledge of SMT, while NMT models were imported into other text generation tasks (e.g., summarization). This trend induced the topics on neural network in 20162019. 
DSNTM successfully captures such topic branching processes in the academic literature. Finally, we present an overview of the topic transition process using ACL dataset (Fig. 6). The topics in the first, second, and third rows represent graph, *neural network*, and *social media*, respectively. We can follow the prevalence of the neural ![7_image_1.png](7_image_1.png) ![7_image_2.png](7_image_2.png) network techniques to other research areas, such as graphs and social media, by observing the frequent words in the topic and attention weights. DSNTM enables us to grasp the current trends in the research area without following the citations of articles. ## 7.2 Prediction Of Emerging Topics In this section, we discuss the predictive performance of emerging topics using DSNTM on CS dataset. We trained DSNTM on the papers in 20062017 and predicted topics in 2018-2019 by computing the posterior word distribution for each topic using the self-attention mechanism (Eq. (6), (13), and (14)). We then prepared another model trained on the papers from 2006-2019. The topics inferred by this model are regarded as a proxy for the ground truth of predicted topics. We discuss the quality of the predicted topics by comparing the inferred topics in 2018-2019 across two models. To measure the prediction performance, we calculated the minimum KL divergence between the word distribution of the predicted topics and ground truth topics: PK i=1 minj DKL[β pred i, β truth j]/K. This value measures the difference between the predicted topics and their nearest ground truth topics. We computed this value for topics in 2018-2019 and compared it with the average value computed for topics in 2006-2017, which provides the baseline of the predictive performance. We show that the average KL divergence for 2018-2019 (i.e., predictive performance) is 5.33, whereas that for 2006-2017 (i.e., baseline) is 5.24. This result indicates that the predicted topics are sufficiently close to the ground truth topics. Although further studies are needed, this result suggests that DSNTM can potentially predict emerging topics in the academic literature. ## 8 Conclusion In this study, we proposed a novel dynamicstructured neural topic model, DSNTM, which captures dependencies among topics using the selfattention mechanism. We also introduced a citation regularizer, which induces the attention weights to correspond to citation relations. Experimental results demonstrated that DNSTM outperforms previous dynamic topic models regarding perplexity and coherence while maintaining sufficient diversity across topics. In addition, DSNTM can identify the process of topic merging and branching while showing the potential to predict emerging topics. We expect that DSNTM will make it easier for non-specialists to keep track of the evolution of topics in a given research area without retracing the citations of copious articles and assist their search for a novel topic. ## Limitations As a limitation of the modeling assumption, DSNTM assumes that the number of topics is constant over time; however, this assumption is inappropriate for some time-series documents, such as scientific papers. As the number of scientific papers is increasing annually, increasing the number of topics over time would be appropriate for modeling the time-series evolution of academic literature. We used the abstracts of the papers as text, and the attention was computed using textual information. However, citations mainly appear in the body text when a paper cites other papers. 
Therefore, there might be a discrepancy between the attention among topics and the citation relation among papers because the attention cannot not consider information in the body text. In future work, it would be desirable to evaluate our model using a corpus containing the body text of the papers. Generally, topic models sometimes infer the incorrect information about topics, such as the frequent words appearing in topics, the topic proportion in each document, and the dependencies among topics. It would be the potential risk to induce the misunderstanding of users. ## Ethics Statement Our study complies with the ACL Ethics Policy. We used S2ORC (Lo et al., 2020, CC BY-NC 4.0), PyTorch (Paszke et al., 2019, BSD-style license) as scientific artifacts. Our study is conducted under the licenses and terms of the scientific artifacts. S2ORC is a collection of academic papers and generally does not contain any information that uniquely identifies individual people or offensive content. We did not use the author's information in our experiments. ## Acknowledgements We would like to thank the anonymous reviewers for their valuable feedback. This work was supported by NEDO JPNP20006, JST ACT-X JPMJAX1904, JST CREST JPMJCR21D1, Japan. ## References Amr Ahmed and Eric P Xing. 2010. Timeline: a dynamic hierarchical dirichlet process model for recovering birth/death and evolution of topics in text stream. In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, pages 20–29. J Atchison and Sheng M Shen. 1980. Logistic-normal distributions: Some properties and uses. *Biometrika*, 67:261–272. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. David M Blei and John D Lafferty. 2006. Dynamic topic models. In Proceedings of the 23rd international conference on Machine learning, pages 113–120. Jonathan Chang and David M Blei. 2010. Hierarchical relational models for document networks. The Annals of Applied Statistics, 4(1):124–150. Ziye Chen, Cheng Ding, Zusheng Zhang, Yanghui Rao, and Haoran Xie. 2021. Tree-structured topic modeling with nonparametric neural variational inference. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th* International Joint Conference on Natural Language Processing, pages 2343–2353. Rob Churchill and Lisa Singh. 2022. Dynamic topicnoise models for social media. In *Pacific-Asia Conference on Knowledge Discovery and Data Mining*, pages 429–443. Adji B Dieng, Francisco JR Ruiz, and David M Blei. 2019. The dynamic embedded topic model. *arXiv* preprint arXiv:1907.05545. Adji B Dieng, Francisco JR Ruiz, and David M Blei. 2020. Topic modeling in embedding spaces. *Transactions of the Association for Computational Linguistics*, 8:439–453. Thomas Griffiths, Michael Jordan, Joshua Tenenbaum, and David Blei. 2003. Hierarchical topic models and the nested chinese restaurant process. *Advances in* neural information processing systems, 16. Rem Hida, Naoya Takeishi, Takehisa Yairi, and Koichi Hori. 2018. Dynamic and static topic model for analyzing time-series document collections. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 516–520. Association for Computational Linguistics. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*, 9(8):1735– 1780. Masaru Isonuma, Junichiro Mori, Danushka Bollegala, and Ichiro Sakata. 2020. Tree-structured neural topic model. 
In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 800–806. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. International Conference on Learning Representations, pages 1– 15. Diederik P Kingma and Max Welling. 2014. Autoencoding variational bayes. In Proceedings of the 2nd International Conference on Learning Representations. Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 530–539. Wei Li and Andrew McCallum. 2006. Pachinko allocation: Dag-structured mixture models of topic correlations. In Proceedings of the 23rd international conference on Machine learning, pages 577–584. Kar Wai Lim and Wray Buntine. 2015. Bibliographic analysis with the citation network topic model. In Proceedings of the Sixth Asian Conference on Machine Learning, volume 39 of *Proceedings of Machine Learning Research*, pages 142–158. PMLR. Dahua Lin, Eric Grimson, and John Fisher. 2010. Construction of dependent dirichlet processes based on poisson processes. *Advances in neural information* processing systems, 23. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. *arXiv preprint arXiv:1703.03130*. Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, and Daniel Weld. 2020. S2ORC: The semantic scholar open research corpus. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4969–4983. Yishu Miao, Edward Grefenstette, and Phil Blunsom. 2017. Discovering discrete latent topics with neural variational inference. In *Proceedings of the 34th International Conference on Machine Learning*, pages 2410–2419. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. Advances in neural information processing systems, 26. David Mimno, Wei Li, and Andrew McCallum. 2007. Mixtures of hierarchical topics with pachinko allocation. In *Proceedings of the 24th international conference on Machine learning*, pages 633–640. David Mimno, Hanna Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. In Proceedings of the 2011 conference on empirical methods in natural language processing, pages 262–272. Riki Murakami, Basabi Chakraborty, and Yukari Shirota. 2021. Dynamic topic tracking and visualization using covid-19 related tweets in multiple languages. In *2021 International Conference on Artificial Intelligence and Big Data Analytics*, pages 16–21. Ramesh M Nallapati, Amr Ahmed, Eric P Xing, and William W Cohen. 2008. Joint latent topic models for text and citations. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 542–550. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32. Danilo J Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. 
In *Proceedings of the 31st International Conference on Machine* Learning, pages 1278–1286. Michal Rosen-Zvi, Thomas Griffiths, Mark Steyvers, and Padhraic Smyth. 2004. The author-topic model for authors and documents. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, page 487–494. Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. *arXiv preprint arXiv:1409.1556*. Akash Srivastava and Charles Sutton. 2017. Autoencoding variational inference for topic models. In Proceedings of the 5th International Conference on Learning Representations. Yuancheng Tu, Nikhil Johri, Dan Roth, and Julia Hockenmaier. 2010. Citation author topic model in expert search. In *The 23rd International Conference* on Computational Linguistics: Posters, pages 1265– 1273. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30:5998–6008. Delvin Ce Zhang and Hady Lauw. 2022. Dynamic topic models for temporal document networks. In *International Conference on Machine Learning*, pages 26281–26292. PMLR. ## A Implementation Details The hyperparameters of each model were tuned based on the perplexity of the validation set in the ACL. The model was trained using Adam (Kingma and Ba, 2015) with a batch size of 512. DSNTM and ETM were trained for 200 epochs with a learning rate of 6.0 × 10−4. As D-ETM was slow to converge, D-ETM was trained for 600 epochs with a learning rate of 8.0 × 10−4. We applied the learning rate decay for each model. To infer the topic embedding q(α (t) k|α (1:t−1) 1:K ), we used a linear layer for fq, fk, fv, fµ, fσ to compute the self-attention and the variational distribution in Eq. (14). The dimension of the topic embedding was set L = 300. We used the multihead attention (Vaswani et al., 2017) for the selfattention mechanism, where the number of parallel attention heads is 10. Regarding the following hyperparameters, we set the same hyperparameters as those used in DETM. To infer the topic proportion q(θd|ηtd , wd), We used one-hidden-layer MLPs with 800 hidden units and ReLU activation for fθ and a linear layer for fµ and fσ to compute the variational distribution in Eq. (25). To construct the inference of the topic proportion mean q(ηt|η1:t−1, w˜t), we first applied a linear layer to the BoW representation of documents at the time step t and obtain 200-dimensional input vector for LSTM. Then, we applied LSTM with three layers of 200 hidden units to the input, and obtain the hidden states of each time step ht. We used a linear layer for fµ and fσ to compute the variational distribution in Eq. (26). The variances of the priors were set to δ 2 =σ 2 = 0.005 and γ 2 = 1. We used 300-dimensional word embeddings pretrained with a skip-gram (Mikolov et al., 2013) used in ETM and D-ETM. We ran experiments with a single NVIDIA GeForce RTX 2080 Ti for each model. The computational cost and parameters of each model are reported in Table 4. Our DSNTM converged faster than D-ETM regardless citation regularization. The training time was longer when using the citation regularization as it calculates the loss in Eq. (36) with time complexity O(D2). Our code is implemented with Python v3.9.13, PyTorch v1.9.0 (Paszke et al., 2019). We use the pretrained word embeddings published by Dieng et al. (2019) 5. NPMI is computed using the code distributed by Lau et al. (2014) 6. 
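To connect these settings to the inference network of Section 4.1, the following PyTorch sketch shows one possible realization of the self-attention step with L = 300 and 10 attention heads. It is an illustrative re-implementation under our own assumptions (module names, the log-variance parameterization, and the explicit reparameterized sampling), not the code released with the paper.

```python
import torch
import torch.nn as nn

class TopicSelfAttention(nn.Module):
    """Sketch of the variational network for the topic embeddings (Section 4.1).

    fq, fk, fv are realized by the projections inside the multi-head attention;
    f_mu and f_logvar are the linear layers producing the variational parameters.
    """

    def __init__(self, emb_dim: int = 300, num_heads: int = 10):
        super().__init__()
        self.attn = nn.MultiheadAttention(emb_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(emb_dim)
        self.f_mu = nn.Linear(emb_dim, emb_dim)      # mean of q(alpha_k^(t))
        self.f_logvar = nn.Linear(emb_dim, emb_dim)  # log-variance of q(alpha_k^(t)) (assumed parameterization)

    def forward(self, prev_topics: torch.Tensor, last_topics: torch.Tensor):
        # prev_topics: (1, K*(t-1), L) transformed embeddings of all earlier topics (keys/values).
        # last_topics: (1, K, L) transformed embeddings at step t-1 (queries).
        delta, attn_weights = self.attn(last_topics, prev_topics, prev_topics)
        alpha_tilde = self.norm(delta + last_topics)                   # residual + layer norm
        mu, logvar = self.f_mu(alpha_tilde), self.f_logvar(alpha_tilde)
        alpha = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)    # reparameterized sample
        return alpha, alpha_tilde, attn_weights

# Example: K = 20 topics and t-1 = 3 past time steps, i.e., 60 previous topic embeddings.
net = TopicSelfAttention()
prev = torch.randn(1, 60, 300)
last = prev[:, -20:, :]
alpha, alpha_tilde, attn = net(prev, last)
print(alpha.shape, attn.shape)  # torch.Size([1, 20, 300]) torch.Size([1, 20, 60])
```

The returned attention weights play the role of a_k^(t) in Eq. (18), i.e., the quantities that citation regularization (Section 5) pushes toward observed citation relations.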
## B **Computing The Influence Among Topics** In Fig. 5, we do not directly use the attention weights to represent how much a past topic influences the emergence of new topics. This section describes its reason and how to calculate the influence of a topic on emerging topics. We assume that the influence of a topic zj on the emergence of topic zi can be represented by the probability where zi emerges given zj . From Bayes' theorem, we can calculate its probability as follows: $$\begin{array}{c}{{p(z_{i}|z_{j})=\frac{p(z_{j}|z_{i})p(z_{i})}{p(z_{j})}}}\\ {{\propto p(z_{j}|z_{i})p(z_{i})}}\end{array}\tag{41}$$ where p(zj ) can be ignored because it is constant regardless of zi. p(zj |zi) is represented by the attention weight from zito zj , denoted as azi→zj . p(zi) indicates the marginal probability where topic zi appears across all documents, which is calculated by the sum of its topic proportions across all documents. $$\begin{array}{c}{{p(z_{i})=\sum_{d}p(z_{i}|d)p(d)}}\\ {{=\sum_{d}\theta_{d}^{(z_{i})}}}\end{array}$$ $$(42)$$ where we assume that p(d) is uniformly distributed. Thus, we can obtain probability p(zi|zj ) as follows: $$v_{i,j}=a_{z_{i}\to z_{j}}\sum_{d}\theta_{d}^{(z_{i})}\qquad\qquad(43)$$ $$p(z_{i}|z_{j})=\frac{v_{i,j}}{\sum_{j}v_{i,j}}\qquad\qquad(44)$$ Therefore, we calculate the influence of a topic zj on the emergence of topic zi by considering the marginal probability of topic zi. Dataset ACL CS Parameters Time Memory Parameters Time Memory ETM (Dieng et al., 2020) 5,111,640 6 44 9,038,840 25 84 D-ETM (Dieng et al., 2019) 7,287,480 33 22 12,196,480 81 42 DSNTM w/o self-attention 7,480,380 9 22 12,389,380 23 42 DSNTM 8,022,780 9 23 12,931,780 29 43 DSNTM w/ citation regularization 8,022,780 14 23 12,931,780 55 44 Table 4: Computational cost of each model. Parameters, Time, and Memory denote the total number of model parameters, the total training time (minute) and the peak amount of the memory usage (MB), respectively. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section "Limitations" ✗ A2. Did you discuss any potential risks of your work? Section "Limitations" ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract, Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 6,7, Appendix A ✓ B1. Did you cite the creators of artifacts you used? Section 6.1, Appendix A B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section "Ethics Statement" ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section "Ethical Statement" ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 6.1 ✓ B6. 
Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 6.1 ## C ✓ **Did You Run Computational Experiments?** Section 6, 7 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix A The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 6.1, Appendix A ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Appendix A ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix A D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
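For completeness, a small NumPy sketch of the topic-influence computation described in Appendix B above, i.e., Eqs. (41)-(44); the array layout and function name are our own:

```python
import numpy as np

def topic_influence(attention, theta):
    """p(z_i | z_j) per Eqs. (41)-(44).

    attention: (K_new, K_past) attention weights a_{z_i -> z_j}
    theta:     (D, K_new) topic proportions theta_d^{(z_i)} over all documents
    """
    marginal = theta.sum(axis=0)                 # sum_d theta_d^{(z_i)}, Eq. (42)
    v = attention * marginal[:, None]            # v_{i,j} = a_{z_i->z_j} * p(z_i), Eq. (43)
    return v / v.sum(axis=1, keepdims=True)      # normalize over j, Eq. (44)
```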
wang-etal-2023-hybrid
Hybrid-Regressive Paradigm for Accurate and Speed-Robust Neural Machine Translation
https://aclanthology.org/2023.findings-acl.367
This work empirically confirms that non-autoregressive translation (NAT) is less robust in decoding batch size and hardware settings than autoregressive translation (AT). To address this issue, we demonstrate that prompting a small number of AT predictions can significantly reduce the performance gap between AT and NAT through synthetic experiments. Following this line, we propose hybrid-regressive translation (HRT), a two-stage translation prototype that combines the strengths of AT and NAT. Specifically, HRT first generates discontinuous sequences via autoregression (e.g., make a prediction for every $k$ tokens, $k>1$) and then fills in all previously skipped tokens at once in a non-autoregressive manner. Experiments on five translation tasks show that HRT achieves comparable translation quality with AT while having at least 1.5x faster inference regardless of batch size and device. Additionally, HRT successfully inherits the sound characteristics of AT in the deep-encoder-shallow-decoder architecture, allowing for further speedup without BLEU loss.
# Hybrid-Regressive Paradigm For Accurate And Speed-Robust Neural Machine Translation Qiang Wang1,2, Xinhui Hu2**, Ming Chen**2∗ 1Zhejiang University, Hangzhou, China 2RoyalFlush AI Research Institute, Hangzhou, China {wangqiang3, huxinhui}@myhexin.com, [email protected] ## Abstract This study provides empirical evidence that non-autoregressive translation (NAT) is less robust in decoding batch size and hardware settings than autoregressive translation (AT). To address this issue, we demonstrate that incorporating a small number of AT predictions can significantly reduce the performance gap between AT and NAT through synthetic experiments. In line with this, we propose hybridregressive translation (HRT), a two-stage translation prototype that combines the strengths of AT and NAT. Specifically, HRT initially generates discontinuous sequences using autoregression (e.g., making predictions for every k tokens, k > 1), and then fills in all previously skipped tokens simultaneously in a nonautoregressive manner. Experimental results on five translation tasks show that HRT achieves comparable translation quality to AT while providing at least 1.5x faster inference, irrespective of batch size and device. Moreover, HRT successfully retains the desirable characteristics of AT in the deep-encoder-shallow-decoder architecture, enabling further speed improvements without sacrificing BLEU scores.1 ## 1 Introduction Autoregressive translation (AT) such as Transformer has been the *de facto* standard for Neural Machine Translation (NMT) (Vaswani et al., 2017). However, AT predicts only one target word at a time, resulting in slow inference speed. To overcome this limitation, non-autoregressive translation (NAT) attempts to generate the entire target sequence in one step by assuming conditional independence among target tokens (Gu et al., 2018). While NAT offers efficiency, it often suffers from significant degradation in translation quality. Achieving a better trade-off between inference speed and translation quality remains an active area ∗Corresponding author. 1https://github.com/wangqiangneu/hrt of research for NAT (Wang et al., 2018a; Ran et al., 2020; Qian et al., 2021; Huang et al., 2022b,a). One of the most successful approaches to this issue is the iterative refinement mechanism (IRNAT) proposed by Lee et al. (2018), which has been widely adopted by several leading systems (Ghazvininejad et al., 2019; Kasai et al., 2020a; Guo et al., 2020; Saharia et al., 2020; Geng et al., 2021; Huang et al., 2022b). Specifically, IR-NAT, also known as multi-shot NAT, takes the translation hypothesis from the previous iteration as a reference to refine the new translation until it reaches the predefined iteration count I or no translation changes. Although a larger I can improve translation accuracy, it may also lead to a speedup degradation (Kasai et al., 2020b). In this work, we build upon the findings of Kasai et al. (2020b) and examine the robustness of IR-NAT compared to AT. Our comprehensive examinations confirm that the inference speed of IRNAT is consistently less robust than that of AT when involving various decoding batch sizes and computing hardware. For example, when using a GPU, the ten-iteration non-autoregressive model has 1.7/1.2/0.7/0.4 times the inference speed of the AT model for decoding batch sizes of 1/8/16/32, respectively. However, when switching to CPU, the relative speed ratio drops to 0.8/0.4/0.3/0.3 times. 
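A minimal sketch of how such relative speed ratios can be measured, assuming `at_decode` and `nat_decode` are callables that translate one batch (both names are ours; the paper's exact benchmarking harness is not shown):

```python
import time

def relative_speedup(at_decode, nat_decode, batches, runs=5):
    """Ratio of AT to NAT wall-clock decoding time (alpha > 1 means NAT is faster)."""
    def timed(decode):
        start = time.perf_counter()
        for _ in range(runs):
            for batch in batches:
                decode(batch)
        return (time.perf_counter() - start) / runs

    return timed(at_decode) / timed(nat_decode)

# Repeat for each batch size (1, 8, 16, 32) and device (GPU, CPU) to chart alpha.
```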
Previous studies have highlighted the complementary nature of AT and NAT in terms of both translation quality (AT being superior) and inference speed (NAT being superior) (Wang et al., 2018a; Ran et al., 2020). Our findings, however, suggest that there is also complementary robustness in inference speed (AT being superior). Taking a further step, we investigate how much target context (i.e., the number of target tokens) is sufficient for one-shot NAT to rival multi-shot NAT through synthetic experiments. Our findings suggest that given a well-trained CMLM model, even if 70% of AT translations are masked, the ![1_image_0.png](1_image_0.png) remaining target context can help the CMLM1 with greedy search compete with the standard CMLM10 with beam search (see Figure 2). This could enable us to build the desired target context more cheaply, replacing expensive multiple iterations. To our best knowledge, this is the first study of the masking rate issue in the inference phase of NAT. Based on the observations from these experiments, we have proposed a novel two-stage translation prototype called hybrid-regressive translation (HRT). This method combines the advantages of autoregressive translation (AT) and nonautoregressive translation (NAT) by first using an autoregressive decoder to generate a discontinuous target sequence with an interval of k (k > 1), and then filling the remaining slots in a lightweight non-autoregressive manner. We have also created a multi-task learning framework, enhanced by curriculum learning, for effective and efficient training without adding any model parameters. Results on WMT En↔Ro, En↔De, and NIST Zh→En show that HRT outperforms prior work combining AT and NAT, and is competitive with state-of-theart IR-NAT models. Specifically, HRT achieved a BLEU score of 28.27 on the WMT En→De task, and is 1.5x faster than AT regardless of batch size and device. Additionally, HRT equipped with a deep-encoder-shallow-decoder architecture achieved up to 4x/3x acceleration on GPU/CPU, respectively, without sacrificing BLEU. ## 2 Background Given a source sentence x = {x1, x2*, . . . , x*M} and a target sentence y = {y1, y2*, . . . , y*N }, there are several ways to model P(y|x): Autoregressive Translation (AT) AT is the predominant technique in NMT, decomposing P(y|x) using the chain rule: P(y|x) = QN t=1 P(yt|x, y<t), where y<t denotes the prefix translation generated before time step t. Nevertheless, autoregressive models must wait for yt−1 to be generated before predicting yt, thus hindering parallelism over the target sequence. Non-Autoregressive Translation (NAT) NAT has been proposed to generate target tokens simultaneously (Gu et al., 2018). This approach replaces the traditional autoregressive formulation of y<t with a target-independent input z, resulting in the following formulation: P(y|x) = P(N|x) ×QN t=1 P(yt|x, z). Various approaches have been proposed for modeling z, such as using source embedding (Gu et al., 2018; Guo et al., 2019), reordering the source sentence (Ran et al., 2019), or using a latent variable (Ma et al., 2019; Shu et al., 2019). Iterative Refinement based Non-Autoregressive Translation (IR-NAT) IR-NAT extends the traditional one-shot NAT by introducing an iterative refinement mechanism (Lee et al., 2018). We choose CMLM (Ghazvininejad et al., 2019) as the representative of IR-NAT due to its excellent performance and simplification. 
During training, CMLM randomly masks a fraction of tokens on y as the alternative to z, and is trained as a conditional masked language model (Devlin et al., 2019). Denoting y m/y ras the masked/residual tokens of y, we have: P(y|x) = Q|ym| t=1 P(y m t|x, y r). At inference, CMLM deterministically masks tokens from the hypothesis in the previous iteration yˆ (i−1) according to prediction confidences. This process is iterated until yˆ (i−1)=yˆ (i) or i reaches the maximum iteration count. ## 3 Acceleration Robustness Problem In this section, we comprehensively analyze the inference acceleration robustness problem in IRNAT. Without loss of generality, we take CMLM as the agency of IR-NAT.2 Problem Description The inference overhead of the autoregressive translation model mainly con-2From the perspective of inference speed, we note that most one-shot NAT models are closed to CMLM1. Especially, existing one-shot NAT models with CTC loss, such as GLAT and Fully-NAT, are theoretically slower than CMLM1 because they require a longer target sequence for inference. centrates on the decoder side(Hu et al., 2020). Suppose that the decoder's computational cost is proportional to the size of its input tensor (*B, N, H*), where B is the batch size, N is the target sequence length, and H is the network dimension. We omit H for convenience due to its invariance in NAT and AT. Thus, the total cost of AT model is about Cat ∝ N × O(B × 1) 3. Likely, the cost of Iiteration NAT is Cnat ∝ I × O(B × N). Given a fixed test set, We can use TD(·) to represent the translation time on computing device D. This allows us to calculate the relative speedup ratio α between I-iteration NAT and AT as: $$\alpha=\frac{T_{D}(C_{a t})}{T_{D}(C_{n a t})}\propto\frac{N}{I}\times\mathcal{E}(B,D),\qquad(1)$$ where E(*B, D*)= TD(O(B×1)) TD(O(B×N)) ≤ 1, denotes the parallel computation efficiency over sequence under batch size B and device D. When fixing N and I, α is completely determined by E(*B, D*). We note that most previous NAT studies only report the inference speed with D=GPU and B=1, without considering cases where B or D change. Setup We systematically investigate the inference speed of CMLM 4and AT under varying environments, including batch size B ∈ {1, 8, 16, 32}, device D ∈ {GPU, CPU} 5, and the number of iterations I ∈ {1, 4, 10}, using a beam size of 5. We test inference speed on the widely used WMT En→De *newstest2014* test set and report the average results over five runs (see Appendix A for details). Results We plot the curve of relative speedup ratio (α) in Figure 1 and observe that: i. α decreases as decoding batch size increases regardless of the number of iterations, as noted by Kasai et al. (2020b). ii. α on CPU generally performs worse than GPU, except when using one iteration. For instance, when decoding a single sentence on the GPU, the inference speed of the ten-iteration non-autoregressive model is 170% that of the autoregressive model. However, when switching to batches of 32 on CPU, the IR-NAT model only reaches 30% of the AT model's inference speed. These results demonstrate that AT and NAT possess different strengths, and combining the advantages of both models could be an effective way to achieve robust acceleration. ## 4 Synthetic Experiments According to Equation 1, reducing the iteration count I helps to increase α. 
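Before moving on, a rough sketch of the CMLM-style mask-predict refinement loop discussed above; the `cmlm` scoring function is a placeholder of our own, and beam handling and length prediction are omitted:

```python
import torch

def mask_predict(cmlm, src, tgt_len, mask_id, num_iters=10):
    """Iteratively re-mask and re-predict the least confident tokens (CMLM-style)."""
    tokens = torch.full((tgt_len,), mask_id)            # start fully masked
    scores = torch.zeros(tgt_len)
    for i in range(num_iters):
        probs = cmlm(src, tokens)                        # (tgt_len, vocab) probabilities
        is_masked = tokens.eq(mask_id)
        new_scores, new_tokens = probs.max(dim=-1)
        tokens = torch.where(is_masked, new_tokens, tokens)
        scores = torch.where(is_masked, new_scores, scores)
        # linearly decay how many low-confidence positions are re-masked
        n = int(tgt_len * (1 - (i + 1) / num_iters))
        if n == 0:
            break
        remask = scores.topk(n, largest=False).indices
        tokens[remask] = mask_id
        scores[remask] = 0.0
    return tokens
```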
Recalling the refinement process of IR-NAT, we hypothesize that the essence of multiple iterations is to provide the decoder with a good enough target context (deterministic target tokens). This raises the question of *how many target tokens need to be provided to* make one-shot NAT competitive with IR-NAT? To answer it, we conduct synthetic experiments on WMT En→Ro and En→De to control the size of the target context by masking the partial translations generated by a pre-trained AT model. We then use a pre-trained CMLM model to predict these masks and observe the BLEU score curves under different masking rates. Models We use the official CMLM models. Since the authors did not release the AT baselines, we used the same data to retrain AT models with the standard Transformer-Base configuration (Vaswani et al., 2017) and obtain comparable performance with the official ones (see Appendix B for details). Decoding AT models decode with beam sizes of 5 on both tasks. Then we replace a certain percentage of AT tokens with [MASK] and feed them to CMLM. The used CMLM model only iterates once with beam size 1. We substitute all [MASK]s with CMLM's predictions to obtain the final translation. We report case-sensitive tokenized BLEU scores by *multi-bleu.perl*. Mask Strategies We tested four strategies to mask AT results: HEAD, TAIL, RANDOM, and CHUNK. Given the masking rate p*mask* and the translation length N, the number of masked tokens is N*mask*=max(1, ⌊N×p*mask*⌋). Then HEAD/TAIL always masks the first/last N*mask* tokens, while RANDOM masks the translation randomly. CHUNK is slightly different from the above strategies. It first divides the target sentence into C chunks, where ![3_image_0.png](3_image_0.png) C = Ceil(N/k) and k is the chunk size. Then in each chunk, we retain the first token but mask other k−1 tokens. Thus, the actual masking rate in CHUNK is 1− 1/k instead of p*mask*. We ran RANDOM three times with different seeds to exclude randomness and report the average results. Results The experimental results in Figure 2 demonstrate that CHUNK is moderately and consistently superior to RANDOM, and both strategies significantly outperform HEAD and TAIL. We attribute this success to the use of (1) bidirectional context (Devlin et al., 2019) (vs. HEAD and TAIL), and (2) the uniform distribution of deterministic tokens (vs. RANDOM) 6. Furthermore, when using the CHUNK strategy, we find that exposing 30% AT tokens as the input of the decoder is enough to make our CMLM1(beam=1) competitive with the official CMLM10(beam=5), which emphasizes the importance of a good partial target context. ## 5 Hybrid-Regressive Translation We propose a novel two-stage translation paradigm, Hybrid-Regressive Translation (HRT), which imitates the CHUNK process. In HRT, a discontinuous sequence with a chunk size of k is autoregressively generated in stage I, followed by nonautoregressive filling of the skipped tokens in stage II. ## 5.1 Architecture Overview HRT consists of three components: encoder, Skip-AT decoder (for stage I), and SkipCMLM decoder (for stage II). All components adopt the Transformer architecture (Vaswani et al., 2017). The two decoders have the same network structure, and we share them to make the parameter size of HRT the same as the vanilla Transformer. The only difference between the two decoders lies in the masking pattern in self-attention: The Skip-AT decoder masks future tokens to guarantee strict left-to-right generation like the autoregressive Transformer. 
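Concretely, the two self-attention masking patterns can be written as follows (an illustrative sketch only; the shared decoder layers themselves are omitted):

```python
import torch

def skip_at_mask(n):
    """Causal mask: position i may only attend to positions <= i (left-to-right)."""
    return torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)

def skip_cmlm_mask(n):
    """No masking: every position attends to the full bi-directional context."""
    return torch.zeros(n, n, dtype=torch.bool)

# Either mask can be passed as the target self-attention mask (e.g., `tgt_mask` of
# torch.nn.TransformerDecoder), where True marks a disallowed attention position,
# so the same shared decoder weights serve both stages.
```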
In contrast, the Skip-CMLM decoder eliminates it to leverage the bi-directional context like CMLM (Ghazvininejad et al., 2019). No Target Length Predictor Thanks to Skip-AT, we can obtain the translation length as a by-product: Nnat=k × Nat, where Nat is the sequence length produced by Skip-AT. Our approach has two major advantages over most NAT models, which jointly train both the translation length predictor and the translation model. Firstly, there is no need to carefully adjust the weighting coefficient between the sentence-level length prediction loss and the wordlevel target token prediction loss. Secondly, the length predicted by Skip-AT is more accurate due to its access to the already-generated sequence information. ![4_image_0.png](4_image_0.png) ## 5.2 Training Next, we elaborate on how to train the HRT model efficiently and effectively. Please refer to Appendix C for the entire training algorithm. Multi-Task Framework We learn HRT by jointly training four tasks: two primary tasks (TASK-SKIP-AT, TASK-SKIP-CMLM) and two auxiliary tasks (TASK-AT, TASK-CMLM). All tasks use cross-entropy as the training objective. Figure 3 illustrates the differences in training samples among these tasks. Notably, TASK-SKIP-AT shrinks the sequence length from N to N/k, while preserving the token positions from the original sequence. For example, in Figure 3 (c), the position of TASK-SKIP-AT input ([B2], y2, y4) is (0, 2, 4). Auxiliary tasks are necessary to leverage all tokens in the sequence, as the two primary tasks are limited by the fixed k. For example, in Figure 3 (c) and (d), y1 and y3 cannot be learned as the decoder input of either TASK-SKIP-AT or TASK-SKIP-CMLM. Curriculum Learning To ensure the model is not overly biased towards auxiliary tasks, we propose gradually transferring the training tasks from auxiliary tasks to primary tasks through curriculum learning (Bengio et al., 2009). We start with a batch of original sentence pairs B, and let the proportion of primary tasks in B be pk=0. We construct the training samples of TASK-AT and TASK-CMLM for all pairs, then gradually increase pk to introduce more learning signals for TASK-SKIP-AT and TASK-SKIP-CMLM until pk=1. We schedule pk by pk = (t/T) λ, where t and T are the current and total training steps, and λ is a hyperparameter set to 1 for linear increase. ![4_image_1.png](4_image_1.png) ## 5.3 Inference As illustrated in Figure 4, HRT adopts a two-stage generation strategy. In the first stage, the Skip-AT decoder autoregressively generates a discontinuous target sequence yˆat = (z1, z2*, . . . , z*m) with chunk size k, starting from [BOSk] and ending with [EOS]. Then, the input of Skip-CMLM decoder ynat is constructed by appending k − 1 [MASK]s before every zi. The final translation is generated by replacing all [MASK]s with the predicted tokens after one iteration of the Skip-CMLM decoder. If multiple [EOS]s exist, we truncate to the first [EOS]. The beam sizes bat and bnat can be different from each other, as long as bat ≥ bnat. In our implementation, we use standard beam search in Skip-AT (bat >1) and greedy search in Skip-CMLM (bnat=1). Table 3 provides more details on the beam size setting in HRT. The translation hypothesis with the highest score S(ˆy) is chosen by summing the Skip-AT score and the Skip-CMLM score: ![5_image_2.png](5_image_2.png) where zi=yˆi×k. 
$$S(\hat{\pmb{y}})=\underbrace{\sum_{i=1}^{m}\log P(z_{i}\mid\pmb{x},\pmb{z}_{<i})}_{\text{Skip-AT score}}+\underbrace{\sum_{i=0}^{m-1}\sum_{j=1}^{k-1}\log P(\hat{y}_{i\times k+j}\mid\pmb{x},\pmb{y}_{nat})}_{\text{Skip-CMLM score}}\tag{2}$$

## 5.4 Discussion

The basic idea of HRT is to apply autoregressive translation (AT) and non-autoregressive translation (NAT) in sequence. This concept has been investigated before by Kaiser et al. (2018), Ran et al. (2019), and Akoury et al. (2019). The main differences between these methods lie in the content of the AT output, such as latent variables (Kaiser et al., 2018), reordered source tokens (Ran et al., 2019), and syntactic labels (Akoury et al., 2019). In contrast, our approach uses deterministic target tokens following Ghazvininejad et al. (2019). HRT is also related to chunk-wise decoding, another line of work incorporating AT and NAT. Table 1 shows the differences between HRT and prior studies, including SAT (Wang et al., 2018a), RecoverSAT (Ran et al., 2020), and LAT (Kong et al., 2020). SAT and LAT follow a left-to-right generation order, behaving similarly to HEAD as described in Section 4. In contrast, RecoverSAT and HRT generate discontinuous target contexts, which have been shown to perform better than HEAD according to our synthetic experiments. However, RecoverSAT cannot accurately generate the discontinuous context "a,c,e" via non-autoregression, resulting in error propagation when generating "b,d,f", whereas HRT produces "a,c,e" through accurate autoregression. Additionally, although HRT requires more decoding steps, its non-autoregressive process is inexpensive due to greedy search, while other methods require larger beams to explore translations of different lengths.

| Method     | Generation             |
|------------|------------------------|
| SAT        | a, b → c, d → e, f     |
| RecoverSAT | a, c, e → b, d, f      |
| LAT        | {a → b → c, d → e → f} |
| HRT (Ours) | a → c → e ⇢ b, d, f    |

![5_image_0.png](5_image_0.png) ![5_image_1.png](5_image_1.png)

## 6 Experimental Results

Setup We conduct experiments on five tasks, including WMT'16 English↔Romanian (En↔Ro, 610k), WMT'14 English↔German (En↔De, 4.5M), and the long-distance language pair NIST Chinese→English (Zh→En, 1.8M). For fair comparisons, we replicate the same data processing as Ghazvininejad et al. (2019) on the four WMT tasks and follow the setup of Wang et al. (2018b) for Zh→En. Like previous work, we train HRT through sequence-level knowledge distillation (Kim and Rush, 2016). Specifically, we use the standard Transformer-Base as teacher models for En↔Ro and Zh→En, while we use the deep PreNorm Transformer-Base with a 20-layer encoder for the harder En↔De. We run all experiments on four 2080Ti GPUs. Unless noted otherwise, we use the chunk size k=2. We fine-tune HRT models on pre-trained AT models and take 100k/300k/100k training steps for En↔Ro/En↔De/Zh→En, respectively. Other training hyperparameters are the same as Vaswani et al. (2017) or Wang et al. (2019) (deep-encoder). We report both case-sensitive tokenized BLEU scores and SacreBLEU 7. We also report COMET as suggested by Helcl et al. (2022).

Beam Size on HRT We first verify the influence of the two beam sizes of HRT (bat and bnat) on the BLEU score and relative speedup ratio by testing three different setups. The results are listed in Table 3.

| bat | bnat | En→Ro | En→De | α(AG) | α(AC) |
|-----|------|-------|-------|-------|-------|
| 1   | 1    | 34.16 | 28.19 | 2.1   | 2.8   |
| 5   | 1    | 34.37 | 28.27 | 1.7   | 1.6   |
| 5   | 5    | 34.36 | 28.50 | N/A   | 1.1   |
Consistent with our observations in synthetic experiments, using bnat=1 only slightly reduces BLEU but significantly improves decoding efficiency. Considering the trade-off between translation quality and speed, and for a fair comparison with other baselines (most prior related work uses beam size of 5), we use bat=5 and bnat=1 unless 7Signature: BLEU+case.mixed+lang.source-*target*+numrefs.1+smooth.exp+tok.13a+version.1.5.1 | System | Param. | Iter. | WMT'16 | WMT'14 | COMET | | | | |--------------------------------------|------------------------------|----------|---------------|---------------|---------------|---------------|---------------|--------| | En-Ro | Ro-En | En-De | De-En | | | | | | | Existing systems | | | | | | | | | | AXE (Ghazvininejad et al., 2020a) | - | 1 | 30.75 | 31.54 | 23.53 | 27.90 | - | | | NAT | GLAT+CTC (Qian et al., 2021) | - | 1 | 32.79 | 33.84 | 26.39 | 29.54 | - | | Fully-NAT (Gu and Kong, 2021) | - | 1 | 33.79 | 34.16 | 27.49 | 31.39 | - | | | DA-Transformer (Huang et al., 2022a) | 73M | 1 | - | - | 27.91 | 31.95 | - | | | CMLM (Ghazvininejad et al., 2019) | 76M | 10 | 33.08 | 33.31 | 27.03 | 30.53 | 0.4338 | | | LevTransformer (Gu et al., 2019) | - | Adaptive | - | - | 27.27 | - | - | | | Iterative NAT | JM-NAT (Guo et al., 2020) | - | 10 | 33.52 | 33.72 | 27.69 | 32.24 | - | | SMART (Ghazvininejad et al., 2020b) | - | 10 | - | - | 27.65 | 31.27 | - | | | DisCO (Kasai et al., 2020a) | - | Adaptive | 33.22 | 33.25 | 27.34 | 31.31 | - | | | Imputer (Saharia et al., 2020) | - | 8 | 34.40 | 34.10 | 28.20 | 31.80 | - | | | RewriteNAT (Geng et al., 2021) | - | Adaptive | 33.63 | 34.09 | 27.83 | 31.52 | - | | | CMLMC (Huang et al., 2022b) | - | 10 | 34.57 | 34.13 | 28.37 | 31.41 | - | | | Semi-NAT | SAT (Wang et al., 2018a) | - | N/2 | - | - | 26.90 | - | - | | SynST (Akoury et al., 2019) | - | N/6 + 1 | - | - | 20.74† | 25.50† | - | | | ReorderNAT (Ran et al., 2019) | - | N + 1 | 31.70 | 31.99 | 26.49 | 31.13 | - | | | RecoverSAT (Ran et al., 2020) | - | N/2 | 32.92 | 33.19 | 27.11 | 31.67 | - | | | Our implementations | | | | | | | | | | AT (teacher for En↔Ro) | 61M | N | 34.25(34.2† ) | 34.40(34.0† ) | 27.45(26.9† ) | 31.86(31.6† ) | 0.4779 | | | Raw | AT20−6 (teacher for En↔De) | 105M | N | - | - | 28.79(28.2† ) | 33.02(32.8† ) | 0.5201 | | HRT | 61M | N/2 + 1 | 33.59(33.5† ) | 32.98(32.9† ) | 26.69(26.2† ) | 30.58(30.3† ) | 0.4331 | | | AT | 61M | N | 34.14(33.9† ) | 34.06(33.8† ) | 28.24(27.7† ) | 31.95(31.7† ) | 0.4922 | | | Distillation | SAT | 61M | N/2 | - | - | 26.47(25.9† ) | 29.40(29.1† ) | 0.1848 | | GLAT-CTC | 62M | 1 | - | - | 26.59(26.0† ) | 29.73(29.4† ) | 0.1712 | | | HRT | 61M | N/2 + 1 | 34.37(34.2† ) | 34.14(33.9† ) | 28.27(27.7† ) | 32.02(31.7† ) | 0.4881 | | | HRT20−6 | 105M | N/2 + 1 | - | - | 29.06(28.5† ) | 33.20(32.9† ) | 0.5098 | | | Model | MT04 | MT05 | MT08 | |--------------|--------|--------|--------| | AT (teacher) | 43.86 | 52.91 | 33.94 | | CMLM10 | 42.47 | 52.16 | 33.09 | | HRT | 43.81 | 52.99 | 34.17 | ## Otherwise Stated. Main Results We compare the performance of HRT with existing systems in different translation paradigms on four WMT tasks, as shown in Table 2. HRT with distillation data consistently outperforms that of raw data and most existing NAT, IR-NAT, and Semi-NAT models, obtaining a BLEU score of 28.27 on the widely used En→De task. 
Compared to the re-implemented typical semi-autoregressive model (SAT) and one-shot non-autoregressive model (GLAT-CTC), HRT obtains an improvement of approximately 1.7 BLEU points, with a more significant margin in COMET score. Moreover, HRT20−6 can improve by 0.7 BLEU and 0.02 COMET when using a deeper encoder. Interestingly, the evaluation results of BLEU and COMET are inconsistent, as observed by Helcl et al. (2022). For instance, HRT20−6 has higher BLEU score than AT20−6 on En→De, but its COMET score is still lower. Furthermore, the experimental results on the Zh→En task, as reported in Table 4, demonstrate that the effectiveness of HRT is agnostic to language pairs, as it is close or superior to the original AT and CMLM model. We attribute this to two reasons: (1) HRT is fine-tuned on a well-trained AT model; (2) Multi-task learning on autoregressive and non-autoregressive tasks has better regularization than training alone. ## 7 Analysis Impact of Chunk Size We tested chunk size k on the En→De task, as shown in Table 5. We observed that larger values of k had a more significant speedup on the GPU, as fewer autoregressive steps were required. However, as k increased, the performance of HRT dropped sharply; for example, k=4 was about 1.24 BLEU points lower than k=2 on the test set. This suggests that the training difficulty of Skip-AT increases as k becomes larger. Further investigation into more sophisticated training algorithms to address this is left for our future work. | Chunk | Valid | Test | α(AG) | α(AC) | |---------|---------|--------|---------|---------| | 2 | 26.44 | 28.27 | 1.7 | 1.6 | | 3 | 26.34 | 27.92 | 2.5 | 2.3 | | 4 | 25.60 | 27.03 | 3.2 | 2.9 | | Stage I | Stage II | BLEU | ∆ | |-----------|------------|--------|-------| | HRT | HRT | 28.27 | ref. | | HRT20−6 | HRT20−6 | 29.06 | +0.79 | | HRT | HRT20−6 | 28.42 | +0.15 | | HRT20−6 | HRT | 28.88 | +0.61 | Table 5: Effects of chunk size (k) on BLEU and α. ## Which Decoding Stage Is More Important? To understand the importance of the two decoding stages of HRT, we exchange the intermediate results of two HRT models (A and B). Specifically, we use the Skip-AT decoder of A to generate its discontinuous target sequence, which is then forced decoded by B's Skip-AT decoder to obtain corresponding encoding representations and autoregressive model scores. Finally, B's Skip-CMLM decoder generates the complete translation result based on these. We can reverse the order of A and B as well. We use two models (HRT and HRT20−6) with a large performance gap as A and B, respectively. As shown in Table 6, we find that using the result of stage I of the strong model brings a greater improvement (+0.61 BLEU) than that of stage II (+0.15 BLEU). This result supports our hypothesis that a good partial target context is essential. Deep-encoder-shallow-decoder Architecture Kasai et al. (2020b) showed AT with deepencoder-shallow-decoder architecture can speed up translation without sacrificing accuracy, while CMLM fails. To validate whether HRT can inherit this, we compared HRT and AT with a 12-layer encoder and 1-layer decoder (HRT12−1 and AT12−1), using the same distillation data. As Table 7 shows, both AT12−1 and HRT12−1 benefit from the layer allocation, achieving comparable BLEU scores and double the decoding speed of the vanilla models. Specifically, HRT12−1 achieved an average acceleration of 4.2x/3.1x over the AT baselines. This suggests HRT12−1's success was due to Skip-AT rather than Skip-CMLM. 
However, its COMET scores are lower than 6-6 architecture. Table 8: Ablation study on En→De task. | System | BLEU | ∆ | |--------------|--------|-------| | HRT (T=300k) | 28.27 | ref. | | −FT | 28.00 | -0.27 | | −CL (pk=1) | 27.53 | -0.74 | | −CL (pk=0.5) | 27.75 | -0.52 | | −TS (T=100k) | 27.82 | -0.45 | | −ALL | 26.59 | -1.68 | | Model | BLEU | COMET | α(AG) | α(AC) | |---------|--------|---------|---------|---------| | AT | 28.24 | 0.4922 | ref. | ref. | | AT12−1 | 28.40 | 0.4539 | 2.7 | 2.1 | | HRT | 28.27 | 0.4881 | 1.7 | 1.6 | | HRT12−1 | 28.24 | 0.4152 | 4.2 | 3.1 | Further research into decoder depth and COMET correlation will be conducted. Ablation Study In Table 8, we conduct an ablation study on the En→De task to investigate the contribution of fine-tuning from pre-trained AT (FT), training steps (TS), and curriculum learning (CL). We test two settings about CL: Fixing pk=1 is equivalent to removing auxiliary tasks; Fixing pk=0.5 assigns the same probability to the primary and auxiliary tasks. The results show that all components contribute to the performance, but CL and TS are the most critical, with a reduction of 0.74 and 0.45 BLEU points, respectively. Excluding all components from the vanilla HRT (-ALL) leads to a total reduction of 1.68 BLEU points. Case study Table 9 presents a translation example from En→De validation set. Comparing CMLM5 and HRT, both having the same masking rate (50%), two main distinctions can be observed: (1) The distribution of masked tokens in CMLM is more discontinuous than in HRT (as indicated by the blue marks); (2) The decoder input of HRT contains more accurate target tokens than CMLM, due to the Skip-AT decoder (as indicated by the wavy marks). These differences make our model more effective in producing high-quality translations than CMLM, and suggest that our model can generate appropriate discontinuous sequences. | Source | Also problematic : civil military jurisdiction will continue to be uph@@ eld . Auch problematisch : Die zivile Militär@@ geri@@ chts@@ barkeit soll weiter | | | | |----------------------------------------------------|--------------------------------------------------------------------------------------------|--------------------------------------|--------|----| | Reference | aufrechterhalten bleiben . Problem@@ atisch : | Die | zivile | mil | | itärische Gerichts@@ ✿✿✿✿✿ | | | | | | CMLM10 | barkeit wird weiterhin | | | | | (5th) | aufrechterhalten . [EOS] Auch ✿✿✿✿✿✿✿✿✿✿ problematisch : ✿✿✿ Die zivile Militär@@ ✿✿✿✿✿✿✿✿ | | | | | HRT | geri@@ | ✿✿✿✿✿✿ chts@@ barkeit wird weiterhin | | | | aufrechterhalten werden . [EOS] [EOS] ✿✿✿✿✿✿✿✿✿✿✿✿ | | | | | Table 9: A case study in En→De validation set. **Blue** denotes the original input is [MASK]. We add a wavy line under the target context tokens (black) that hit the reference translation. We also report the CMLM10 in the 5th iteration that has closing mask rate to HRT. ## 8 Conclusion We noted that IR-NAT has robustness issues with inference acceleration. Inspired by our findings in synthetic experiments, we proposed HRT to take advantage of the strengths of both AT and NAT. Our experiments demonstrated that our approach surpasses existing semi-autoregressive and IR-NAT methods, providing competitive performance and consistent speedup, making it a viable alternative to autoregressive translation. ## 9 Limitations The main limitation of HRT is that its upper bound on the inference speedup is lower than that of single-iteration NAT under the same network architecture. 
As demonstrated in Appendix A, the average speedup of single-iteration NAT (i.g., CMLM1) is 4.7x/3.2x on GPU/CPU, respectively, while that of HRT is 1.7x/1.6x. To achieve higher acceleration, HRT needs to employ the deep-encodershallow-decoder architecture. Increasing the chunk size is a simple way to reduce the autoregressive cost, yet it results in severe BLEU degradation (see Table 5). Further research should be conducted to maintain high translation performance with fewer autoregressive prompts. ## Acknowledgements We would like to thank the anonymous reviewers for their helpful comments. We also thank Shuqin Pan for the writing suggestions. ## References Nader Akoury, Kalpesh Krishna, and Mohit Iyyer. 2019. Syntactically supervised transformers for faster neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1269–1281. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, page 41–48, New York, NY, USA. Association for Computing Machinery. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Xinwei Geng, Xiaocheng Feng, and Bing Qin. 2021. Learning to rewrite for non-autoregressive neural machine translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3297–3308, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, and Omer Levy. 2020a. Aligned cross entropy for non-autoregressive machine translation. In ICML 2020: 37th International Conference on Machine Learning, volume 1, pages 3515–3523. Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6112–6121, Hong Kong, China. Association for Computational Linguistics. Marjan Ghazvininejad, Omer Levy, and Luke Zettlemoyer. 2020b. Semi-autoregressive training improves mask-predict decoding. arXiv preprint arXiv:2001.08785. Jiatao Gu, James Bradbury, Caiming Xiong, Victor O.K. Li, and Richard Socher. 2018. Non-autoregressive neural machine translation. In International Conference on Learning Representations. Jiatao Gu and Xiang Kong. 2021. Fully nonautoregressive neural machine translation: Tricks of the trade. In ACL 2021: 59th annual meeting of the Association for Computational Linguistics, pages 120–133. Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In Advances in Neural Information Processing Systems, pages 11179– 11189. Junliang Guo, Xu Tan, Di He, Tao Qin, Linli Xu, and Tie-Yan Liu. 2019. Non-autoregressive neural machine translation with enhanced decoder input. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3723–3730. Junliang Guo, Linli Xu, and Enhong Chen. 2020. 
Jointly masked sequence-to-sequence model for non-autoregressive neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 376–385. Jindˇrich Helcl, Barry Haddow, and Alexandra Birch. 2022. Non-autoregressive machine translation: It's not as fast as it seems. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1780–1790, Seattle, United States. Association for Computational Linguistics. Chi Hu, Bei Li, Yinqiao Li, Ye Lin, Yanyang Li, Chenglong Wang, Tong Xiao, and Jingbo Zhu. 2020. The NiuTrans system for WNGT 2020 efficiency task. In Proceedings of the Fourth Workshop on Neural Generation and Translation, pages 204–210, Online. Association for Computational Linguistics. Fei Huang, Hao Zhou, Yang Liu, Hang Li, and Minlie Huang. 2022a. Directed acyclic transformer for nonautoregressive machine translation. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 9410–9428. PMLR. Xiao Shi Huang, Felipe Perez, and Maksims Volkovs. 2022b. Improving non-autoregressive translation models without distillation. In International Conference on Learning Representations. Lukasz Kaiser, Samy Bengio, Aurko Roy, Ashish Vaswani, Niki Parmar, Jakob Uszkoreit, and Noam Shazeer. 2018. Fast decoding in sequence models using discrete latent variables. In International Conference on Machine Learning, pages 2390–2399. Jungo Kasai, James Cross, Marjan Ghazvininejad, and Jiatao Gu. 2020a. Non-autoregressive machine translation with disentangled context transformer. In ICML, pages 5144–5155. Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah A. Smith. 2020b. Deep encoder, shallow decoder: Reevaluating the speed-quality tradeoff in machine translation. arXiv preprint arXiv:2006.10369. Yoon Kim and Alexander M Rush. 2016. Sequencelevel knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317–1327. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388–395, Barcelona, Spain. Association for Computational Linguistics. Xiang Kong, Zhisong Zhang, and Eduard Hovy. 2020. Incorporating a local translation mechanism into nonautoregressive translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1067–1073, Online. Association for Computational Linguistics. Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1173–1182, Brussels, Belgium. Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neubig, and Eduard Hovy. 2019. Flowseq: Non-autoregressive conditional sequence generation with generative flow. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4273–4283. Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, and Lei Li. 2021. Glancing transformer for nonautoregressive neural machine translation. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1993–2003, Online. Association for Computational Linguistics. Qiu Ran, Yankai Lin, Peng Li, and Jie Zhou. 2019. Guiding non-autoregressive neural machine translation decoding with reordering information. arXiv preprint arXiv:1911.02215. Qiu Ran, Yankai Lin, Peng Li, and Jie Zhou. 2020. Learning to recover from multi-modality errors for non-autoregressive neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3059–3069, Online. Association for Computational Linguistics. Chitwan Saharia, William Chan, Saurabh Saxena, and Mohammad Norouzi. 2020. Non-autoregressive machine translation with latent alignments. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1098–1108. Raphael Shu, Jason Lee, Hideki Nakayama, and Kyunghyun Cho. 2019. Latent-variable nonautoregressive neural machine translation with deterministic inference using a delta posterior. arXiv preprint arXiv:1908.07181. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010. Chunqi Wang, Ji Zhang, and Haiqing Chen. 2018a. Semi-autoregressive neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 479–488. Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. 2019. Learning deep transformer models for machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1810–1822, Florence, Italy. Qiang Wang, Fuxue Li, Tong Xiao, Yanyang Li, Yinqiao Li, and Jingbo Zhu. 2018b. Multi-layer representation fusion for neural machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3015–3026. ## A Detailed Inference Speed In Table 10, we list the exact decoding time and relative speedup ratio of different models under varying environments on the En→De test set. When changing the batch size from 1 to 32, the decoding time of AT reduces 20.4x/4.6x on GPU/CPU, respectively, while that of CMLM10 only reduces 3.7/0.8x. In contrast, HRT inherits the good character of AT and achieves an 18.7x/3.8x speedup. On the other hand, HRT has more robust acceleration than multi-shot NAT, such as CMLM4, CMLM10. When using the deep-encoder-shallow-decoder architecture, HRT12−1 performance approaches the one-shot NAT (CMLM1) on both GPU and CPU. Besides, the overall results of HRT-20L are similar to those of HRT because the translation time is mainly consumed in the decoder. We also report the change of inference speed along with chunk size k. ## B At Transformers In Synthetic Experiments We trained all AT models in the synthetic experiment with the standard Transformer-Base configuration: layer=6, dim=512, ffn=2048, head=8. The difference from Ghazvininejad et al. (2019) is that they trained the AT models for 300k steps, but we updated 50k/100k steps on En→Ro and En→De, respectively. Although fewer updates, as shown in Table 11, our AT models have comparable performance with theirs. ## C Training Algorithm Algorithm 1 describes the training process of HRT. 
The HRT model is pre-initialized by a pretrained AT model (Line 1). Then according to the schedule strategy pk =tT λ, we can divide the training batch B into two parts: Bp for primary tasks and Ba for auxiliary tasks, where |Bp|/|B| = pk (Line 4-5). Next, we construct four kinds of training samples based on corresponding batches: Bat p (TASK-SKIP-AT), Bat a (TASK-AT), Bnat p(TASK-SKIP-CMLM) and Bnat a(TASK-CMLM). Finally, we collect all training samples together and accumulate their gradients to update the model parameters, which results in the batch size being twice that of standard training. | Model | BLEU↑ | B=1 | B=8 | B=16 | B=32 | Avg | | | | | |---------------------|---------|--------|-------|--------|--------|-------|-----|-------|-----|-----| | Time↓ | α ↑ | Time↓ | α ↑ | Time↓ | α ↑ | Time↓ | α ↑ | α ↑ | | | | On GPU | | | | | | | | | | | | AT(raw data) | 27.45 | 857.2 | 1.0 | 137.8 | 1.0 | 73.1 | 1.0 | 40.1 | 1.0 | 1.0 | | AT12−1 | 28.40 | 294.7 | 2.9 | 49.1 | 2.8 | 28.1 | 2.6 | 16.7 | 2.4 | 2.7 | | CMLM1 | 18.05 | 89.4 | 9.6 | 28.8 | 4.8 | 26.3 | 2.8 | 26.2 | 1.5 | 4.7 | | CMLM4 | 25.94 | 223.5 | 3.8 | 59.2 | 2.3 | 52.0 | 1.4 | 52.4 | 0.8 | 2.1 | | CMLM10 | 27.03 | 492.7 | 1.7 | 116.0 | 1.2 | 106.1 | 0.7 | 105.0 | 0.4 | 1.0 | | SAT | 26.47 | 523.0 | 1.6 | 87.1 | 1.6 | 48.0 | 1.5 | 26.2 | 1.5 | 1.6 | | HRT (bat=1, bnat=1) | 28.19 | 377.5 | 2.3 | 66.4 | 2.1 | 34.9 | 2.1 | 20.5 | 2.0 | 2.1 | | HRT | 28.27 | 478.9 | 1.8 | 77.8 | 1.8 | 41.9 | 1.7 | 24.3 | 1.7 | 1.7 | | HRT (bat=5, bnat=5) | 28.50 | 482.4 | 1.8 | 81.2 | 1.7 | 46.5 | 1.6 | N/A | N/A | N/A | | HRT12−1 | 28.24 | 192.5 | 4.6 | 31.4 | 4.3 | 18.4 | 4.0 | 11.1 | 3.7 | 4.2 | | HRT (k=3) | 27.92 | 323.9 | 2.6 | 54.9 | 2.5 | 29.5 | 2.5 | 18.2 | 2.2 | 2.5 | | HRT (k=4) | 27.03 | 256.0 | 3.3 | 43.1 | 3.2 | 23.3 | 3.1 | 12.7 | 3.2 | 3.2 | | On CPU | | | | | | | | | | | | AT (raw data) | 27.45 | 1118.0 | 1.0 | 314.1 | 1.0 | 246.3 | 1.0 | 201.3 | 1.0 | 1.0 | | AT12−1 | 28.40 | 405.4 | 2.8 | 149.0 | 2.1 | 130.4 | 1.9 | 110.7 | 1.8 | 2.1 | | CMLM1 | 18.05 | 207.3 | 5.4 | 116.0 | 2.7 | 97.6 | 2.5 | 85.9 | 2.3 | 3.2 | | CMLM4 | 25.94 | 635.1 | 1.8 | 341.7 | 0.9 | 329.8 | 0.7 | 319.4 | 0.6 | 1.0 | | CMLM10 | 27.03 | 1390.9 | 0.8 | 820.1 | 0.4 | 789.3 | 0.3 | 776.9 | 0.3 | 0.4 | | SAT | 26.47 | 737.5 | 1.5 | 248.7 | 1.3 | 205.6 | 1.2 | 158.9 | 1.3 | 1.3 | | HRT (bat=1, bnat=1) | 28.19 | 457.1 | 2.4 | 116.1 | 2.7 | 82.4 | 3.0 | 65.9 | 3.1 | 2.8 | | HRT | 28.27 | 663.1 | 1.7 | 186.3 | 1.7 | 157.8 | 1.6 | 138.0 | 1.5 | 1.6 | | HRT (bat=5, bnat=5) | 28.50 | 811.0 | 1.4 | 294.5 | 1.1 | 247.6 | 1.0 | 235.2 | 0.9 | 1.1 | | HRT12−1 | 28.24 | 249.6 | 4.5 | 111.5 | 2.8 | 85.1 | 2.9 | 83.9 | 2.4 | 3.1 | | HRT (k=3) | 27.92 | 448.7 | 2.5 | 134.8 | 2.3 | 111.7 | 2.2 | 90.7 | 2.2 | 2.3 | | HRT (k=4) | 27.03 | 360.0 | 3.1 | 111.4 | 2.8 | 85.8 | 2.9 | 71.9 | 2.8 | 2.9 | Table 10: Compare the BLEU score, elapsed time, and relative speedup ratio (α) of decoding En→De *newstest14* under different settings. We use bat=5, bnat=1 and k=2 for HRT unless otherwise stated. HRT(bat=5, bnat=5) cannot decode data with batch size 32 (denoted by N/A) on GPU due to insufficient GPU memory. We bold the best results. Green denotes the result is worse than AT baseline. | AT Transformer | En-Ro | En-De | |-----------------------------|---------|---------| | Vaswani et al. (2017) | - | 27.3 | | Ghazvininejad et al. (2019) | 34.28 | 27.74 | | Our implementation | 34.25 | 27.45 | Table 11: The performance of autoregressive models in the synthetic experiment. 
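The listing that follows is Algorithm 1; as a companion, a rough Python sketch of the same curriculum-scheduled multi-task loop (the `make_samples` constructor and the `model.loss` interface are placeholders of our own, not the released implementation):

```python
def train_hrt(model, optimizer, data_loader, make_samples, total_steps, lam=1.0):
    """Curriculum-scheduled multi-task fine-tuning (mirrors Algorithm 1).

    make_samples(pairs, task) is assumed to build training samples for one of the
    four tasks: "skip_at", "skip_cmlm" (primary) or "at", "cmlm" (auxiliary).
    """
    for t, batch in enumerate(data_loader, start=1):
        if t > total_steps:
            break
        p_k = (t / total_steps) ** lam                       # curriculum schedule
        split = int(len(batch) * p_k)
        primary, auxiliary = batch[:split], batch[split:]    # split the batch
        samples = (make_samples(primary, "skip_at") + make_samples(primary, "skip_cmlm")
                   + make_samples(auxiliary, "at") + make_samples(auxiliary, "cmlm"))
        if not samples:
            continue
        loss = sum(model.loss(s) for s in samples) / len(samples)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```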
Input: Training data D, pretrained AT model Mat, chunk size k, schedule coefficient λ Output: Hybrid-Regressive Translation model Mhrt 1: Mhrt ← Mat ▷ fine-tune on pre-trained AT 2: for t in 1, 2*, . . . , T* do 3: B = ⟨xi, yi⟩|n i=1 ▷ fetch a batch B from D 4: pk ← ( t T ) λ ▷ curriculum learning 5: Bp, Ba ← B -: ⌊n × pk⌋ , B -⌊n × pk⌋ : ▷ split batch for different tasks 6: Bat p, Bnat p ← construct training samples of primary tasks based on Bp 7: Bat a, Bnat a ← construct training samples of auxiliary tasks based on Ba 8: Optimize Mhrt using Bat p ∪ Bat a ∪ Bnat p ∪ Bnat a ▷ joint training 9: **end for** ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Left blank. A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** Left Blank. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Footnote 5 in section 3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 6 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 3 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 
Section 6 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
zhou-etal-2023-commonsense
Commonsense Knowledge Transfer for Pre-trained Language Models
https://aclanthology.org/2023.findings-acl.368
Despite serving as the foundation models for a wide range of NLP benchmarks, pre-trained language models have shown limited capabilities of acquiring implicit commonsense knowledge from self-supervision alone, compared to learning linguistic and factual knowledge that appear more explicitly in the surface patterns in text. In this work, we introduce commonsense knowledge transfer, a framework to transfer the commonsense knowledge stored in a neural commonsense knowledge model to a general-purpose pre-trained language model. It first exploits general texts to form queries for extracting commonsense knowledge from the neural commonsense knowledge model and then refines the language model with two self-supervised objectives: commonsense mask infilling and commonsense relation prediction, which align human language with the underlying commonsense knowledge. Empirical results show that our approach consistently improves the model{'}s performance on downstream tasks that require commonsense reasoning. Moreover, we find that the improvement is more significant in the few-shot setting. This suggests that our approach helps language models better transfer to downstream tasks without extensive supervision by injecting commonsense knowledge into their parameters.
# Commonsense Knowledge Transfer For Pre-Trained Language Models Wangchunshu Zhou∗ 1 Ronan Le Bras 2 Yejin Choi 2 3 1ETH Zurich 2Allen Institute for AI 3Paul G. Allen School of Computer Science & Engineering, University of Washington [email protected] ## Abstract Despite serving as the foundation models for a wide range of NLP benchmarks, pre-trained language models have shown limited capabilities of acquiring implicit commonsense knowledge from self-supervision alone, compared to learning linguistic and factual knowledge that appear more explicitly in the surface patterns in text. In this work, we introduce *commonsense* knowledge transfer, a framework to transfer the commonsense knowledge stored in a neural commonsense knowledge model to a general-purpose pre-trained language model. It first exploits general texts to form queries for extracting commonsense knowledge from the neural commonsense knowledge model and then refines the language model with two selfsupervised objectives: *commonsense mask infilling* and *commonsense relation prediction*, which align human language with the underlying commonsense knowledge. Empirical results show that our approach consistently improves the model's performance on downstream tasks that require commonsense reasoning. Moreover, we find that the improvement is more significant in the few-shot setting. This suggests that our approach helps language models better transfer to downstream tasks without extensive supervision by injecting commonsense knowledge into their parameters. ## 1 Introduction Recent advances in pre-trained language models have transformed the landscape of natural language processing. Self-supervised pre-training objectives including masked language modeling (Devlin et al., 2019) and masked span infilling (Lewis et al., 2020) enable pre-trained models to acquire linguistic (Hewitt and Manning, 2019; Manning et al., 2020) and ![0_image_0.png](0_image_0.png) Figure 1: Illustration of the commonsense knowledge transfer framework. We first extract commonsense knowledge related to sentences in general text corpus from a neural commonsense knowledge model. We then use natural texts and the extracted commonsense knowledge to form self-supervised training data to refine a pre-trained model with commonsense knowledge. factual knowledge (Petroni et al., 2019) by modeling the distribution of naturally occurring texts. However, most of these objectives are limited to exploiting the surface form of human language, and the lack of grounded supervision calls into question how well these representations can ever capture meaning (Bender and Koller, 2020), not to mention the underlying commonsense knowledge which is often reasoned implicitly and does not appear in the surface form of human language (Merrill et al., 2021; Zhou et al., 2020a; Hwang et al., 2021). On the other hand, commonsense reasoning is important for building generalizable models because it enables the model to reason about a great number of events, causes, and effects, while observing only a small fraction of them. The ineffectiveness of self-supervised language model pre-training on acquiring commonsense knowledge makes them require a relatively large number of labeled examples to succeed in a downstream task and prune to overfit task-specific correlations (Tu et al., 2020). Therefore, equipping pre-trained language models with commonsense reasoning ability has attracted much attention. 
To this end, two distinct lines of research focus on improving commonsense reasoning ability of pre-trained language models. The first one focuses on incorporating external commonsense knowledge graph for commonsense reasoning (Lin et al., 2019; Liu et al., 2021; Cui and Chen, 2021) while the other attempts to inject commonsense knowledge into the parameters of pretrained models (Li et al., 2019; Zhou et al., 2021; Klein and Nabi, 2021). In this work we focus on the second type of method because it alleviates the need for external knowledge bases for training and inference on downstream tasks, thus simpler, more efficient, and not limited by the coverage issue of external knowledge bases. Prior work injects commonsense knowledge into pre-trained models either on symbolic commonsense knowledge graphs with manually defined rules (Li et al., 2019) or masked language modeling (Hosseini et al., 2021) or on general text corpus with concept-centric self-supervised objectives (Zhou et al., 2021). The former method is limited by the coverage of knowledge graphs and human-written rules. It also fails to make use of large-scale diverse natural text corpus. Therefore, the training is limited to short and synthetic commonsense tuples, which affects its generalization ability on diverse downstream tasks. The latter method, however, only captures surface-level order relations between concepts and fails to learn commonsense relations between concepts such as cause, effect, intent, requirement, etc., which are crucial for commonsense reasoning but often implicitly reasoned, thus do not appear in the surface form of natural language. In this work, we propose *commonsense knowledge transfer*, an alternative framework to refine a general purpose pre-trained model's commonsense reasoning ability. In contrast to previous work, it aims to transfer the commonsense knowledge stored in a neural commonsense knowledge model (e.g., COMET (Bosselut et al., 2019)) to a general purpose pre-trained model on large scale general text corpus. In this way, our approach combines the best of both worlds from prior art: the dense and informative commonsense knowledge from commonsense knowledge graphs and the accessibility of large-scale diverse general corpus. Commonsense knowledge transfer is conceptually related to knowledge distillation (KD) (Hinton et al., 2015) since they both aim to transfer knowledge from a knowledge-rich model to another model that lacks it. However, different from conventional KD, in commonsense knowledge transfer, the source model (i.e., neural commonsense model) and the target model (i.e., pretrained model) are heterogeneous. Moreover, instead of simply mimicking the teacher model, commonsense knowledge transfer requires the target model to learn specialized knowledge from the source model while retaining its own capability. This poses unique challenges since the knowledge transfer can not be accomplished by simply matching the logits or feature distribution between the student and the teacher. To this end, we propose to first extract commonsense knowledge in textual form from the source model and then exploit the extracted knowledge to form self-supervised training data for the target model. As illustrated in Figure 1, commonsense knowledge transfer first exploits general texts to form queries for retrieving commonsense knowledge from the neural commonsense knowledge model. 
Then it refines a pretrained model with two self-supervised objectives that align the surface form of human language with its underlying commonsense inference: *commonsense text infilling* and *commonsense relation prediction*. The former objective concatenates natural text with its commonsense inference to form an input example, masks certain spans in it, and trains the model to reconstruct the original input. The latter method instead trains the model to distinguish valid commonsense inference from carefully constructed spurious commonsense inference given the original text and commonsense relation. Refining a pre-trained model by multi-tasking on both generation (former) and understanding (latter) tasks enables the model to better adapt to different kinds of downstream tasks. We refine T5 (Raffel et al., 2020) with commonsense knowledge transfer and fine-tune the resulting model downstream tasks requiring commonsense reasoning ability in both the fully supervised setting and few-shot settings where only a percentage of labeled examples are available. Experimental results show substantial improvements in downstream tasks requiring commonsense reasoning, especially in the few-shot setting, demonstrating the effectiveness of our approach. ## 2 Methodology Our proposed commonsense knowledge transfer framework consists of a neural commonsense knowledge model (e.g., COMET) and a pre-trained model (e.g., T5). The goal of commonsense knowledge transfer is to transfer the commonsense knowledge from the neural commonsense knowledge model (i.e., source model) to the pre-trained model (i.e., target model) so that it can generalize better to downstream tasks requiring commonsense reasoning ability. Compared to conventional knowledge transfer methods such as knowledge distillation, commonsense knowledge transfer faces a unique challenge: the source model and the target model are heterogeneous because they are trained on different data with different objectives. As such, we can not simply feed a batch of data to both of the models and train the target model to match the source model's logits or feature distribution. To alleviate this problem, we propose a two-stage knowledge transfer scheme as illustrated in Figure 1. To be specific, we first use natural texts to form queries for retrieving commonsense knowledge (in text form) from the neural commonsense knowledge model. We then construct training data with two novel commonsense-related self-supervised objectives based on the retrieved commonsense knowledge and the corresponding natural text. Finally, we train the target model on the constructed training data to inject commonsense knowledge retrieved from the source model. We describe our method to extract commonsense knowledge from a neural commonsense knowledge model and the proposed commonsense-related self-supervised objectives in detail in this section. ## 2.1 Commonsense Knowledge Extraction We first describe the source model, i.e., neural commonsense knowledge model, in the commonsense knowledge transfer framework. It is a transformer (Vaswani et al., 2017) language model trained on commonsense knowledge graphs like ATOMIC (Sap et al., 2019a) and ConceptNet (Speer et al., 2017) with the objective of predicting the object (i.e., commonsense inference) with the subject (i.e., natural text) and relation as input. For example, given a commonsense tuple (s="take a nap", r=Causes, o="have energy"), the neural commonsense knowledge model is trained to generate o given s and r as inputs. 
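To make this (s, r) → o querying interface concrete, the following is a minimal sketch of how such a prompt could be issued to a COMET-style seq2seq model with the Hugging Face transformers library; the checkpoint path, the relation token format, and the decoding settings are illustrative assumptions rather than the exact configuration used in this work.

```python
# Minimal sketch (not the paper's exact setup): query a COMET-style seq2seq
# commonsense model with a (sentence, relation) prompt to obtain an inference o.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "path/to/comet-atomic-2020-bart"  # placeholder checkpoint name (assumption)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def generate_inference(sentence: str, relation: str, num_beams: int = 5) -> str:
    """Form the prompt 's r' (e.g., 'he wants to cook a meal xNeed') and decode o."""
    prompt = f"{sentence} {relation}"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, num_beams=num_beams, max_new_tokens=24)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(generate_inference("he wants to cook a meal", "xNeed"))  # e.g., "to buy ingredients"
```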
After training, the neural commonsense knowledge model can generate accurate, representative knowledge for new, unseen entities and events. To extract the commonsense knowledge stored in a neural commonsense knowledge model, we use a natural sentence as the subject s (e.g., he wants to cook a meal) and concatenate it with a randomly selected commonsense relation r (e.g., xNeed) from a pre-defined set to form a prompt (e.g., he wants to cook a meal xNeed). We then feed the prompt to the neural commonsense knowledge model and use it to generate a commonsense inference (e.g., to buy ingredients). In this way, the commonsense knowledge generation process resembles the way in which the neural commonsense knowledge model is trained. As such, we can obtain commonsense inferences of relatively high quality.

Using a neural commonsense knowledge model as a knowledge source has two advantages. On one hand, compared to the previous method (Li et al., 2019) using a symbolic commonsense knowledge graph, a neural commonsense knowledge model can generalize to unseen subjects, thus enabling us to refine the target pre-trained model on a large-scale natural text corpus together with its commonsense inferences. As such, the resulting model can better adapt to downstream tasks, which are formulated in diverse natural texts. On the other hand, compared to another method (Zhou et al., 2021) that only uses plain text and is thus limited to the surface form of naturally occurring text, the use of a neural commonsense knowledge model provides much denser commonsense knowledge, including a diverse set of commonsense relations between natural texts and the underlying commonsense knowledge.

## 2.2 Commonsense Knowledge Injection

After commonsense knowledge extraction, we need to inject the extracted commonsense knowledge into the target model. A straightforward solution is to use sequence-level knowledge distillation (Kim and Rush, 2016) and continually train the student to generate the retrieved commonsense inference given the original text and commonsense relation. However, this can be sub-optimal due to the domain discrepancy between commonsense knowledge and natural text, which introduces the catastrophic forgetting problem (Kirkpatrick et al., 2017) and hurts the performance on downstream tasks, as also recently confirmed by Cui and Chen (2021). To better inject the extracted commonsense knowledge into a pre-trained model without suffering from catastrophic forgetting, so that its capability on general NLP tasks is retained (or even improved), we propose two commonsense-related self-supervised objectives: *commonsense text infilling* and *commonsense relation prediction*. The former objective is generative while the latter is discriminative. We refine the pre-trained model by multi-tasking on both objectives so that the model can better adapt to tasks requiring either generative or discriminative commonsense reasoning ability.

![3_image_0.png](3_image_0.png)

**Commonsense Text Infilling** Commonsense text infilling is a simple extension of the conventional text infilling objective used for pre-training BART and T5. It transforms each sentence into a commonsense tuple, similar to that in a commonsense knowledge graph, by appending the commonsense relation and the generated commonsense inference. We then mask text spans in the commonsense tuple by randomly selecting one masking scheme among *text masking*, *commonsense masking*, *bidirectional masking*, and *relation masking*.
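To illustrate how such training examples could be assembled, the sketch below forms the concatenated tuple and applies one of the four masking schemes using T5-style sentinel tokens. For simplicity it masks whole components rather than sampled spans, and the serialization format and the concept-weighted span sampling of the actual objective are omitted, so this is an assumption-laden approximation rather than the exact data construction.

```python
# Sketch of constructing a commonsense text infilling example (assumed serialization).
import random

def build_infilling_example(sentence, relation, inference, scheme=None):
    """Return (input_text, target_text) for one of the four masking schemes."""
    scheme = scheme or random.choice(
        ["text", "commonsense", "bidirectional", "relation"])
    if scheme == "text":            # predict the masked natural text s
        inp = f"<extra_id_0> {relation} {inference}"
        tgt = f"<extra_id_0> {sentence}"
    elif scheme == "commonsense":   # predict the commonsense inference o
        inp = f"{sentence} {relation} <extra_id_0>"
        tgt = f"<extra_id_0> {inference}"
    elif scheme == "bidirectional": # predict both s and o
        inp = f"<extra_id_0> {relation} <extra_id_1>"
        tgt = f"<extra_id_0> {sentence} <extra_id_1> {inference}"
    else:                           # "relation": predict the masked relation r
        inp = f"{sentence} <extra_id_0> {inference}"
        tgt = f"<extra_id_0> {relation}"
    return inp, tgt

print(build_infilling_example("he wants to cook a meal", "xNeed", "to buy ingredients"))
```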
As illustrated in Figure 2, these masking strategies selectively mask different components in the input commonsense tuple and lead to different optimization objectives. Specifically, these masking schemes mask either spans in the natural text ($P(s \mid \tilde{s}, r, o)$), the commonsense inference ($P(o \mid s, r, \tilde{o})$), both the natural text and the commonsense inference ($P(s, o \mid \tilde{s}, r, \tilde{o})$), or the commonsense relation ($P(r \mid s, \tilde{r}, o)$), respectively. We then train the model to predict the masked spans autoregressively. The diverse masking strategies provide more diverse training signals compared to random masking, thus enabling the model to better align the surface form of human language and the underlying commonsense knowledge. In addition, unlike the conventional practice in the masked span infilling objective of randomly masking text spans with the same probability, we propose to mask text spans containing concepts (tokens recognized as nouns or verbs by a spaCy POS tagger) with a higher probability, so that the model is trained to predict concepts more frequently than non-content words that are generally not related to commonsense reasoning.

![3_image_1.png](3_image_1.png)

**Commonsense Relation Prediction** While the commonsense text infilling objective encourages the pre-trained model to align natural texts and their commonsense inferences, it is always trained on *valid* commonsense tuples. This can be suboptimal because we also want the model to be capable of discriminating invalid commonsense inferences, which is important for many commonsense-related downstream tasks. To this end, we introduce a commonsense relation prediction task that trains the model to distinguish the correct commonsense inference, corresponding to the input sentence and the commonsense relation, from distractors. To be specific, the commonsense relation prediction objective is formulated as a multi-choice QA problem with an input sentence as the context, a commonsense relation as the question, and a set of four commonsense inferences as options.
The set of options consists of one correct commonsense inference, which is generated by the neural commonsense model with the input sentence and commonsense relation as input, and three carefully curated distractors (i.e., negative examples) generated by the same neural commonsense knowledge model with different inputs. As illustrated in Figure 3, among the three distractors, one is generated with an input composed of the same sentence and a different commonsense relation, and another two are generated with inputs composed of different sentences with the same commonsense relation. In this way, the model learns to align the natural texts with valid commonsense knowledge while also distinguishing commonsense inferences that do not make sense. Moreover, this objective is formulated as a multi-choice QA task that closely resembles several downstream commonsense-related tasks such as CommonsenseQA and SOCIALIQA, thus enabling easier transfer, especially when labeled training examples are scarce.

| Methods | CSQA | OBQA | PIQA | aNLI | SOCIALIQA | COPA |
|---|---|---|---|---|---|---|
| BERT-base | 53.08(±0.16) | 57.60(±0.8) | 64.86(±0.52) | 61.88(±0.56) | 64.3(±0.4) | 67.3(±0.4) |
| ERNIE-base | 54.06(±0.12) | 58.90(±0.9) | 66.47(±0.58) | 63.04(±0.46) | 65.1(±0.4) | 68.9(±0.4) |
| KnowBERT | 53.88(±0.15) | 58.50(±0.8) | 66.61(±0.63) | 63.18(±0.52) | 65.4(±0.5) | 69.4(±0.4) |
| COMET | 45.32(±0.28) | 51.20(±1.1) | 60.73(±0.51) | 57.63(±0.61) | 60.2(±0.7) | 69.1(±0.5) |
| T5-base | 61.88(±0.08) | 58.20(±1.0) | 68.14(±0.73) | 61.10(±0.38) | 65.1(±0.5) | 71.4(±0.7) |
| T5-base + TI | 62.05(±0.17) | 58.43(±0.8) | 68.32(±0.66) | 61.42(±0.32) | 65.3(±0.4) | 71.8(±0.8) |
| T5-base + SSM | 62.37(±0.25) | 58.60(±0.9) | 68.48(±0.65) | 61.57(±0.44) | 65.5(±0.5) | 72.1(±0.6) |
| T5-base + KD | 61.83(±0.42) | 56.54(±0.7) | 67.35(±0.63) | 60.94(±0.66) | 64.8(±0.5) | 71.0(±1.0) |
| T5-base + CSKG (TI) | 60.22(±0.40) | 56.17(±0.8) | 66.51(±0.57) | 59.92(±0.47) | 62.7(±0.7) | 68.5(±1.1) |
| T5-base + CSKG (Rule) | 63.10(±0.35) | 57.97(±0.8) | 68.27(±0.71) | 60.15(±0.51) | 65.7(±0.4) | 72.4(±0.9) |
| CALM | 63.32(±0.35) | 60.90(±0.4) | 71.01(±0.61) | 63.20(±0.52) | 66.0(±0.5) | 72.2(±0.8) |
| CKT-base | 64.11(±0.31) | 61.58(±0.5) | 72.26(±0.61) | 64.37(±0.49) | 67.3(±0.4) | 73.4(±0.5) |
| CKT w/ GPT-2 | 60.39(±0.61) | 56.95(±0.7) | 68.48(±0.44) | 60.14(±0.52) | 66.2(±0.6) | 72.8(±1.0) |

Table 1: **Experimental results on base-size models.** Best models are bold and second best ones are underlined within each metric. Mean and standard deviation of 3 different runs with different random seeds are reported.

## 3 Experiments

## 3.1 Experimental Settings

**Models** In our experiments we apply commonsense knowledge transfer to refine T5 (Raffel et al., 2019), a popular model pre-trained with the text infilling objective. We experiment with both T5-base and T5-large, which consist of 220 million and 774 million parameters respectively, as the target model in the commonsense knowledge transfer framework. We do not experiment with extremely large models like T5-11B because of resource constraints and the fact that such models are hard to deploy in real-world applications. We use COMET-ATOMIC-2020, a state-of-the-art neural commonsense knowledge model that can generate accurate, representative knowledge for new, unseen entities and events, as the source model.
It is initialized with BART and continually trained on ATOMIC-2020 (Hwang et al., 2021), a new general-purpose commonsense knowledge graph.

**Data** We randomly sample a subset consisting of 10 million sentences from the English Wikipedia and the BookCorpus (Zhu et al., 2015), which is used for pre-training BERT and its variants. We select a set of representative commonsense relations including intent, reason, effect, need, want, and react from the relations used to train COMET-ATOMIC-2020. For each sentence, we randomly sample two relations and retrieve the corresponding commonsense explanation from COMET-ATOMIC-2020. We randomly select one relation-explanation pair to form the input example and leave the other as the distractor for the commonsense relation prediction objective.

**Training** We refine the pre-trained models on the self-supervised examples constructed with the sampled 10 million sentences for 100k steps with a batch size of 1024, a maximum sequence length of 256, and a learning rate of 5e-5/2e-5 for base-size and large-size models respectively, with a linear warm-up for the first 8,000 updates. After knowledge transfer, we fine-tune the models on downstream tasks by formulating the tasks as text-to-text problems. Pre-training and fine-tuning details are included in the Appendix.

**Evaluation** We evaluate the continually pre-trained models on downstream tasks that require commonsense reasoning, including **CommonsenseQA** (Talmor et al., 2018), **OpenbookQA** (Mihaylov et al., 2018), **PIQA** (Bisk et al., 2020), **aNLI** (Bhagavatula et al., 2020), **COPA** (Roemmele et al., 2011), and **SOCIALIQA** (Sap et al., 2019b). In addition to the conventional fully supervised setting, we also test our approach in the few-shot setting by varying the percentage of labeled examples from the original training set used for fine-tuning. The idea is that limited labeled examples can only help the model understand the task but are insufficient for the model to acquire enough commonsense knowledge to solve the task. As such, the model is required to store enough commonsense knowledge in its parameters to succeed in the few-shot setting. For both settings, we report the results on the official development set and tune the hyperparameters based on the models' performance on an in-house split dev set. We report the mean and variance of 3 individual runs with different random seeds because most datasets are relatively small, which makes the variance in results non-negligible.

| Methods | CSQA | OBQA | PIQA | aNLI | SOCIALIQA | COPA |
|---|---|---|---|---|---|---|
| T5-large | 69.81(±1.02) | 61.40(±1.0) | 72.19(±1.09) | 75.54(±1.22) | 71.3(±0.8) | 83.6(±1.1) |
| CALM-large | 71.31(±0.04) | 66.00(±1.0) | 75.11(±1.65) | 77.12(±0.34) | 72.7(±0.7) | 84.9(±1.0) |
| CKT-large | 72.15(±0.61) | 66.70(±1.1) | 76.07(±0.95) | 77.94(±0.59) | 73.8(±0.8) | 86.0(±1.2) |

Table 2: **Experimental results on large-size models.** Best models are bold and second best ones are underlined within each metric. Mean and variance of 3 different runs with different random seeds are reported.

**Baselines** We compare our approach with methods that continually train a pre-trained model with different objectives. We divide the baselines into two categories based on the source of their supervision.
The first category includes methods that only exploit a general text corpus: (1) **T5 + TI**, which continually pre-trains the public checkpoint of T5 with the same text infilling objective for more steps; (2) **T5 + SSM**, which also continually pre-trains T5 with the text infilling objective but uses salient span masking (Roberts et al., 2020) instead of random masking for data construction; (3) **T5 + KD**, which uses sequence-level knowledge distillation (Kim and Rush, 2016) for knowledge transfer, where the student model is trained with the teacher output (i.e., $P(o \mid s, r)$); and (4) **CALM** (Zhou et al., 2021), which uses novel self-supervised objectives to construct concept-centric self-supervision from a general text corpus. The second category instead exploits a CSKG: (5) **T5 + CSKG (TI)**, which trains T5 with the text infilling objective on tuples in a CSKG, and (6) **T5 + CSKG (Rule)** (Li et al., 2019), which uses manually defined rules to construct training examples from a CSKG. We also include a **COMET** baseline where we directly fine-tune the pre-trained COMET-ATOMIC-2020 model on the downstream tasks to verify the necessity of commonsense knowledge transfer, and a **CKT w/ GPT-2** baseline where the commonsense inferences are generated by a pre-trained GPT-2 large model to verify whether the gain comes from transferring the commonsense knowledge from COMET or simply from data augmentation with another generative model. For a fair comparison, we use the same data and training steps as our approach for baselines from the first category, and we use ATOMIC-2020, on which the teacher model in our framework is pre-trained, as the commonsense knowledge graph. For reference, we also include some popular knowledge-enhanced pre-trained models, including ERNIE (Zhang et al., 2019) and KnowBERT (Peters et al., 2019).

![5_image_0.png](5_image_0.png)

## 3.2 Fully-Supervised Results

We first present results in the fully-supervised setting. Results on base-size models are presented in Table 1. We can see that our approach yields significant improvement compared to the T5 baseline (up to 4 absolute points) and consistently outperforms CALM, the state-of-the-art method for injecting commonsense knowledge into PTLMs. In addition, we observe that simply using continual training with the original text-infilling objective or its variant with salient span masking only marginally improves the performance. Surprisingly, training with text infilling on a commonsense knowledge graph leads to degraded performance compared to the T5 baseline.
We suspect this is because the commonsense tuples in commonsense knowledge graphs are generally too short and simple, making the pre-trained model unable to reason within relatively long contexts, which is crucial for most downstream tasks. Moreover, we find that continually pre-training on training data constructed from commonsense tuples in a commonsense knowledge graph following manually designed rules leads to improvements on certain tasks. However, the improvement is inconsistent across different tasks and it even hurts the performance on certain tasks, which may be because the rules for constructing training data are tailored for certain tasks like CSQA. The inferior performance of using commonsense knowledge graphs as data sources also confirms the need for using a natural text corpus during continual pre-training to better adapt to diverse downstream tasks. We also find that directly applying sequence-level KD and training the student to mimic the teacher on the commonsense tuple generation task fails to improve the performance because the task is not general enough and thus cannot transfer well to diverse downstream tasks. Moreover, directly fine-tuning COMET or using GPT-2 as the commonsense knowledge source results in very poor performance. This confirms the necessity of commonsense knowledge transfer and shows that it is actually transferring commonsense knowledge instead of performing simple text augmentation.

To further confirm the effectiveness of commonsense knowledge transfer, we apply it to T5-large and compare it to the competitive baselines from the base-size experiments. The results are presented in Table 2. We can see that our approach consistently outperforms T5-large and CALM-large. This suggests that our approach can successfully generalize to large-size pre-trained models.

| Methods | CSQA | OBQA | PIQA | aNLI | SIQA | COPA |
|---|---|---|---|---|---|---|
| T5-base | 61.88 | 58.20 | 68.14 | 61.10 | 65.1 | 71.4 |
| CKT-base | 64.57 | 62.77 | 73.26 | 64.75 | 68.3 | 73.4 |
| Objective Analysis | | | | | | |
| CKT-base w/o CSTI | 62.58 | 60.97 | 70.61 | 62.11 | 66.5 | 72.0 |
| CKT-base w/o text masking | 62.98 | 61.74 | 72.55 | 63.81 | 67.7 | 72.8 |
| CKT-base w/o commonsense masking | 63.61 | 62.03 | 72.83 | 64.40 | 67.5 | 72.7 |
| CKT-base w/o bidirectional masking | 63.52 | 62.11 | 72.30 | 64.24 | 67.6 | 72.9 |
| CKT-base w/o relation masking | 64.12 | 62.48 | 73.31 | 64.57 | 67.4 | 72.7 |
| CKT-base w/o CSRP | 63.12 | 62.07 | 72.44 | 64.11 | 67.5 | 72.6 |
| CKT-base w/ random distractors | 64.04 | 62.29 | 72.95 | 64.48 | 68.0 | 73.1 |
| Multi-task versus Sequential Transfer | | | | | | |
| CKT-base (CSTI → CSRP) | 64.69 | 62.51 | 73.35 | 64.11 | 67.9 | 73.5 |
| CKT-base (CSRP → CSTI) | 63.49 | 61.33 | 71.54 | 63.41 | 67.0 | 72.0 |
| Corpus Size | | | | | | |
| CKT-base w/ 10% data | 64.18 | 62.21 | 71.86 | 64.31 | 67.7 | 73.1 |
| CKT-base w/ 50% data | 64.45 | 62.66 | 73.10 | 64.72 | 68.2 | 73.4 |

## 3.3 Few-Shot Results

Injecting commonsense knowledge into pre-trained models is important because it enables the model to reason and generalize to unseen examples while observing only a few labeled examples. To this end, we fine-tune the compared models with different fractions of labeled training data to investigate the transition of the behavior of our model and the baselines from the low-resource regime to the fully-supervised setting (Fig. 4).
We observe that the performance improvement of our approach compared to the baselines is more significant in the low-resource regime. This shows that commonsense knowledge transfer can successfully transfer commonsense knowledge into pre-trained models so that they can generalize well while seeing only a small part of training data. This may also help the model reduce the risk/tendency of fitting the spurious correlations in the annotated datasets and thus generalize better. ## 3.4 Analysis To better understand the proposed commonsense knowledge transfer framework and the role of its different components, we conduct an ablation study about the impact of different proposed objectives, the impact of multi-tasking the commonsenserelated self-supervised objective versus sequentially training, and the impact of the size of natural text corpus used for transfer (see Table 3). Impact of Objectives We find that both the proposed objectives contribute to the performance improvement of our approach. The commonsense text infilling objective is shown to be more critical than the commonsense relation prediction task. We suspect this is because commonsense text infilling resembles the vanilla text infilling objective with which the T5 models are pre-trained, thus preventing the model from catastrophic forgetting. In addition, all four masking strategies are beneficial, and their contribution varies for different downstream tasks. This confirms the necessity of a diverse masking scheme. Moreover, our strategy for constructing distractors outperforms the random counterpart, demonstrating the necessity of hard negative examples for the commonsense relation prediction task. Multi-task versus Sequential Transfer As for the training order between the two objectives, we find that starting from the commonsense text infilling task and then switching to the commonsense relation prediction task performs similarly with our multi-tasking strategy while significantly outperforming its counterpart training with the reverse direction. We think this is because the commonsense text infilling objective resembles the original pre-training while the commonsense relation prediction is more similar to downstream tasks. We opt for the multi-tasking strategy for simplicity. Impact of Corpus Size We find that commonsense knowledge transfer significantly outperforms both the T5 baseline and the competitive CALM method with only 10 percent of the full data used for distillation. Nevertheless, the performance improvement also confirms that our approach can benefit from the accessibility of large-scale natural texts. For base-size models, the performance improvements seem to saturate after 10 million sentence pairs. However, we anticipate that larger-size models may still benefit from a larger amount of data, and leave this for future work. ## 4 Related Work Knowledge-augmented Pre-trained Models A number of recent works have examined the problem of incorporating world knowledge with the pretrained models. A number of works use an external knowledge base to incorporate entity knowledge with pre-trained models (Zhang et al., 2019; Peters et al., 2019; Wang et al., 2020; Liu et al., 2020). However, these approaches require specialized resources like knowledge bases which are non-trivial to seek, thus limiting the domain they can be applied to. Xiong et al. (2020) proposed a novel entity replacement detection objective that incorporates Wikipedia to encode world knowledge into a BERTlike pre-trained model. He et al. 
(2020) proposed a generative and discriminative framework that pretrains the model to complete and correct knowledge spans. The aforementioned approaches generally focus on factual knowledge of entities while our work mainly focuses on commonsense knowledge. Commonsense Reasoning for NLP Several recent studies (Talmor et al., 2018; Sap et al., 2019b; Zhou et al., 2020b; Lin et al., 2020; Xu et al., 2021) evaluate the performance of several pre-trained language models on tasks that require commonsense reasoning and find that it is still very hard for pre-trained language models to match or exceed human-level performance. Therefore, approaches to improve the commonsense reasoning ability of pre-trained language models has attracted much attention. These approaches can be divided into two categories. The first category focuses on incorporating an external commonsense knowledge graph for commonsense reasoning. For example, Lin et al. (2019), Cui and Chen (2021), and Liu et al. (2021) propose to exploit structured symbolic commonsense knowledge graphs to perform commonsense reasoning. The second one instead attempts to inject commonsense knowledge into the parameters of pre-trained models. For example, Ye et al. (2019); Li et al. (2019) proposed to use manually designed rules to construct commonsense related training examples from commonsense knowledge graphs. Zhou et al. (2021) instead only relies on general text corpus and proposed two concept-centric self-supervised objectives to refine pre-trained models with commonsense knowledge. ## 5 Conclusion We introduce commonsense knowledge transfer, a framework to transfer the commonsense knowledge stored in a neural commonsense knowledge model to a general-purpose pre-trained model. Our method extracts commonsense knowledge from the source model to construct self-supervised training data for the target model. Empirical results show that our approach consistently outperforms previous methods for improving the commonsense reasoning ability of pre-trained models that exploit either symbolic knowledge graphs or texts alone. ## Limitations In our experiments, we use T5-base and T5-large models as the target model since they are widelyused, representative pre-trained seq2seq models and use COMET-ATOMIC20 20 as the commonsense knowledge source. However, there are other pretrained seq2seq models such as BART, and neural commonsense models such as COMET that we did not experiment with. Moreover, we only experimented with 10 million randomly sampled sentences from the English Wiki and BookCorpus datasets. It would be interesting to investigate whether continually pre-training with a larger scale dataset can further improve the performance. ## Ethical Considerations Our work focuses on improving the commonsense reasoning ability of pre-trained language models. It probably does not introduce extra ethical concerns. However, in commonsense knowledge extraction, the neural commonsense knowledge model may generate unexpected (e.g., biased) commonsense inferences, and training with these inferences may lead to additional bias in the pre-trained model. Nevertheless, all pre-trained language models contain bias and should be examined. ## References Emily M. Bender and Alexander Koller. 2020. Climbing towards NLU: on meaning, form, and understanding in the age of data. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5185–5198. Association for Computational Linguistics. 
Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen-tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. In *ICLR*. Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2020. Piqa: Reasoning about physical commonsense in natural language. In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI). Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: commonsense transformers for automatic knowledge graph construction. In *ACL (1)*, pages 4762–4779. Association for Computational Linguistics. Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representations. In *LREC*. Wanyun Cui and Xingran Chen. 2021. Enhancing language models with plug-and-play large-scale commonsense. *CoRR*, abs/2109.02572. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT (1)*, pages 4171–4186. Association for Computational Linguistics. William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In *IWP@IJCNLP*. WA Falcon. 2019. Pytorch lightning. GitHub. Note: https://github.com/PyTorchLightning/pytorchlightning, 3. Bin He, Xin Jiang, Jinghui Xiao, and Qun Liu. 2020. Kgplm: Knowledge-guided language model pretraining via generative and discriminative learning. CoRR, abs/2012.03551. John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4129–4138. Association for Computational Linguistics. Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR, abs/1503.02531. Pedram Hosseini, David A. Broniatowski, and Mona T. Diab. 2021. Commonsense knowledge-augmented pretrained language models for causal reasoning classification. *CoRR*, abs/2112.08615. Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, and Yejin Choi. 2021. (comet-) atomic 2020: On symbolic and neural commonsense knowledge graphs. In *AAAI*, pages 6384–6392. AAAI Press. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 1317–1327. The Association for Computational Linguistics. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526. Tassilo Klein and Moin Nabi. 2021. Towards zero-shot commonsense reasoning with self-supervised refinement of language models. *CoRR*, abs/2109.05105. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. 
BART: denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In ACL, pages 7871–7880. Association for Computational Linguistics. Shiyang Li, Jianshu Chen, and Dian Yu. 2019. Teaching pretrained models with commonsense reasoning: A preliminary kb-based approach. *CoRR*, abs/1909.09743. Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. Kagnet: Knowledge-aware graph networks for commonsense reasoning. In EMNLP/IJCNLP (1), pages 2829–2839. Association for Computational Linguistics. Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. 2020. Commongen: A constrained text generation challenge for generative commonsense reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 1823–1840. Association for Computational Linguistics. Ye Liu, Yao Wan, Lifang He, Hao Peng, and Philip S. Yu. 2020. Kg-bart: Knowledge graph-augmented bart for generative commonsense reasoning. Ye Liu, Yao Wan, Lifang He, Hao Peng, and Philip S. Yu. 2021. KG-BART: knowledge graph-augmented BART for generative commonsense reasoning. In AAAI, pages 6418–6425. AAAI Press. Christopher D. Manning, Kevin Clark, John Hewitt, Urvashi Khandelwal, and Omer Levy. 2020. Emergent linguistic structure in artificial neural networks trained by self-supervision. *Proc. Natl. Acad. Sci.* USA, 117(48):30046–30054. William Merrill, Yoav Goldberg, Roy Schwartz, and Noah A. Smith. 2021. Provable limitations of acquiring meaning from ungrounded form: What will future language models understand? *CoRR*, abs/2104.10809. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. *arXiv preprint arXiv:1809.02789*. Matthew E Peters, Mark Neumann, Robert L Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A Smith. 2019. Knowledge enhanced contextual word representations. *arXiv preprint* arXiv:1909.04164. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2463–2473. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. *arXiv preprint arXiv:1910.10683*. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In *EMNLP*. Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 5418–5426. Association for Computational Linguistics. 
Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In Logical Formalizations of Commonsense Reasoning, Papers from the 2011 AAAI Spring Symposium, Technical Report SS-11-06, Stanford, California, USA, March 21-23, 2011. AAAI. Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019a. ATOMIC: an atlas of machine commonsense for if-then reasoning. In *AAAI*, pages 3027–3035. AAAI Press. Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019b. Socialiqa: Commonsense reasoning about social interactions. In EMNLP. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In *EMNLP*. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In *AAAI*, pages 4444–4451. AAAI Press. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2018. Commonsenseqa: A question answering challenge targeting commonsense knowledge. *arXiv preprint arXiv:1811.00937*. Lifu Tu, Garima Lalwani, Spandana Gella, and He He. 2020. An empirical study on robustness to spurious correlations using pre-trained language models. Trans. Assoc. Comput. Linguistics, 8:621–633. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *NIPS*, pages 5998–6008. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In *ICLR*. Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Cuihong Cao, Daxin Jiang, Ming Zhou, et al. 2020. K-adapter: Infusing knowledge into pre-trained models with adapters. arXiv preprint arXiv:2002.01808. Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. TACL. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL-HLT. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natural language processing. *ArXiv*, abs/1910.03771. Wenhan Xiong, Jingfei Du, William Yang Wang, and Veselin Stoyanov. 2020. Pretrained encyclopedia: Weakly supervised knowledge-pretrained language model. In *International Conference on Learning* Representations. Canwen Xu, Wangchunshu Zhou, Tao Ge, Ke Xu, Julian J. McAuley, and Furu Wei. 2021. Blow the dog whistle: A chinese dataset for cant understanding with common sense and world knowledge. In *NAACL-HLT*, pages 2139–2145. Association for Computational Linguistics. Zhi-Xiu Ye, Qian Chen, Wen Wang, and Zhen-Hua Ling. 2019. Align, mask and select: A simple method for incorporating commonsense knowledge into language representation models. *CoRR*, abs/1908.06725. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. 
Ernie: Enhanced language representation with informative entities. *arXiv preprint arXiv:1905.07129*. Wangchunshu Zhou, Dong-Ho Lee, Ravi Kiran Selvam, Seyeon Lee, and Xiang Ren. 2021. Pretraining text-to-text transformers for concept-centric common sense. In *ICLR*. OpenReview.net. Xuhui Zhou, Yue Zhang, Leyang Cui, and Dandan Huang. 2020a. Evaluating commonsense in pretrained language models. In *AAAI*, pages 9733–9740. AAAI Press. Xuhui Zhou, Yue Zhang, Leyang Cui, and Dandan Huang. 2020b. Evaluating commonsense in pretrained language models. In *AAAI*, pages 9733–9740. Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 19–27. IEEE Computer Society.

## A Pre-Training And Fine-Tuning Details

## A.1 Pre-Training Details

We implement our models using PyTorch Lightning (Falcon, 2019) and Huggingface's PyTorch Transformers (Wolf et al., 2019). For the pre-training phase, we use the AdamW optimizer with a maximum sequence length of 256, train batch size 8, gradient accumulation 8, warmup steps 8000, weight decay 0.01, and Adam epsilon 1e-6. We train the models with 8 V100 GPUs and FP32 precision. The model is pre-trained for 10 epochs. We searched for the best learning rate for our model out of [5e-6, 2e-5, 5e-5, 1e-4].

## A.2 Fine-Tuning Details

For fine-tuning, we use 4 V100 GPUs and FP32 precision. For all tasks, we use the AdamW optimizer with a learning rate from [1e-5, 2e-5, 5e-5, 1e-4, 2e-4], maximum sequence length 256, and batch size from [4, 8, 16, 32]. For all tasks, we use a warmup fraction of 0.01 and a maximum of 20 epochs. A minimal configuration sketch of this optimizer setup is given below, following Appendix B.2.

## B Additional Analysis

## B.1 Qualitative Analysis

To better understand the proposed method, we present a case study in Figure 5. We can see that both the objectives introduced in the CALM model and the salient span masking (SSM) strategy fail to exploit the underlying commonsense rationale beyond the surface form of texts, while our approach directly aligns texts with the corresponding commonsense inferences under different commonsense relations. This explains why commonsense knowledge transfer can effectively improve a pre-trained model's performance on downstream tasks requiring commonsense reasoning ability.

## B.2 Experimental Results On GLUE

To verify that commonsense knowledge transfer is suitable for general-purpose pre-trained models, we fine-tune our model on the GLUE benchmark (Wang et al., 2019). Specifically, we test on MRPC (Dolan and Brockett, 2005), QQP, and STS-B (Conneau and Kiela, 2018) for Paraphrase Similarity Matching; SST-2 (Socher et al., 2013) for Sentiment Classification; MNLI (Williams et al., 2018), QNLI (Rajpurkar et al., 2016), and RTE (Wang et al., 2019) for Natural Language Inference; and CoLA (Warstadt et al., 2019) for Linguistic Acceptability.

![11_image_0.png](11_image_0.png)

![11_image_1.png](11_image_1.png)

The results are shown in Table 4. We can see that after commonsense knowledge transfer, the resulting model's general natural language understanding ability is comparable to that of the original T5-base model. This shows that our approach does not affect the model's general transfer ability and thus can be applied to general-purpose language models.
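As referenced in Appendix A.2, the following is a minimal sketch of how the reported AdamW and linear warm-up settings could be wired together with standard PyTorch and Hugging Face utilities; the total step count and the exact integration into the training loop are assumptions, not part of the released configuration.

```python
# Rough sketch of the Appendix A optimizer/scheduler setup (values from the appendix;
# the wiring and the total step count are assumptions).
import torch
from transformers import get_linear_schedule_with_warmup

def build_optimizer(model, lr=5e-5, total_steps=100_000, warmup_steps=8_000):
    optimizer = torch.optim.AdamW(
        model.parameters(), lr=lr, weight_decay=0.01, eps=1e-6)
    scheduler = get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps)
    return optimizer, scheduler
```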
## B.3 Experiments With BART

To demonstrate the versatility of commonsense knowledge transfer across different backbones, we conduct additional experiments using BART as the backbone model. The results are shown in Table 5. We can see that commonsense knowledge transfer also consistently improves the BART model, demonstrating the versatility of our approach.

| Methods | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST-2 | STS-B | Meta Score |
|---|---|---|---|---|---|---|---|---|---|
| BERT-base | 58.9 | 84.7 | 89.6 | 91.2 | 90.0 | 71.4 | 93.0 | 90.0 | 83.6 |
| T5-base | 55.9 | 84.5 | 90.3 | 90.5 | 90.2 | 76.2 | 92.8 | 87.8 | 83.5 |
| CKT-base | 57.4 | 84.4 | 90.6 | 90.9 | 89.9 | 76.8 | 92.5 | 88.4 | 83.9 |

Table 4: Experimental results of base-size models on the GLUE benchmark.

| Methods | CSQA | OBQA | PIQA | aNLI | SOCIALIQA | COPA |
|---|---|---|---|---|---|---|
| BART | 72.31 | 65.80 | 74.12 | 78.27 | 71.6 | 85.6 |
| CKT-BART | 73.14 | 68.20 | 76.95 | 79.52 | 73.3 | 87.2 |

Table 5: Experimental results (mean of 3 random runs) with BART.

## B.4 Experiments On CommonGEN

We also experiment on the CommonGEN dataset, a generative commonsense reasoning dataset where the model is required to take several keywords as input and output a sentence that makes sense. The results are shown in Table 6. We can see that our approach performs similarly to the CALM model, which includes the CommonGEN task objective as one of its pre-training tasks.

| Methods | BLEU-4 | METEOR | CIDEr | SPICE |
|---|---|---|---|---|
| T5-base | 24.90 | 31.20 | 12.99 | 32.40 |
| CALM-base | 26.40 | 31.40 | 13.88 | 33.00 |
| CKT-base | 26.20 | **31.40** | 13.65 | **33.10** |

Table 6: Experimental results (mean of 3 random runs) on CommonGEN.

## B.5 Impact Of Pre-Training Data Size

We also conduct experiments to investigate the sample-efficiency of commonsense knowledge transfer. We present the trend of performance improvement in Figure 6. We can see that our method achieves a significant performance improvement upon the T5 baseline with only 10% of the total training data, which confirms the sample-efficiency of commonsense knowledge transfer.

## ACL 2023 Responsible NLP Checklist

## A For Every Submission:

✓ A1. Did you describe the limitations of your work? limitation ✓ A2. Did you discuss any potential risks of your work? ethical statement ✓ A3. Do the abstract and introduction summarize the paper's main claims? abstract and introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B ✓ **Did You Use Or Create Scientific Artifacts?** Experiment

✓ B1. Did you cite the creators of artifacts you used? experiment ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? they're commonly used datasets ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? they're commonly used datasets ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? they're commonly used datasets ✗ B5.
Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? they're commonly used datasets ✗ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. they're commonly used datasets ## C ✓ **Did You Run Computational Experiments?** Experiment ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? experiment The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? experiment ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? experiment ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? experiment D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
iskander-etal-2023-shielded
Shielded Representations: Protecting Sensitive Attributes Through Iterative Gradient-Based Projection
https://aclanthology.org/2023.findings-acl.369
Natural language processing models tend to learn and encode social biases present in the data. One popular approach for addressing such biases is to eliminate encoded information from the model's representations. However, current methods are restricted to removing only linearly encoded information. In this work, we propose Iterative Gradient-Based Projection (IGBP), a novel method for removing non-linear encoded concepts from neural representations. Our method consists of iteratively training neural classifiers to predict a particular attribute we seek to eliminate, followed by a projection of the representation on a hypersurface, such that the classifiers become oblivious to the target attribute. We evaluate the effectiveness of our method on the task of removing gender and race information as sensitive attributes. Our results demonstrate that IGBP is effective in mitigating bias through intrinsic and extrinsic evaluations, with minimal impact on downstream task accuracy.
# Shielded Representations: Protecting Sensitive Attributes Through Iterative Gradient-Based Projection

Shadi Iskander Kira Radinsky Yonatan Belinkov∗
[email protected] [email protected] [email protected]
Technion - Israel Institute of Technology

## Abstract

Natural language processing models tend to learn and encode social biases present in the data. One popular approach for addressing such biases is to eliminate encoded information from the model's representations. However, current methods are restricted to removing only linearly encoded information. In this work, we propose Iterative Gradient-Based Projection (IGBP), a novel method for removing non-linear encoded concepts from neural representations. Our method consists of iteratively training neural classifiers to predict a particular attribute we seek to eliminate, followed by a projection of the representation on a hypersurface, such that the classifiers become oblivious to the target attribute. We evaluate the effectiveness of our method on the task of removing gender and race information as sensitive attributes. Our results demonstrate that IGBP is effective in mitigating bias through intrinsic and extrinsic evaluations, with minimal impact on downstream task accuracy.1

![0_image_0.png](0_image_0.png)

## 1 Introduction

The increasing reliance on natural language processing models in decision-making systems has led to a renewed focus on the potential biases that these models may encode. Recent studies have demonstrated that word embeddings exhibit gender bias in their associations of professions (Bolukbasi et al., 2016; Caliskan et al., 2017) and that learned representations of language models capture demographic data about the writer of the text, such as race or age (Blodgett et al., 2016; Elazar and Goldberg, 2018). Model decisions can be affected by these encoded biases and irrelevant attributes, leading to a wide range of inequities toward certain demographics. For example, a model designed to review job resumes should not factor in the applicants' gender or race. Consequently, it is desirable to be able to manipulate the type of data encoded within text representations and to exclude any sensitive information in order to create more fair and equitable models.

Removing the presence of sensitive attributes from the representations learned by deep neural networks is non-trivial, as these representations are often learned using complex and hard-to-interpret non-linear models. Re-training the language model can be a costly solution; therefore, post-hoc removal methods that work at the representation layer have been proposed, such as linear projection of the embeddings on a hyperplane that distinguishes between the sensitive attribute groups (Bolukbasi et al., 2016; Ravfogel et al., 2020). However, neural networks do not necessarily represent concepts in a linear manner. To address this issue, Ravfogel et al. (2022b) proposed a kernelization of a linear minimax game for concept erasure, but this approach is restricted to the selection of a kernel, and the attribute protection does not transfer to different types of non-linear probes. Accordingly, Ravfogel et al. (2022a,b) considered non-linear concept erasure to be an open problem.

∗Supported by the Viterbi Fellowship in the Center for Computer Engineering at the Technion.
1Code is available at https://github.com/technion-cs-nlp/igbp_nonlinear-removal.
In this paper, we propose a non-linear concept erasure method, IGBP, to eliminate information about the protected attribute from neural representations. We use a trained probe classifier that attempts to predict the protected attribute and a novel loss function suited for the task of concept removal. Then, we leverage the gradients of this loss to guide for projection of the representations to a hypersurface that does not contain information used by the classifier regarding the sensitive-attribute. This is done by projecting the representations to the separating boundary of the classifier. Figure 1 illustrates a 2-dimensional example. Our approach supports the use of non-linear neural classifiers. When used with a linear classifier, it is equivalent to Iterative Null Space Projection (INLP), a popular linear concept removal method (Ravfogel et al., 2020). We perform an empirical evaluation of the proposed method using: (1) intrinsic evaluation of word embeddings measuring word-level gender bias removal and (2) extrinsic fair classification evaluation over tasks that uses contextualized word representations. The empirical results show that the proposed method is successful in sensitiveattribute removal and mitigating bias, outperforming competing algorithms with minimal impact on the downstream task accuracy. ## 2 Related Work Many studies (e.g., Caliskan et al., 2017; Rudinger et al., 2018) investigated social biases in word embeddings and text representations. Recent work have showed how applications that use pre-trained representations reflect and amplify these kinds of social biases (Zhao et al., 2018; Elazar and Goldberg, 2018). The approaches tackling this problem can be categorized into three lines of work: pre-processing methods which manipulate the input distribution before training (e.g., Zhao et al., 2018; Wang et al., 2019), in-processing methods which focus on learning fair models during training (e.g., Xie et al., 2017; Beutel et al., 2017; Zhang et al., 2018; Orgad and Belinkov, 2023) and post-hoc methods (e.g., Ravfogel et al., 2020; Wang et al., 2020; Ravfogel et al., 2022a,b), which assume a fixed, pre-trained set of representations from any encoder and aim to learn a new set of unbiased representations. Since re-training a model can be costly, a lot of focus was given to post-hoc methods, which is the main focus of this work. The most common post-hoc approach to remove sensitive information from word embeddings is to use a linear projection. Bolukbasi et al. (2016) identified a gender subspace, which is a subspace spanned by the directions of embeddings that capture the bias, such as the direction "he" - "she". They suggested projecting all the gender-neutral word embeddings on the gender subspace's first principle component to make neutral words equally distant from male and female-gendered words. However, Gonen and Goldberg (2019) showed that this method only covers up bias and not fully removes it from the representation. Another critical drawback of the method is that it requires user selection of a few gender directions. Ravfogel et al. (2020) tried to overcome this drawback of manually defining gender direction, and presented the Iterative Null-space Projection (INLP) method. It is based on training linear classifiers that predict the attribute they wish to remove, then projecting the representations on the classifiers' null-space. Ravfogel et al. 
(2022a) aims to linearly remove information from neural representations by using a linear minimax game-based approach, and derives a closed-form solution for certain objectives. One of the limitations of linear removal methods is their inability to remove non-linear information about the protected attribute, which is often encoded in text representations through complex neural networks. In contrast, our method is capable of removing both linear and non-linear information, resulting in a more effective reduction of extrinsic bias (Section 4.4). Ravfogel et al. (2022b) proposed a non-linear extension of the concept-removal objective of Ravfogel et al. (2022a). They identify the subspace to be neutralized in kernel space by running a kernelized version of a minimax game as in Ravfogel et al. (2022a). Shao et al. (2023) also use kernels to try to remove non-linear information. While this approach aims to remove non-linear information, it can only choose the data mapping from a pre-defined set of kernels, and as shown in Ravfogel et al. (2022a), the attribute protection does not transfer to other non-linear kernels. Our approach uses a deep neural network to model the bias signal, and thus has the potential to express any non-linear function. Our empirical results (Section 4) show that our method significantly outperforms these methods. ## 3 Approach ## 3.1 Problem Formulation Given a dataset $\mathcal{D}=\{x_{i},y_{i},z_{i}\}_{i=1}^{N}$, which consists of triples of a text representation $x_{i}\in\mathcal{X}$, a downstream task label $y_{i}\in\mathcal{Y}$, and a protected attribute $z_{i}\in\mathcal{Z}$ that takes discrete values (such as gender), our goal is to eliminate the information related to the protected attribute from the representations while minimizing the effect on other relevant information. To achieve this, we intend to learn a non-linear transformation of the representations such that the protected attribute $z_{i}$ cannot be inferred from the transformed representation $x_{i}^{clean}$, while still preserving the information with regard to the downstream task label $y_{i}$. ## 3.2 Adversarial Approach Background The core of our approach is to produce a projection of the representations such that any classifier is unable to distinguish between the protected attribute groups. To gain some intuition about how such projections are generated, let us first consider a trained probe classifier f that classifies the attribute label z of each representation vector x. By assigning adversarial perturbations and moving in the direction of the gradient of the loss function with respect to the input vector, the representations can be modified such that the classifier's ability to predict the protected attribute is hindered, while minimizing the alteration of other relevant information: $$x_{new}=x+\lambda\cdot\nabla_{x}L(f(x),z),\qquad(1)$$ where λ > 0. Elazar and Goldberg (2018) applied a similar approach for the removal of demographic attributes from text data during training. In contrast, we apply our method on the representation layer post-training, with a specific loss function and λ. We present a novel loss for L, which we call the projective loss. It is designed for removing information from neural representations. It allows for a single-step projection of the representations, rendering the probe classifier f oblivious to the protected attribute. Before presenting the projective loss, we explore why the common cross entropy (CE) loss is not optimal for our task.
The CE loss function is defined as: $$L_{CE}(p,y)=\begin{cases}-\log(p)&\text{if }y=1\\ -\log(1-p)&\text{otherwise,}\end{cases}\tag{2}$$ ![2_image_0.png](2_image_0.png) where y ∈ {±1} specifies the ground-truth class and p ∈ [0, 1] is the model's estimated probability for the class with label y = 1. For the sake of clarity, we formally define pt: $$p_{t}=\begin{cases}p&\text{if }y=1\\ 1-p&\text{otherwise,}\end{cases}\tag{3}$$ and rewrite CE(p, y) = CE(pt) = −log(pt). The CE loss can be seen in black in Figure 2. A noteworthy characteristic of this loss is that examples which are considered to have a strong signal of the protected attribute (i.e., are easily classified with pt ≫ 0.5) yield low gradients. In Appendix B we demonstrate mathematically that: $$\nabla_{x}L_{CE}=\pm\,(1-p_{t})\,\nabla_{x}f^{\top}\tag{4}$$ As pt approaches 1, ∇xLCE tends to 0 and the adversarial perturbation associated with the most-biased samples is vanishingly small. Hence, the use of gradients of the CE loss for information removal brings about a major disadvantage. ## 3.3 Projective Loss A more effective way to remove the entire signal of bias in the representations would be to project them on the hypersurface where the classifier is oblivious to the protected attribute. To achieve this, we propose the projective loss: $$L_{P}(p_{t})=\frac{1}{2}\left(\log(p_{t})-\log(1-p_{t})\right)^{2}\tag{5}$$ Figure 2 illustrates the behavior of the projective loss compared to the more common cross entropy loss. As can be observed, the projective loss gives higher weights to examples where the probe classifier can predict the protected attribute well. The minimum occurs at pt = 0.5, where there is ambiguity for the probe classifier in determining the label. Eq. 1 is now modified as: $$x_{p}=x-\lambda_{P}\cdot\nabla_{x}L_{P}(f(x),z),\tag{6}$$ The gradient of the projective loss can be expressed as: $$\nabla_{x}L_{P}=f(x)\,\nabla_{x}f^{\top}\tag{7}$$ We show in Appendix C that Eq. 6 with the projective loss and a specific $\lambda_{P}=\frac{1}{\|\nabla_{x}f\|^{2}}$ yields a projection of the embedding vectors on the local linear model of each embedding. Special Case of a Linear Probe Classifier. We now analyze the special case where f is a linear classifier. Given a linear classifier $f(x)=x^{\top}\theta$, where $\theta\in\mathbb{R}^{d}$, and a logistic function $\sigma(f)=\frac{1}{1+e^{-x^{\top}\theta}}$ to produce the probability pt, we calculate the gradient of the projective loss as: $$\nabla_{x}L_{P}=(x^{\top}\theta)\,\theta^{\top}\tag{8}$$ Normalizing θ by setting $\lambda_{P}=\frac{1}{\|\nabla_{x}f\|^{2}}=\frac{1}{\theta^{\top}\theta}$ in Eq. 6 yields the orthogonal projection formula: $$x_{p}=x-\left(\frac{x^{\top}\theta}{\theta^{\top}\theta}\right)\theta^{\top}\tag{9}$$ This is also known as the null-space projection, which is used in INLP (Ravfogel et al., 2020). INLP is a special case of our method when using a linear probe classifier. Unlike INLP, which obtains the projected embeddings by identifying the null space of a linear classifier, our method utilizes the gradients of neural network classifiers to obtain the projected embeddings. INLP has been shown to be effective in removing sensitive information from neural representations (Ravfogel et al., 2020). However, as highlighted by Kumar et al. (2022), a limitation of this approach is that each step of the projection operation decreases the norm of the representation, leading to its eventual reduction to zero as the number of steps increases.
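A minimal PyTorch sketch of the single-step projection just derived (Eq. 6 with λP = 1/‖∇xf‖²) is given below. For a sigmoid-calibrated probe, log(pt/(1−pt)) equals ±f(x), so LP = ½f(x)² and its gradient is f(x)∇xf (Eq. 7). The snippet is an illustration under these assumptions, not the authors' released implementation.

```python
import torch

def igbp_projection_step(x: torch.Tensor, probe: torch.nn.Module) -> torch.Tensor:
    """One gradient-based projection: x_p = x - grad_x L_P / ||grad_x f||^2.

    grad_x L_P = f(x) * grad_x f (Eq. 7); dividing by ||grad_x f||^2 moves each
    point onto the probe's locally linear decision boundary. With a linear
    probe this reduces to the null-space projection of Eq. 9, i.e. INLP.
    """
    x = x.clone().requires_grad_(True)
    f = probe(x).squeeze(-1)                       # one logit per example
    grad_f = torch.autograd.grad(f.sum(), x)[0]    # per-example grad_x f
    grad_lp = f.detach().unsqueeze(-1) * grad_f    # grad_x L_P (Eq. 7)
    denom = grad_f.pow(2).sum(dim=-1, keepdim=True).clamp_min(1e-12)
    return (x - grad_lp / denom).detach()

# Example with a linear probe: the result matches the classic null-space projection.
probe = torch.nn.Linear(50, 1, bias=False)
x = torch.randn(8, 50)
x_projected = igbp_projection_step(x, probe)
print(probe(x_projected).abs().max())  # ~0: this probe can no longer separate the data
```

With a non-linear probe, the same update applies unchanged; only the gradient computation differs.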
Our proposed method, IGBP, addresses this issue by utilizing a non-linear probe in the projection process, which does not reduce the rank of the representations. Thus, the removal of sensitive information is performed with minimal loss of other information, as demonstrated in Section 4.4. ## 3.4 Iterative Gradient-Based Projection In this section we present our algorithm, Iterative Gradient-Based Projection (IGBP), for removing information of a discrete2 attribute Z for a set of vectors X.

**Algorithm 1:** Iterative Gradient-Based Projection (IGBP)
**Input:** model representations X, protected attribute Z, stopping criterion $S_c$
**Output:** new representations $X_{clean}$, probe list F
  $X_0 \gets X$; $N \gets 0$; $F \gets [\,]$
  **while** (not $S_c$) **do**
    $f \gets$ TrainClassifier($X_N, Z$)
    $F.append(f)$
    $X_{N+1} \gets \{\}$
    **for** $x \in X_N$ **do**
      $x_p = x - \frac{\nabla_x L_P(f(x),z)}{\|\nabla_x f\|^2}$
      $X_{N+1} \gets \{x_p\} \cup X_{N+1}$
    **end for**
    $N \gets N+1$
  **end while**
  **return** $X_N$, F

Algorithm 1 presents the IGBP algorithm, which begins by training a classifier f1 on the original representations X to predict a property Z. The projected representations $X_p^1$ are obtained by applying Eq. 6 to the original representations X. Since there are often multiple hypersurfaces that can capture sensitive attribute information, this process is repeated iteratively, each time using a newly trained classifier on the previous projected representations. The optimal number of iterations and the stopping criteria are determined with metrics such as accuracy or fairness. The relationship between the number of iterations and these metrics is explored in Section 5.2. ## 4 Experiments In this section we compare competing methods for bias removal with the IGBP algorithm in both intrinsic (Section 4.3) and extrinsic evaluations (Section 4.4), which are common in the literature on bias removal. 2This work primarily addresses the removal of discrete protected attributes (e.g., gender) information. However, in Appendix A we show it can be adapted for continuous attributes (e.g., age). ## 4.1 Compared Methods We compare IGBP with several methods for bias mitigation, including a baseline (**Original**) without any concept-removal procedure. **INLP** (Ravfogel et al., 2020), an iterative method that removes the protected information by projecting on the null space of linear classifiers. **RLACE** (Ravfogel et al., 2022a), which removes linear concepts from the representation space as a constrained version of a minimax game where the adversary is limited to a fixed-rank orthogonal projection. **Kernelized Concept Erasure (KCE)** (Ravfogel et al., 2022b), which proposes a kernelization of a linear minimax game for concept erasure. ## 4.2 Setup In each experiment, we utilize a one-hidden-layer neural network with ReLU activation as the attribute classifier for the IGBP algorithm. Then we perform 5 runs of IGBP and competing methods with random initialization and report means and standard deviations. Further details on implementation and hyperparameter tuning are provided in Appendix D. ## 4.3 Intrinsic Evaluation We begin by evaluating our debiasing method on GloVe (Pennington et al., 2014) word embeddings, as it has been previously shown by Bolukbasi et al. (2016) that these embeddings contain unwanted gender biases. Our goal is to remove these biases. We replicate the experiment performed by Gonen and Goldberg (2019) and use the training and test data of Ravfogel et al.
(2020), where the word vectors are labeled with their respective bias: male-biased or female-biased. See Appendix D for more details on the experimental setting. ## 4.3.1 Embeddings Classification After applying the debiasing methods, we follow the evaluation approach proposed by Gonen and Goldberg (2019) and train new classifiers, a linear SVM and a non-linear SVM with RBF kernel, to predict gender from the new representations. We define *leakage* as the accuracy of these classifiers. The results are shown in Table 1.

Table 1: Leakage of gender information from the debiased GloVe embeddings.

| Method   | Linear Leakage ↓ | Non-Linear Leakage ↓ |
|----------|------------|-------------|
| Original | 100±0.00 | 100±0.00 |
| INLP | 55.03±1.29 | 94.42±1.85 |
| RLACE | 53.80±1.37 | 92.53±1.87 |
| KCE | 60.01±0.03 | 96.20±1.30 |
| IGBP | 56.56±4.25 | 69.89±2.81 |

As we can see, all methods are effective at removing linearly encoded information, as the leakage is very low. However, when using non-linear classifiers, all competing methods fail to eliminate leakage, including KCE.3 Even though the adversary classifier used to calculate leakage (SVM-RBF) is different from the ReLU MLP employed in IGBP, our method is still the most effective at removing non-linearly encoded information. The results demonstrate the advantage of IGBP in eliminating non-linear information in word embeddings over competing methods. ## 4.3.2 Weat Analysis The Word Embeddings Association Test (Caliskan et al., 2017) is a measure of bias in static word embeddings, which compares the association of male- and female-related words with stereotypically male or female professions. We follow Gonen and Goldberg (2019) in defining the groups of male- and female-associated words. We represent the gender groups with three categories: (1) art and mathematics; (2) art and science; and (3) career and family. We present the results of the WEAT test in Table 2, including the d-value and the p-value (refer to Caliskan et al. (2017) for further information). We found that IGBP has the most effective debiasing effect on word embeddings compared to other methods.

Table 2: WEAT results (effect size d and p-value).

| Method   | WEAT's d↓   | WEAT's p↑    |
|----------|-------------|--------------|
| Original | 1.57 ± 0.00 | 0.000 ± 0.00 |
| INLP | 1.10 ± 0.10 | 0.016 ± 0.00 |
| RLACE | 0.80 ± 0.01 | 0.062 ± 0.00 |
| KCE | 0.78 ± 0.01 | 0.067 ± 0.00 |
| IGBP | 0.73 ± 0.01 | 0.091 ± 0.00 |
| Original | 1.63±0.00 | 0.000±0.00 |
| INLP | 1.08±0.00 | 0.011±0.00 |
| RLACE | 0.77±0.01 | 0.073±0.003 |
| KCE | 0.74±0.00 | 0.08±0.00 |
| IGBP | 0.19 ± 0.01 | 0.64 ± 0.01 |
| Original | 1.69±0.00 | 0.000±0.00 |
| INLP | 1.15±0.07 | 0.007±0.00 |
| RLACE | 0.78±0.01 | 0.072±0.00 |
| KCE | 0.73±0.01 | 0.090±0.05 |
| IGBP | 0.21 ± 0.00 | 0.330 ± 0.00 |

## 4.3.3 Semantic Similarity Analysis In addition to mitigating bias in word embeddings, it is important to examine if any semantic content was damaged. We perform a semantic evaluation of the debiased word embeddings using SimLex999 (Hill et al., 2015), an annotated dataset of word pairs with human similarity scores for each pair. As displayed in Table 3, IGBP and other methods yield only a slight reduction in correlation. To qualitatively assess the impact of IGBP on semantic similarity in GloVe word embeddings, we provide a random sample of words and their nearest neighbors before and after debiasing in Appendix D.2. We observe minimal change to the nearest neighbors. ## 4.4 Extrinsic Evaluation In this section we focus on evaluating IGBP in the context of classification tasks.
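Throughout the evaluations, *leakage* is measured by re-training attribute probes on the (debiased) vectors, as in Section 4.3.1 above. A minimal scikit-learn sketch of this measurement, with hyper-parameters omitted and random data standing in for real embeddings:

```python
import numpy as np
from sklearn.svm import SVC, LinearSVC

def leakage(train_x, train_z, test_x, test_z):
    """Accuracy of freshly trained attribute probes on (debiased) vectors;
    values near the majority-class rate indicate little recoverable signal."""
    scores = {}
    for name, clf in [("linear", LinearSVC()), ("non-linear", SVC(kernel="rbf"))]:
        clf.fit(train_x, train_z)
        scores[name] = clf.score(test_x, test_z)
    return scores

# Toy usage with random vectors standing in for (debiased) embeddings.
rng = np.random.default_rng(0)
x_tr, x_te = rng.normal(size=(200, 50)), rng.normal(size=(100, 50))
z_tr, z_te = rng.integers(0, 2, 200), rng.integers(0, 2, 100)
print(leakage(x_tr, z_tr, x_te, z_te))
```

Reported leakage values correspond to the accuracies of such freshly trained linear and RBF-kernel probes.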
We focus on tasks where we want to eliminate a concept from the representations to prevent the main classifier from using it, thus ensuring fair classification. ## 4.4.1 Evaluation Metrics To measure extrinsic bias, we calculate the True Positive Rate Gap (*TPR-GAP*), which measures the differences in performance between the different protected attribute groups: $$\mathrm{TPR}_{z,y}=P(\hat{Y}=y\mid Z=z,Y=y)$$ $$\mathrm{GAP}_{\mathrm{TPR}}^{z,y}=\mathrm{TPR}_{z,y}-\mathrm{TPR}_{z^{\prime},y}$$ To assign a single bias measure across all values of y, we follow Romanov et al. (2019) and calculate the root mean square $\mathrm{GAP}_{\mathrm{TPR}}^{z}$ in order to obtain a single bias score over all labels y: $$\mathrm{GAP}_{\mathrm{TPR}}^{z}=\sqrt{\frac{1}{|N|}\sum_{y\in N}\left(\mathrm{GAP}_{\mathrm{TPR}}^{z,y}\right)^{2}}\qquad(10)$$ For example, in a sentiment analysis task, it is important for the model to have equal performance across all demographic groups, as measured by the TPR. This ensures that the model's predictions are fair and not biased towards any particular group. We report two common metrics for measuring bias in representations: (1) *Leakage*, as described in Section 4.3; (2) Minimum Description Length (MDL) Compression (Voita and Titov, 2020), which serves as an indicator of the extent to which certain biases can be extracted from a model's representations (Orgad and Belinkov, 2022). A higher compression score indicates that it is easier to extract the protected attribute from the model's representation. Orgad et al. (2022) found that this metric highly correlates with extrinsic bias metrics. We use a ReLU MLP with two hidden layers of size 512 as the probe classifier. We provide more details about these metrics in Appendix D.3.

Table 3: SimLex999 correlation (ρ) of the debiased embeddings.

| Method   | ρ ↑           |
|----------|---------------|
| Original | 0.400 ± 0.000 |
| INLP | 0.389 ± 0.001 |
| RLACE | 0.389 ± 0.001 |
| KCE | 0.393 ± 0.001 |
| IGBP | 0.387 ± 0.001 |

Table 4: Dataset characteristics. Main classification task, protected attribute, and sizes of training, development, and test sets, in each dataset.
| DIAL | BIOS | | |-----------|--------------|----------------| | Main Task | Sentiment | Profession | | Attribute | Race | Gender | | Size | 100K/ 8K/ 8K | 255K/ 39K/ 43K | | BERT | RoBERTa | | | | | | | | |----------------------|------------|------------|------------|------------|------------|------------|------------|------------| | Method | Acc ↑ | GAPT P R ↓ | Leakage↓ | C ↓ | Acc ↑ | GAPT P R ↓ | Leakage ↓ | C ↓ | | Original | 79.89±0.06 | 15.55±0.16 | 99.32±0.11 | 30.81±0.18 | 79.08±0.05 | 19.26±0.40 | 97.25±0.11 | 11.09±0.00 | | INLP | 75.65±0.03 | 13.52±0.13 | 95.77±1.42 | 7.76±0.60 | 76.75±0.05 | 10.71±0.05 | 81.29±1.04 | 1.78±0.03 | | RLACE | 79.77±0.07 | 13.54±0.13 | 98.55±0.19 | 13.31±0.99 | 78.57±0.07 | 11.82±0.27 | 90.87±1.90 | 2.80±0.19 | | KCE | 78.16±0.05 | 13.65±0.12 | 97.35±0.15 | 11.67±1.01 | 78.54±0.04 | 13.94±0.18 | 96.60±0.21 | 6.57±0.84 | | IGBP | 78.80±0.19 | 9.87±0.25 | 69.72±2.56 | 1.66±0.08 | 77.49±0.04 | 9.45±0.04 | 65.71±0.44 | 1.54±0.01 | | (a) Frozen models | | | | | | | | | | BERT | RoBERTa | | | | | | | | | Method | Acc ↑ | GAPT P R ↓ | Leakage↓ | C ↓ | Acc ↑ | GAPT P R ↓ | Leakage ↓ | C ↓ | | Original | 85.15±0.04 | 13.45±0.11 | 98.49±0.02 | 13.58±0.08 | 84.09±0.10 | 14.57±0.16 | 99.02±0.01 | 17.28±0.00 | | INLP | 85.08±0.03 | 12.71±0.04 | 97.08±0.00 | 6.01±0.00 | 83.78±0.05 | 14.18±0.10 | 97.74±0.80 | 10.42±0.01 | | RLACE | 85.12±0.04 | 12.93±0.14 | 98.26±0.05 | 8.87±0.01 | 83.85±0.10 | 14.21±0.05 | 98.84±0.02 | 11.31±0.01 | | KCE | 84.86±0.03 | 12.81±0.12 | 98.70±0.04 | 9.43±0.01 | 83.94±0.04 | 14.30±0.08 | 98.33±0.02 | 13.04±0.02 | | IGBP | 83.70±0.05 | 9.63±0.18 | 65.47±0.40 | 1.54±0.01 | 82.88±0.13 | 10.78±0.10 | 65.73±0.40 | 1.53±0.01 | | (b) Finetuned models | | | | | | | | | ## 4.4.2 Datasets We experiment with the following two datasets (Table 4 provides a brief summary): Bios. The Bias in Bios dataset (De-Arteaga et al., 2019) contains 394K biographies. The task is to predict a person's occupation (out of 28 professions) based on their biography. Gender annotations are provided for each biography, and we aim to eliminate any gender-related information encoded in the representations. We split to training, development, and test sets following De-Arteaga et al. (2019). The pre-trained BERT model (Devlin et al., 2019) is used as the encoder and the final hidden layer's [CLS] token is used as a representation for the biography. To ensure that the results are not model-specific, the experiment is replicated using the pre-trained RoBERTa model (Liu et al., 2019) as the encoder. Additionally, the experiment is conducted with fine-tuned models. DIAL. Dialectal tweets (DIAL) is a corpus of tweets collected by Blodgett et al. (2016), where the task is to predict the sentiment of the tweet (positive or negative). Each tweet is associated with the sociolect of the author (African American English or Standard American English), which is a proxy for the racial identity of the author. Following Ravfogel et al. (2020) setup, we filter the corpus and split the data into training, development, and test sets. We use The DeepMoji model (Felbo et al., 2017) as an encoder to produce representations. ## 4.4.3 Results Bios. The results from the Bias in Bios experiment are summarized in Table 5. With both BERT and RoBERTa frozen pre-trained models (Table 5a), it can be observed that while INLP reduces TPR-GAP, it degrades overall performance in the process. This may be due to INLP's limitation of decreasing representation's rank each step. 
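For reference, the TPR-GAP values reported in these tables follow the definitions in Section 4.4.1 (Eq. 10). A small sketch of the computation for a binary protected attribute (illustrative only, not the authors' evaluation script):

```python
import numpy as np

def tpr_gap_rms(y_true, y_pred, z):
    """Root-mean-square TPR gap across labels, for a protected attribute z in {0, 1}.

    For each label y, compute the TPR within each group and take the gap;
    the final score is the RMS of the per-label gaps (Eq. 10)."""
    gaps = []
    for y in np.unique(y_true):
        tprs = []
        for group in (0, 1):
            mask = (y_true == y) & (z == group)
            tprs.append((y_pred[mask] == y).mean() if mask.any() else 0.0)
        gaps.append(tprs[0] - tprs[1])
    return float(np.sqrt(np.mean(np.square(gaps))))

# Toy usage with random predictions.
rng = np.random.default_rng(0)
y_true, y_pred = rng.integers(0, 3, 1000), rng.integers(0, 3, 1000)
z = rng.integers(0, 2, 1000)
print(tpr_gap_rms(y_true, y_pred, z))
```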
RLACE and KCE lead to a reduction in the TPR-GAP, but the value remains elevated. On the other hand, our proposed method IGBP significantly reduces the TPR-GAP while only causing a slight decrease in main task accuracy. Furthermore, in terms of intrinsic bias, IGBP is distinguished by its effectiveness at decreasing non-linear leakage and compression. The results with fine-tuned models (Table 5b) show a similar trend: other competing methods only exhibit a minimal reduction in TPR-GAP, whereas our approach, IGBP, succeeds in enhancing fairness and eliminating leakage.

Table 6: Results on the DIAL dataset.

| Method   | Accuracy↑   | GAP_TPR ↓    | Leakage↓   | C ↓       |
|----------|-------------|--------------|------------|-----------|
| Original | 73.89±0.04 | 30.19±0.02 | 75.67±0.11 | 2.14±0.00 |
| INLP | 69.59±1.14 | 17.59±0.77 | 62.28±1.24 | 1.63±0.00 |
| RLACE | 72.98±0.34 | 13.53±1.89 | 61.92±1.67 | 1.62±0.00 |
| KCE | 72.92±0.24 | 29.25±0.81 | 73.63±1.66 | 2.12±0.00 |
| IGBP | 72.87±0.31 | 9.23±0.04 | 56.53±3.57 | 1.43±0.00 |

DIAL. Table 6 presents a summary of the results obtained on the DIAL dataset. The results show that applying IGBP leads to a significant reduction in the TPR-GAP, with a statistically significant difference compared to the other methods, while maintaining a level of accuracy comparable to the original model. In terms of intrinsic evaluation of the representations, both INLP and RLACE reduce leakage and compression, but not to the same extent as IGBP, while KCE fails to reduce bias. On the whole, we found that our proposed method outperforms competing methods empirically in terms of reducing extrinsic and intrinsic bias, and offers a more balanced accuracy–fairness tradeoff. ## 5 Analysis We conduct a series of analyses of our proposed method: an examination of the impact of probe complexity on debiasing and an analysis of the effect of the number of iterations on performance. ## 5.1 Effect Of Probe Complexity IGBP proved to be superior to linear information-removal methods in the experiments presented in Section 4. To further investigate the potential of reducing bias, we will explore the use of more complex non-linear probes by varying the width and depth of the neural network used as a probe in IGBP.4 Figure 3 shows the TPR-GAP score after applying 50 iterations of IGBP on the Bias in Bios dataset.5 As we can see, there is a noticeable reduction in TPR-GAP when using non-linear probes instead of a linear probe. Applying IGBP with a growing complexity of probe classifiers (moving from left to right) also results in a lower TPR-GAP. However, the reduction is not significant. We also report that the more complex the probe, the greater the accuracy drop, but not at a significant value: the maximum accuracy drop was 1.20%. 4For further details on the probe architectures used, see Appendix E.2. 5In Appendix E we show similar results on the DIAL dataset. To conclude, based on the results in Section 4 and this experiment, a one-hidden-layer probe is enough to reduce bias related to gender and race in text representations. Using more complex probes may offer some additional benefits, but the improvement will be limited. ## 5.2 Performance–Fairness Tradeoff One of the key factors that influences the effectiveness of our method is the number of iterations used.
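As a toy illustration of how much attribute information survives as iterations accumulate, the linear special case of IGBP (i.e., INLP, Section 3.3) can be simulated on synthetic data: each round trains a fresh probe, records its accuracy, and projects onto its null space. This is only a sketch with made-up data, not the experiment reported next, which uses a non-linear probe and tracks TPR-GAP and downstream accuracy on real datasets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x = rng.normal(size=(2000, 50))
z = (x[:, 0] + 0.5 * x[:, 1] > 0).astype(int)   # synthetic protected attribute

for iteration in range(1, 7):
    probe = LogisticRegression(max_iter=1000).fit(x, z)
    print(f"iteration {iteration}: probe accuracy = {probe.score(x, z):.3f}")
    theta = probe.coef_[0]
    x = x - np.outer(x @ theta, theta) / (theta @ theta)   # null-space projection (Eq. 9)
```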
Varying the number of iterations and measuring the resulting changes in the TPR-GAP and downstream task accuracy on the DIAL development set shows that the number of iterations had a significant impact on attribute removal in the early stages (Figure 4), but eventually reached a plateau. Increasing the number of iterations also harmed downstream task accuracy, but the decrease was gradual. A similar experiment on Bias in Bios (Appendix E) ![7_image_1.png](7_image_1.png) showed the same trend. The results suggest that the balance between performance and fairness can be controlled by adjusting the number of iterations or by implementing appropriate stopping criteria. ## 6 Conclusion We presented a gradient-based method for the erasure of non-linearly encoded concepts in text representations. Its ability to remove non-linear information makes it particularly useful for addressing the complex biases that may be present in text representations learned through complex models. We empirically show the effectiveness of our approach to mitigate social biases in representations, thereby improving fairness in models' decision-making. Beyond mitigating bias, the Iterative GradientBased Projection method has the potential to be applied in a wide range of other contexts, such as increasing model interpretability by applying causal interventions, adapting models to new domains by removing domain-specific information and ensuring privacy by removing sensitive information. In future work, we plan to explore these and other potential applications of the proposed method. ## Limitations The proposed method has limitations in its dependence on the accuracy and performance of the probe classifier as noted in (Belinkov, 2022), and may be limited in scenarios where the dataset is small or lacks sufficient information about the protected attribute. Additionally, this approach increases inference time due to the use of a sequential debiasing classifiers. In future work, we aim to find a single probe that eliminates non-linear leakage. Finally, the proposed method aims to eliminate information about a protected attribute in neural representations. While it may align with fairness metrics such as demographic parity, it is not specifically designed to ensure them. ## Ethical Considerations Ethical considerations are of utmost importance in this work. It is essential to exercise caution and consider the ethical implications when using this method, as it has the potential to be applied in situations where fair and unbiased decision-making is critical. It is important to thoroughly evaluate the effectiveness of the method in the specific context in which it will be used, and to carefully consider the data, fairness metrics, and overall application before deploying it. It is worth noting that our method is limited by the fact that gender is a nonbinary concept and that it does not address all forms of bias, and further research is necessary to identify and address these biases. Additionally, it is important to consider the potential risk of inadvertently increasing bias through reversing the direction of the debiasing operation in the algorithm. It is crucial to be mindful of the potential impact of this method and to approach its use with caution and care. ## Acknowledgment This project was supported by an AI Alignment grant from Open Philanthropy, the Israel Science Foundation (grant No. 448/20), and an Azrieli Foundation Early Career Faculty Fellowship. ## References Yonatan Belinkov. 2022. 
Probing classifiers: Promises, shortcomings, and advances. *Computational Linguistics*, 48(1):207–219. Alex Beutel, Jilin Chen, Zhe Zhao, and Ed H. Chi. 2017. Data decisions and theoretical implications when adversarially learning fair representations. *CoRR*, abs/1707.00075. Su Lin Blodgett, Lisa Green, and Brendan O'Connor. 2016. Demographic dialectal variation in social media: A case study of African-American English. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pages 1119–1130, Austin, Texas. Association for Computational Linguistics. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. Advances in neural information processing systems, 29. Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. *Science*, 356(6334):183–186. Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. In *Proceedings of the Conference on Fairness, Accountability, and Transparency*, FAT* '19, page 120–128, New York, NY, USA. Association for Computing Machinery. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 11– 21. Association for Computational Linguistics. Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In *Conference on Empirical Methods in Natural Language Processing (EMNLP)*. Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Workshop on Widening NLP@ACL 2019, Florence, Italy, July 28, 2019, pages 60–63. Association for Computational Linguistics. Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. *Computational Linguistics*, 41(4):665–695. Abhinav Kumar, Chenhao Tan, and Amit Sharma. 2022. Probing classifiers are unreliable for concept removal and detection. In ICML 2022: Workshop on Spurious Correlations, Invariance and Stability. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In *International Conference on Learning Representations*. Michael Mendelson and Yonatan Belinkov. 2021. 
Debiasing methods in natural language understanding make bias more accessible. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1545–1557, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Hadas Orgad and Yonatan Belinkov. 2022. Choose your lenses: Flaws in gender bias evaluation. In *Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)*, pages 151–167, Seattle, Washington. Association for Computational Linguistics. Hadas Orgad and Yonatan Belinkov. 2023. Debiasing NLP models without demographic information. In Proceedings of the 61th Annual Meeting of the Association for Computational Linguistics, ACL 2023, July 9-14, 2023. Association for Computational Linguistics. Hadas Orgad, Seraphina Goldfarb-Tarrant, and Yonatan Belinkov. 2022. How gender debiasing affects internal model representations, and why it matters. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2602–2628, Seattle, United States. Association for Computational Linguistics. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12:2825–2830. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7237–7256. Association for Computational Linguistics. Shauli Ravfogel, Michael Twiton, Yoav Goldberg, and Ryan Cotterell. 2022a. Linear adversarial concept erasure. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of *Proceedings of* Machine Learning Research, pages 18400–18421. PMLR. Shauli Ravfogel, Francisco Vargas, Yoav Goldberg, and Ryan Cotterell. 2022b. Adversarial concept erasure in kernel space. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language* Processing, pages 6034–6055, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, and Adam Kalai. 2019. What's in a name? Reducing bias in bios without access to protected attributes. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4187–4195, Minneapolis, Minnesota. Association for Computational Linguistics. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pages 8–14. Association for Computational Linguistics. Shun Shao, Yftah Ziser, and Shay B. Cohen. 2023. Gold doesn't always glitter: Spectral removal of linear and nonlinear guarded attribute information. In *Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics*, pages 1611–1622, Dubrovnik, Croatia. Association for Computational Linguistics. Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(11). Elena Voita and Ivan Titov. 2020. Information-theoretic probing with minimum description length. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 183–196, Online. Association for Computational Linguistics. Tianlu Wang, Xi Victoria Lin, Nazneen Fatema Rajani, Bryan McCann, Vicente Ordonez, and Caiming Xiong. 2020. Double-hard debias: Tailoring word embeddings for gender bias mitigation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5443–5453, Online. Association for Computational Linguistics. Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, and Vicente Ordonez. 2019. Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 5309–5318. IEEE. Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, and Graham Neubig. 2017. Controllable invariance through adversarial feature learning. Advances in neural information processing systems, 30. Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with adversarial learning. In *Proceedings of the 2018 AAAI/ACM* Conference on AI, Ethics, and Society, pages 335– 340. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In *Proceedings of the 2018 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pages 15–20. Association for Computational Linguistics. ## Appendix an activation function:6 ## A Continuous Attributes While this work focuses on discrete attribute information-removal, we explain briefly how it can be adapted for regression problems, where the attribute is continuous (e.g., age). In discrete attribute classification tasks, Given that f is the classifier, IGBP is designed to transform each vector x to x′ onto the decision boundary of f such that f(x′) = 0. In the continuous case, where f is the attribute regressor, IGBP aims to achieve a similar result, with the goal of projecting each vector x onto a point x′such that f(x′) = 0. Hence, each input, x, is regressed to a non-informative value of zero, meaning that the input is stripped of its information content. ## B Reversal Gradient Of Cross Entropy Let us consider a non-linear model f(x) followed by a logistic function to obtain the probabilty p = 1 1+e f(x). 
Then the gradient of $L_{CE}(p_t)$ when y = 1 is: $$\nabla_{x}L_{CE}=\frac{\partial L_{CE}}{\partial p_{t}}\frac{\partial p_{t}}{\partial f}\frac{\partial f}{\partial x}\tag{11}$$ $$=\frac{-1}{p_{t}}\;p_{t}(1-p_{t})\;\nabla_{x}f$$ $$=-(1-p_{t})\;\nabla_{x}f$$ and ∇xLCE = +(1 − pt) ∇xf when y = −1. ## C Local Linear Model Projection We will now show how the projective loss update step projects each sample to its local linear model boundary. This will facilitate the probe being oblivious to the protected attribute. Local Linearity. First, we will show that a trained ReLU neural net probe divides the embedding space into sub-regions, where in each sub-region it behaves as a linear model. We will demonstrate that we can obtain the local linear model for each embedding. Let us consider a non-linear probe composed of one hidden layer with ReLU as an activation function:6 $$z=x^{\top}W,$$ $$h=ReLU(z),$$ $$f=h^{\top}\theta,$$ $$p=\frac{1}{1+e^{-f(x)}}\tag{12}$$ The activation function ReLU acts as an element-wise scalar (0 or 1) multiplication, hence h can be written as: $$h=a\odot z\tag{13}$$ where a is a vector with (0,1) entries indicating the slopes of ReLU in the corresponding linear regions that z falls into. Let us define a diagonal matrix D: $$D=\mathrm{diag}(a)\tag{14}$$ Then, $$h=Dz\tag{15}$$ since doing element-wise multiplication with a vector a is the same as multiplication by the diagonal matrix D. It is now possible to express the output in each sub-region in matrix form as follows: $$f=h^{\top}\theta=(DWx)^{\top}\theta=x^{\top}(DW)^{\top}\theta\tag{16}$$ D expresses the ReLU function, so naturally it depends on Wx, but since the weights of the probe are frozen/constant and we are doing the calculation for the sub-region where the slope of the ReLU function is constant, we can assume that D is not dependent on x in this sub-region. Thus, in each sub-region r defined by the classifier, the local linear model for this sub-region is θr, defined below: $$\theta_{r}=(DW)^{\top}\theta\tag{17}$$ We can obtain the vector θr with the gradient of f: $$\nabla_{x}f=(DW)^{\top}\theta\tag{18}$$ 6The extension for multiple hidden layers and different piece-wise linear activation functions is straightforward. Applying the chain rule with $L_P(p_t)$ as in Eq. 11 for each sub-region: $$\nabla_{x}L_{P}=\frac{\partial L_{P}}{\partial p_{t}}\frac{\partial p_{t}}{\partial p}\frac{\partial p}{\partial f}\frac{\partial f}{\partial x}$$ $$=\frac{\log(\frac{p_{t}}{1-p_{t}})}{p_{t}(1-p_{t})}(-1)^{y}p(1-p)\nabla_{x}f^{\top}$$ $$=(-1)^{-y}f(x)(-1)^{y}\nabla_{x}f^{\top}$$ $$=f(x)\nabla_{x}f^{\top}$$ $$=x^{\top}(DW)^{\top}\theta\left((DW)^{\top}\theta\right)^{\top}\quad\text{(using Eqs. 16 and 18)}$$ $$=(x^{\top}\theta_{r})\,\theta_{r}^{\top}\tag{19}$$ Again, we can obtain θr from the gradient and scale the term by $\frac{1}{\theta_{r}^{\top}\theta_{r}}$ to get the linear projection of each sub-region to its linear model null space: $$x_{p}=x-\left(\frac{x^{\top}\theta_{r}}{\theta_{r}^{\top}\theta_{r}}\right)\theta_{r}^{\top}\tag{20}$$ ## D Experiment This section provides additional details on the experimental setup and results. ## D.1 Implementation Details IGBP stopping criteria. In order to balance the trade-off between reducing extrinsic and intrinsic bias while preserving accuracy (as can be seen in Section 5.2), we have established a stopping criterion for our proposed method, IGBP.
The criterion is based on two factors: the accuracy of a newly trained probe classifier on the protected attribute, and the main task accuracy on the development set. Specifically, we run Algorithm 1 until the newly trained probe classifier achieves an accuracy within 2% above the majority-class accuracy, or until the main task accuracy on the development set drops below a threshold of 0.98 of the original main task accuracy. Through empirical analysis on the development set, we have determined that this threshold yields good results for all extrinsic evaluation experiments. However, it is worth noting that this stopping criterion may be adjusted based on specific requirements for each case. IGBP classifier type. For all experiments, we use a ReLU MLP as the attribute classifier with a single hidden layer of the same size as the input dimension. We train the classifier with the AdamW optimizer (Loshchilov and Hutter, 2018) with a learning rate of 2e−4 and a batch size of 256. Applying the algorithm takes about 0.5–1 hour for training on the DIAL dataset and 1–3 hours on Bias in Bios, on an NVIDIA GeForce RTX 2080 Ti. Competing methods: implementation and hyperparameters. For competing methods, we follow their implementations, which are available online.7 We run the algorithms until the specific type of leakage they were trying to eliminate was no longer present. For KCE we choose the RBF kernel, following the selection in their paper for the Bias in Bios task. We tried multiple kernels but found that RBF yields better results. The results of RLACE are different from those in the original paper because they used only the first 100K training samples and applied a PCA transformation to reduce dimensions down to 300 due to the high computation time. However, we wanted to make a fair comparison, so we did not reduce the size of the training set or the dimensionality. Models. We used the pre-trained BERT and RoBERTa base models released by Huggingface, which have 110M and 123M parameters. They were fine-tuned on the profession prediction task in Bias in Bios using a stochastic gradient descent (SGD) optimizer with a learning rate of 5e−4, weight decay of 1e−6, and momentum of 0.90. We trained for 30,000 batches of size 10. ## D.2 Glove Word Embeddings Experiment We provide details about the experimental settings of the static word vectors experiment in Section 4.3. We follow Ravfogel et al. (2020) and use uncased GloVe word embeddings of the 150,000 most common words. We project all vectors on the $\vec{he}-\vec{she}$ direction, and select the 7500 most male-biased and female-biased words. Using the same training–development–test split as Ravfogel et al. (2020), we subtract the gender-neutral words and end up with a training set of 7350, an evaluation set of 3150, and a test set of 4500. ## D.2.1 Additional Intrinsic Evaluation We evaluate bias-by-neighbors, which was proposed by Gonen and Goldberg (2019), on the list of professions from Bolukbasi et al. (2016). We determine the correlation between bias-by-projection and bias-by-neighbors by calculating the percentage of the top 100 neighboring words for each profession that were originally biased-by-projection towards a specific gender. 7https://github.com/shauli-ravfogel/nullspace_projection, https://github.com/shauli-ravfogel/rlace-icml, https://github.com/shauli-ravfogel/adv-kernel-removal Our results show a mean correlation of 0.598, which is lower than the previous correlation of 0.852. In comparison, after applying INLP we find a correlation of 0.73.
This suggests that while some bias-by-neighbors still remains, the debiasing effect of IGBP is significant. ## D.2.2 Nearest Neighbors We demonstrated in Section 4.3.3 that debiasing using IGBP did not cause significant harm to the GloVe word embedding space as per the SimLex999 test results. To further support this, in Table 7, we present the closest neighbors to 10 randomly sampled words from the vocabulary, both before and after our debiasing procedure, as a qualitative illustration. ![13_image_0.png](13_image_0.png) ![13_image_1.png](13_image_1.png) ## D.3 Extrinsic Evaluation Experiments D.3.1 Metrics We provide additional details on the metrics used in Section 4.4 Main task model. We use sklearn's SVM (Pedregosa et al., 2011) for the main task predictions on DIAL experiment, and sklearn's logistic regression for Bias in Bios which is a multi-label classification task. Leakage and MDL Compression. MDL is an information-theoretic probing which measures how efficiently a model can extract information about the labels from the inputs . In this work, we employ the online coding approach (Voita and Titov, 2020) to calculate MDL. We estimate MDL following Voita and Titov's online coding L*online* and calcuate the **compression**, C, which is compared against uniform encoding L*unif orm* which does not require any learning from data. $$C={\frac{L_{u n i f o r m}}{L_{o n l i n e}}}$$ We evaluate our models using an online code probe, which is trained on fractions of the training dataset: [2.0, 3.0, 4.4, 6.5, 9.5, 14.0, 21.0, 31.0, 45.7, 67.6, 100]. Then we calculate leakage as the probe's accuracy on test set when trained on the entire training set. We use a MLP with two-hidden layer of size 512 and ReLU activation as the probe classifier. This decision was made to stay consistent with previous work which employed MDL (Mendelson and Belinkov, 2021) and to have a different and more powerful adversary than the one used in IGBP . ## E Analysis E.1 Biographies Representation We present the t-SNE (Van der Maaten and Hinton, ![13_image_2.png](13_image_2.png) 2008) projections of the biographies representations of BERT before and after applying IGBP. ## E.2 Benefits Of Non-Linear Information Removal We present the probe architectures we use in our expirement Section 5.1. These include a linear probe with one layer, and several non-linear probes that use ReLU activations. From left to right: onehidden layer of the same size as the input dimension, two-hidden layers with the same size as the input dimension, one-hidden layer with size of twice the input dimension, three-hidden layers with size of input dimension, one-hidden layer with size of three times the input dimension. Figure 6 shows the results of Section 5.1 experiment on DIAL dataset. We observe the same trend. The maximum accuracy drop is 1.32%. ![14_image_0.png](14_image_0.png) ## Number Of Iterations E.2.1 We conduct the same experiment of Section 5.2 on DIAL dataset and show the result in Figure 7. ![14_image_1.png](14_image_1.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitation section ✓ A2. Did you discuss any potential risks of your work? Ethical consideration section ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 4 ✓ B1. Did you cite the creators of artifacts you used? 4 ✗ B2. 
Did you discuss the license or terms for use and / or distribution of any artifacts? It is publicly available ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 4.4.2 ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? these datasets are publicly available and they are collected from the web. We are investigating gender bias and names might have a crucial part. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 4.4.2 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Table 3 in section 4.4.2 ## C ✓ **Did You Run Computational Experiments?** 4 ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix D.1 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4.2 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix D.3 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
tan-etal-2023-focal
Focal Training and Tagger Decouple for Grammatical Error Correction
https://aclanthology.org/2023.findings-acl.370
In this paper, we investigate how to improve tagging-based Grammatical Error Correction models. We address two issues of current tagging-based approaches, label imbalance issue, and tagging entanglement issue. Then we propose to down-weight the loss of well-classified labels using Focal Loss and decouple the error detection layer from the label tagging layer through an extra self-attention-based matching module. Experiments over three latest Chinese Grammatical Error Correction datasets show that our proposed methods are effective. We further analyze choices of hyper-parameters for Focal Loss and inference tweaking.
# Focal Training And Tagger Decouple For Grammatical Error Correction Minghuan Tan1**, Min Yang**1∗And **Ruifeng Xu**2 1 Shenzhen Key Laboratory for High Performance Data Mining, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. 2 Harbin Institute of Technology (Shenzhen). {mh.tan,min.yang}@siat.ac.cn, [email protected] ## Abstract In this paper, we investigate how to improve tagging-based Grammatical Error Correction models. We address two issues of current tagging-based approaches, label imbalance issue, and tagging entanglement issue. Then we propose to down-weight the loss of correctly classified labels using Focal Loss and decouple the error detection layer from the label tagging layer through an extra self-attention-based matching module. Experiments on three recent Chinese Grammatical Error Correction datasets show that our proposed methods are effective. We further analyze choices of hyper-parameters for Focal Loss and inference tweaking. ## 1 Introduction Grammatical Error Correction (GEC) has been receiving increasing interest from the natural language processing community with the surging popularity of intelligent writing assistants like Grammarly. In the English language, a series of benchmarks (Ng et al., 2013, 2014; Bryant et al., 2019) have been created for the evaluations of different methods. In many other languages, various emerging datasets with language-specific challenges are also attracting plenty of attention (Trinh and Rozovskaya, 2021; Korre and Pavlopoulos, 2022; Náplava et al., 2022; Zhang et al., 2022; Xu et al., 2022; Jiang et al., 2022). Existing methods for GEC can be categorized into sequence-to-sequence approaches, tagging-based approaches, and hybrid approaches. Sequence-to-sequence approaches require a larger amount of training data and usually rely on synthetic data for pretraining (Rothe et al., 2021; Stahlberg and Kumar, 2021; Kaneko et al., 2020). Tagging-based approaches adopt editing operations between the source text and the target text as training objectives (Malmi et al., 2019; Awasthi et al., 2019). These methods are faster in inference and ∗ Corresponding author. can achieve competitive performance as sequenceto-sequence approaches. Hybrid models separate the tagging process and the insertion process into two stages, and can easily change word order with an extra pointer network module (Mallinson et al., 2022). In average cases, hybrid approaches can achieve sub-linear inference time. In this work, we are interested in tagging-based models due to their simplicity and high efficiency and would like to investigate how to further improve their performance. In the literature, there is also work on improving the performance of existing tagging-based models. For example, Tarnavskyi et al. (2022) explored ensembles of recent Transformer encoders in large configurations with various vocabulary sizes. For general-purpose improvements, existing methods range from optimizing training schemes to changing inference techniques. For training, Li et al. (2021) explore how to enhance a model through generating valuable training instances and applying task-specific pretraining strategies. For inference, Sun and Wang (2022) propose Align-and-Predict Decoding (APD) to offer more flexibility for the precision-recall trade-off. From the perspective of system combination, Qorib et al. (2022) propose a simple logistic regression algorithm to combine GEC models effectively. 
Different from the methods discussed above, we focus on improving tagging-based models from the perspective of model designing and learning. Currently, GECToR (Omelianchuk et al., 2020) is one of the representative tagging-based models. GECToR contains a pretrained transformer-based encoder with two linear classification layers as the tagger. One of the linear layers is used for label tagging, and the other for error detection. However, we identify the following issues with current tagging-based models: (1) Label imbalance issue. The training labels contain a large portion of easy-to-learn labels and the distribution is highly COR COR ERR COR COR COR COR ERR COR COR COR ![1_image_0.png](1_image_0.png) CE_识 KEEP KEEP KEEP KEEP KEEP KEEP 0 1 2 3 4 5 6 7 8 9 10 Figure 1: Model structure with error detection decoupled from label tagging. skewed. Existing learning methods use cross entropy with label smoothing as the loss function which is deemed as sub-optimal for this scenario. (2) In current tagging-based models, sequence labeling and error detection are two linear classification layers over the same hidden representation. This entanglement may also hurt the performance of models. To solve the problems discussed above, we propose the following modifications to tagging-based models: (1) We use Focal Loss (Lin et al., 2017) to counteract class imbalance and down-weight the loss assigned to correctly classified labels. (2) We decouple error detection from label tagging using extra attention matching module (Wang and Jiang, 2017). We then verify the effectiveness of the proposed method over three recent Chinese grammatical error correction datasets. Through our experiments, we find that both focal loss and matching mechanisms contribute to performance gain. ## 2 Method Suppose we have an edit operation set O for the manipulation of text at token level. Given a piece of source context denoted as x = (x1, x2*, . . . , x*N ) and its corrected target sequence w = (w1, w2*, . . . , w*M), to construct a mapping from the (x, w) to O, we use a tagging scheme T to first compute alignments between the two sequences. Then we assign each token in the sequence with candidate operations. The tagged label sequence is denoted as yl = {(y1, y2*, . . . , y*N )|yi ∈ O} = T (x, w). Its corresponding error detection target is named as yd . Tagging Scheme. We use a tagging scheme from GECToR (Omelianchuk et al., 2020) with adapted vocabularies and operations by MuCGEC (Zhang et al., 2022). Specifically, the scheme computes an optimal token-level alignment between x and w. Then for each aligned pair of tokens, there will be four choices for tagging labels: (1) KEEP for identical tokens, (2) DELETE if the token comes from x only, (3) REPLACE_w if the token from x is replaced by the one from w, (4) APPEND_w if the token comes from w only. For example, APPEND_, and REPLACE_识 in Figure 1. Notice that if multiple insertions appear within one alignment, only the first one is used for training. The error detection labels are constructed from the tagging labels. It will be a correct label COR if tagged as KEEP else error label ERR. 你 好 很 高 兴 认 知 你 ! [SEP] Input: 你好很高兴认知你! Target: 你好,很高兴认识你! Hello, nice to meet you! Encoder. We use a transformer-based encoder to process the tokenized input text. We enclose the context with special tokens [CLS] and [SEP] and pass them into the BERT model. We use the last layer of BERT as the encoded hidden representation for the context. 
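As a concrete illustration of the tagging scheme described above, the following minimal Python sketch derives edit tags and the corresponding error-detection labels from a precomputed token-level alignment. The function name and the alignment format are our own simplifications for illustration; the actual preprocessing pipeline computes the alignment itself and handles further edge cases (e.g., keeping only the first of several insertions within one alignment).

```python
# Illustrative sketch of the GECToR-style tagging scheme. It assumes a
# precomputed token-level alignment given as (source_token, target_token)
# pairs, with None marking a missing side; the real pipeline computes the
# alignment itself and handles more cases (e.g., insertions before the
# first source token, multiple insertions per alignment).

def tags_from_alignment(alignment):
    """Turn aligned (src, tgt) token pairs into per-source-token edit tags."""
    tags = []
    for src, tgt in alignment:
        if src is not None and tgt is not None:
            tags.append("KEEP" if src == tgt else f"REPLACE_{tgt}")
        elif src is not None:            # token appears only in the source
            tags.append("DELETE")
        elif tags:                       # token appears only in the target:
            tags[-1] += f"|APPEND_{tgt}" # attach insertion to the previous
                                         # source token (simplified)
    # Error-detection labels: COR only if the token is kept unchanged.
    detection = ["COR" if t == "KEEP" else "ERR" for t in tags]
    return tags, detection

# Example from Figure 1: "你好很高兴认知你!" -> "你好,很高兴认识你!"
alignment = [("你", "你"), ("好", "好"), (None, ","), ("很", "很"),
             ("高", "高"), ("兴", "兴"), ("认", "认"), ("知", "识"),
             ("你", "你"), ("!", "!")]
print(tags_from_alignment(alignment))
```

On the Figure 1 example this yields REPLACE_识 for 知 and an append tag attached to 好, with ERR detection labels at exactly those two positions, matching the detection sequence shown in the figure.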
Considering our tagging system is consistent with GECToR, we also use a mismatched encoder to get hidden representations of the original word, denoted as H = (h0*, . . . ,* hN ). Tagger. Our tagger contains two separate linear classification heads. The label tagging head is conducted over the encoder's hidden representation H directly: $$\mathbf{p}_{l}=L i n e a r(D r o p o u t(\mathbf{H}))$$ $\left(1\right)$. The error detection head is decoupled from the label tagging head using an input constructed from H with a matching mechanism over its self-attended representation: $$\begin{array}{l}{{\alpha=s o f t m a x(\mathbf{H H}^{\top})}}\\ {{\mathbf{A}=\alpha^{\top}\mathbf{H}}}\\ {{\mathbf{M}=L i n e a r([\mathbf{H},\mathbf{A}])}}\\ {{\mathbf{p}_{d}=L i n e a r(L a y e r N o r m(\mathbf{M}))}}\end{array}$$ Training Objective. In this paper, we choose Focal Loss to down-weight correctly classified labels: $$F L(p,y)=-\sum_{t}(1-p_{t})^{\gamma}\log p_{t}$$ $$(6)$$ γlog pt (6) where γ is a hyper-parameter to control the loss assigned to these labels. Both label tagging and error detection contribute to the final loss. The final loss is a linear combination of label tagging loss and error detection loss: $${\mathcal{L}}=F L(\mathbf{p}_{l},\mathbf{y}_{l})+\lambda F L(\mathbf{p}_{d},\mathbf{y}_{d})$$ hyper-parameter, $\lambda$. where λ is a positive hyper-parameter. ## 3 Experiments We evaluate our method on three recent Chinese Grammar Error Correction datasets. ## 3.1 Datasets We use three recent grammar error correction datasets from the Chinese language, MuCGEC (Zhang et al., 2022), FCGEC (Xu et al., 2022) and MCSCSet (Jiang et al., 2022). Statistics of the three datasets are listed in Table 1. MuCGEC is a combination of multiple sources covering diverse types of Chinese grammatical errors. **FCGEC** is a human-annotated corpus with multiple references collected mainly from multiple-choice questions in public school Chinese examinations. **MCSCSet** is a high-quality Chinese Spelling Correction dataset from the medical domain collected from extensive real-world medical queries from Tencent Yidian. The corresponding misspelled sentences are manually annotated by medical specialists. | Train | Dev | Test | | |---------|-----------|--------|--------| | MuCGEC | 1,187,605 | 1,125 | 5,938 | | FCGEC | 36,340 | 2,000 | 3,000 | | MCSCSet | 157,194 | 19,652 | 19,650 | The evaluation metric reported in this paper is span level correction F0.5 scores evaluated using ChERRANT, a Chinese version of ERRANT1. Specifically, ChERRANT computes an optimal sequence of char-level edits with the minimal edit distance given an input sentence and a correction. Then consecutive char-level edits are further merged into span-level, resulting in the following error types: (1) Missing, (2) Redundant, (3) Substitution, (4) Word-Order. We analyze validation sets of all used datasets using ChERRANT and show the error type distribution in Figure 2. The distribution indicates the difficulty of each dataset which will be discussed in Section 3.3. Missing Redundant Substitution **Word-order** $$(7)$$ 241 231 1 ![2_image_0.png](2_image_0.png) 1971 1842 1452 370 2 1059 433 2 300 MuCGEC FCGEC MCSCSet ## 3.2 Settings We use a GECToR model released by MuCGEC2 as our checkpoint. The model uses StructBERTLarge (Wang et al., 2020) as the transformer encoder and it is the best tagging-based model over MuCGEC. It has 7375 labels for token-level operation and 2 labels for error detection. 
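To make the tagger and the training objective above concrete, here is a minimal PyTorch-style sketch of the two classification heads and the focal-loss objective of Equations 1-7. The module and function names are ours, padding masks and the mismatched-encoder wiring are omitted, and the loss is averaged over tokens for simplicity; it is an illustration of the equations rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledTagger(nn.Module):
    """Label-tagging head plus an error-detection head decoupled from it
    through a self-attention-based matching module (Eqs. 1-5)."""

    def __init__(self, hidden, n_labels, n_detect=2, dropout=0.1):
        super().__init__()
        self.label_head = nn.Sequential(nn.Dropout(dropout),
                                        nn.Linear(hidden, n_labels))
        self.match = nn.Linear(2 * hidden, hidden)   # M = Linear([H, A])
        self.norm = nn.LayerNorm(hidden)
        self.detect_head = nn.Linear(hidden, n_detect)

    def forward(self, H):                            # H: (batch, seq, hidden)
        p_l = self.label_head(H)                     # label-tagging logits
        attn = torch.softmax(H @ H.transpose(1, 2), dim=-1)   # alpha
        A = attn.transpose(1, 2) @ H                 # self-attended H
        M = self.match(torch.cat([H, A], dim=-1))
        p_d = self.detect_head(self.norm(M))         # detection logits
        return p_l, p_d

def focal_loss(logits, targets, gamma=2.0):
    """FL(p, y) = -(1 - p_t)^gamma * log p_t, averaged over tokens.
    `targets` is a LongTensor of gold label indices."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    pt = log_pt.exp()
    return (-(1 - pt) ** gamma * log_pt).mean()

def total_loss(p_l, y_l, p_d, y_d, gamma=2.0, lam=1.0):
    """Combined objective of Eq. 7; lam corresponds to lambda in the paper."""
    return focal_loss(p_l, y_l, gamma) + lam * focal_loss(p_d, y_d, gamma)
```

The point of the matching module is that the detection head no longer reads the same hidden vector as the label head, which is the tagger decoupling (TD) evaluated below.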
We evaluate our model on the official benchmark websites for MuCGEC3and FCGEC4. For all our experiments, we use a learning rate of 1e−5 with batch size 128 and run three epochs for training. The hyper-parameter λ is chosen as 1. The default γ for Focal Loss is chosen as 2. We use the maximum F0.5 score over the validation set to choose the best model for evaluation. To be consistent with MuCGEC, we use iterative refinement for five iterations to get the final corrected results. No inference tweaking tricks are used for our main results in Section 3.3. However, we will conduct further analysis on inference tweaking in Section 3.5. The training costs are computed on a NVIDIA GeForce RTX 3090 GPU. For MuCGEC, the cost is 14 GPU hours . For FCGEC, the cost is 3.5 GPU hours. For MCSCSet, the cost is 3 GPU hours. Our code has been released on Github5. 2https://github.com/HillZhang1999/ MuCGEC 3https://tianchi.aliyun.com/dataset/ 131328 4https://codalab.lisn.upsaclay.fr/ competitions/8020 5https://github.com/VisualJoyce/TERepo # $\pi$ / $\sigma$ = CH(L) - CH($\pi$) CH($\pi$) 1https://github.com/chrisjbryant/ errant Table 2: Comparison of our proposed methods and the GECToR model over test split. ## 3.3 Main Results We use the **GECToR** model as the baseline for comparison. For our proposed methods, we show incremental results of adding Focal Loss (FL) and Tagger Decouple (TD), denoted as **GECToR + FL** and **GECToR + FL + TD**. We report evaluation scores over the test split of each dataset in Table 2. The baseline scores for MuCGEC and FCGEC are quoted directly from their original papers. The baseline score for MCSCSet is offered by us. We then list the scores of our proposed methods. The table shows that using Focal Loss for training can improve performance for all datasets. If we further decouple error detection from label tagging, extra gains can be achieved consistently. It is worth noting that error type distribution reflects the complexity of a specific dataset. For example, MCSCSet is easier than the other two even if it comes from a different domain since the error types are mostly Substitution. ## 3.4 Analysis Over Choices Of Γ It remains to be answered whether we should choose a larger γ to make the model more aggressive about harder labels. We conduct experiments over MuCGEC using different γ. To evaluate the generalizability of the trained models, we further adopt a zero-shot setting using the test split of FCGEC. We don't use MCSCSet for zero-shot evaluation due to its low domain similarity with MuCGEC and low error type diversity. In Table 3, we list results for GECToR and GECToR + FL using different γs. On MuCGEC, using larger gamma helps the model to do better in evaluation. However, if we take the zero-shot setting into consideration, the performance over FCGEC is not consistent with the increase of γ. This indicates that larger γ tends to make the model overfit the training data. Table 3: Comparison of our proposed methods with different γ values over MuCGEC and FCGEC. ## 3.5 Analysis Over Inference Tweaking | Model | MuCGEC | FCGEC | |------------------|----------|---------| | GECToR | 39.59 | 18.06 | | GECToR + FL(γ=1) | 40.53 | 22.83 | | GECToR + FL(γ=2) | 41.22 | 20.67 | | GECToR + FL(γ=5) | 41.60 | 18.65 | Inference tweaking has been used as a postprocessing technique to further improve the performance of tagging-based models. The method searches two hyper-parameters (*δ, β*) over the validation set. δ is a threshold for sentence-level minimum error probability. 
β is a positive confidence bias for keeping the source token. Considering inference tweaking promotes F0.5 scores through trading off precision and recall, we conduct experiments to compare how our proposed methods perform against it. We use validation and test split of MuCGEC for the illustration. In Table 4, we list the best scores achieved after applying inference tweaking and place the difference value in the bracket. All scores over the validation split increase by roughly 0.5 points. However, on the test split, the tweaked results are not rising consistently. Although inference tweaking is effective, it's not guaranteed the (*δ, β*) searched over the validation set works for each specific model. Table 4: Performance differences over validation split and test split after applying inference tweaking. | Model | (δ, β) | Dev | Test | |-----------|-----------|---------------|---------------| | GECToR | (0.40, 0) | 35.63 (+0.45) | 39.87 (+0.28) | | + FL | (0.35, 0) | 38.80 (+0.51) | 41.21 (−0.01) | | + FL + TD | (0.35, 0) | 39.18 (+0.56) | 41.79 (+0.38) | ## 4 Limitations In this work, we have been focusing on improving the performance of tagging-based Grammatical Error Correction. Our work has the following limitations: (1) We work on three recent Chinese Grammatical Error Correction datasets. But there are many emerging datasets from various languages. We will add support for these languages on our GitHub repository and make all resources publicly accessible. (2) We point out a limitation of inference tweaking, but it remains to be explored how to | Model | MuCGEC | FCGEC | MCSCSet | |-----------|----------|---------|-----------| | GECToR | 39.59 | 27.45 | 82.13 | | + FL | 41.22 | 29.03 | 82.47 | | + FL + TD | 41.41 | 30.74 | 83.09 | explain the phenomenon and derive better tweaking methods. ## 5 Conclusion In conclusion, focal training and tagger decoupling are effective in improving current tagging-based Grammatical Error Correction models. However, it is also important to choose a suitable γ for Focal Loss considering the generalizability of the model. For the widely adopted post-processing technique inference tweaking, it depends on the model whether there will be significant performance gain. ## Acknowledgements This work was partially supported by the National Key Research and Development Program of China (2022YFF0902100), Shenzhen Science and Technology Innovation Program (KQTD20190929172835662), Shenzhen Basic Research Foundation (JCYJ20210324115614039 and JCYJ20200109113441941), and NSFC (no. 92270122). ## References Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. 2019. Parallel iterative edit models for local sequence transduction. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4260–4270, Hong Kong, China. Association for Computational Linguistics. Christopher Bryant, Mariano Felice, Øistein E. Andersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52–75, Florence, Italy. Association for Computational Linguistics. Wangjie Jiang, Zhihao Ye, Zijing Ou, Ruihui Zhao, Jianguang Zheng, Yi Liu, Bang Liu, Siheng Li, Yujiu Yang, and Yefeng Zheng. 2022. Mcscset: A specialist-annotated dataset for medical-domain chinese spelling correction. 
In Proceedings of the 31st ACM International Conference on Information; Knowledge Management, CIKM '22, page 4084–4088, New York, NY, USA. Association for Computing Machinery. Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, and Kentaro Inui. 2020. Encoder-decoder models can benefit from pre-trained masked language models in grammatical error correction. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 4248–4254, Online. Association for Computational Linguistics. Katerina Korre and John Pavlopoulos. 2022. Enriching grammatical error correction resources for Modern Greek. In *Proceedings of the Thirteenth Language Resources and Evaluation Conference*, pages 4984–4991, Marseille, France. European Language Resources Association. Chong Li, Cenyuan Zhang, Xiaoqing Zheng, and Xuanjing Huang. 2021. Exploration and exploitation: Two ways to improve Chinese spelling correction models. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th* International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 441–446, Online. Association for Computational Linguistics. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. 2017. Focal loss for dense object detection. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 2999–3007. Jonathan Mallinson, Jakub Adamek, Eric Malmi, and Aliaksei Severyn. 2022. Edit5: Semiautoregressive text-editing with t5 warm-start. *ArXiv*, abs/2205.12209. Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. 2019. Encode, tag, realize: High-precision text editing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5054–5065, Hong Kong, China. Association for Computational Linguistics. Jakub Náplava, Milan Straka, Jana Straková, and Alexandr Rosen. 2022. Czech grammar error correction with a large and diverse corpus. *Transactions of the Association for Computational Linguistics*, 10:452–467. Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In *Proceedings of* the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1–14, Baltimore, Maryland. Association for Computational Linguistics. Hwee Tou Ng, Siew Mei Wu, Yuanbin Wu, Christian Hadiwinoto, and Joel Tetreault. 2013. The CoNLL2013 shared task on grammatical error correction. In *Proceedings of the Seventeenth Conference on* Computational Natural Language Learning: Shared Task, pages 1–12, Sofia, Bulgaria. Association for Computational Linguistics. Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. 2020. GECToR - grammatical error correction: Tag, not rewrite. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 163–170, Seattle, WA, USA → Online. Association for Computational Linguistics. Muhammad Qorib, Seung-Hoon Na, and Hwee Tou Ng. 2022. Frustratingly easy system combination for grammatical error correction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1964–1974, Seattle, United States. Association for Computational Linguistics. 
Sascha Rothe, Jonathan Mallinson, Eric Malmi, Sebastian Krause, and Aliaksei Severyn. 2021. A simple recipe for multilingual grammatical error correction. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th* International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 702–707, Online. Association for Computational Linguistics. Felix Stahlberg and Shankar Kumar. 2021. Synthetic data generation for grammatical error correction with tagged corruption models. In *Proceedings of the* 16th Workshop on Innovative Use of NLP for Building Educational Applications, pages 37–47, Online. Association for Computational Linguistics. Xin Sun and Houfeng Wang. 2022. Adjusting the precision-recall trade-off with align-and-predict decoding for grammatical error correction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 686–693, Dublin, Ireland. Association for Computational Linguistics. Maksym Tarnavskyi, Artem Chernodub, and Kostiantyn Omelianchuk. 2022. Ensembling and knowledge distilling of large sequence taggers for grammatical error correction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3842–3852, Dublin, Ireland. Association for Computational Linguistics. Viet Anh Trinh and Alla Rozovskaya. 2021. New dataset and strong baselines for the grammatical error correction of Russian. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 4103–4111, Online. Association for Computational Linguistics. Shuohang Wang and Jing Jiang. 2017. Machine comprehension using match-lstm and answer pointer. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Wei Wang, Bin Bi, Ming Yan, Chen Wu, Jiangnan Xia, Zuyi Bao, Liwei Peng, and Luo Si. 2020. Structbert: Incorporating language structures into pre-training for deep language understanding. In *International* Conference on Learning Representations. Lvxiaowei Xu, Jianwang Wu, Jiawei Peng, Jiayu Fu, and Ming Cai. 2022. Fcgec: Fine-grained corpus for chinese grammatical error correction. In Findings of the Association for Computational Linguistics: EMNLP 2022. Yue Zhang, Zhenghua Li, Zuyi Bao, Jiacheng Li, Bo Zhang, Chen Li, Fei Huang, and Min Zhang. 2022. MuCGEC: a multi-reference multi-source evaluation dataset for Chinese grammatical error correction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3118–3130, Seattle, United States. Association for Computational Linguistics. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 4 ✗ A2. Did you discuss any potential risks of your work? The benchmarks are open and the results are reproducible. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Not applicable. Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. 
Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✓ **Did You Run Computational Experiments?** 3 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 3.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 3.2 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 3.3-3.4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 3.2 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
yang-etal-2023-leveraging
LET: Leveraging Error Type Information for Grammatical Error Correction
https://aclanthology.org/2023.findings-acl.371
Grammatical error correction (GEC) aims to correct errors in given sentences and is significant to many downstream natural language understanding tasks. Recent work introduces the idea of grammatical error detection (GED) to improve GEC performance. However, these explicit multi-stage approaches propagate and amplify misclassifications made by the GED module. To introduce more convincing error type information, we propose an end-to-end framework in this paper, which Leverages Error Type (LET) information in the generation process. First, the input text is fed into a classification module to obtain the error type corresponding to each token. Then, we introduce the category information into the decoder's input and cross-attention module in two ways, respectively. Experiments on various datasets show that our proposed method outperforms existing methods by a clear margin.
# Let: Leveraging Error Type Information For Grammatical Error Correction Lingyu Yang∗ , Hongjia Li∗ , Lei Li, Chengyin Xu, Shutao Xia, Chun Yuan† Tsinghua University, Beijing, China {yly20, lhj20, lei-li18, xucy20}@mails.tsinghua.edu.cn {xiast, yuanc}@sz.tsinghua.edu.cn ## Abstract Grammatical error correction (GEC) aims to correct errors in given sentences and is significant to many downstream natural language understanding tasks. Recent work introduces the idea of grammatical error detection (GED) to improve the GEC task performance. In contrast, these explicit multi-stage works propagate and amplify the problem of misclassification of the GED module. To introduce more convincing error type information, we propose an end-toend framework in this paper, which Leverages Error Type (LET) information in the generation process. First, the input text is fed into a classification module to obtain the error type corresponding to each token. Then, we introduce the category information into the decoder's input and cross-attention module in two ways, respectively. Experiments on various datasets show that our proposed method outperforms existing methods by a clear margin. ## 1 Introduction The grammatical error correction (GEC) task aims to correct grammatical errors in natural language texts, including spelling, punctuation, grammar, word selection, and more. As shown in Figure 1, a GEC model receives text containing errors and produces its corrected version. Current GEC algorithms are mainly divided into two categories: detection-based models and end-toend generative models. Detection-based models treat GEC as a token classification problem (Omelianchuk et al., 2020). By classifying each token in the sentence, we can make a detailed transformation according to the classification result to obtain the modified sentence. This method has strong interpretability and a transparent error correction process. However, to achieve precise error correction, it is necessary first to identify and classify all possible grammatical *These authors contributed equally to this work. †Corresponding authors. ![0_image_0.png](0_image_0.png) errors. The training data is then manually annotated based on the error categories, which is laborintensive. To avoid manually designing wrong categories and labeling data, many works (Yuan and Felice, 2013a; Yuan and Briscoe, 2016) have built end-toend generative GEC systems from the perspective of Machine Translation (MT), which is also the current mainstream method. In this approach, erroneous sentences correspond to the source language, and error-free sentences correspond to the target language. Most recent generative models (Raheja and Alikaniotis, 2020; Yuan et al., 2021) are based on the Transformer encoder-decoder architecture (Vaswani et al., 2017). They also achieve competitive results compared to detection-based models. The most significant advantage of the end-toend generative model is that we do not need to design complex error categories manually or perform labor-intensive labeling work on the data. We can only use parallel corpora to train the model. Recent works (Wang et al., 2020; Chen et al., 2020) have shown that if the error type results obtained in the grammatical error detect (GED) task are introduced into the generative model in some form, the error correction ability of the model will be further improved. This is because the entire training and inference process can be viewed as a black-box operation in an end-to-end generative model. 
Furthermore, the model can generate more accurate results if additional information guides this process (e.g., The classification result of some location is "delete"). Yuan et al. (2021) extends the Transformer encoder-decoder model based on introducing error type information. They classify input tokens into different error types, transform them into representations, and feed them into the decoder's crossattention module. However, this method suffers from two fundamental limitations: 1) **Error propagation**. Each token is mapped into a one-hot classification vector in the first process. If there is a misclassification in the results, it will be passed on and negatively influence the following parts. 2) **Mismatched cross attention**. In the original transformer decoder block, the input Q and K of the cross-attention module are from the semantic space of tokens. However, these inputs are from the semantic space of the error type information and the original tokens, respectively. This mismatch can lead to a reduction in the representation of the model. Therefore, to solve the above problems, we propose a simple yet novel generative model to improve the performance of GEC, termed LET (Leveraging Error Type information). First, we utilize the intermediate representation of the error type classification module as the error type vector. It would not discard the probabilities of other classes, even if their values are small. This operation ensures more convincing guidance of the type vectors to the generated modules. Second, to discard the mismatch in the crossattention module, we transfer the input from the previous sub-layer in the decoder to the classification vector. Thus, both parts of the input are in the same semantic space. Therefore, the crossattention for them is more reasonable. In summary, our contributions can be summarized in the following points: 1) We propose a novel sequence-to-sequence model which realizes the alignment of error type for GEC. This model improves the effect of this task with much more fine-grained error detection. 2) We demonstrate how GED benefits the correction task by introducing the error type infor- mation into the input module and the crossattention module of the decoder in two ways. 3) Experimental results on multiple datasets show that our proposed method achieves stateof-the-art results. ## 2 Related Work Much progress in the GEC task can be attributed to transforming the problem into a machine translation task (Brockett et al., 2006) from an ungrammatical source sentence to a grammatical target sentence. Early GEC-MT methods leveraged phrase-based statistical machine translation (PBSMT) (Yuan and Felice, 2013b). With the rapid development of related work on machine translation, statistical machine translation (SMT) and neural machine translation (NMT) have been successfully applied to various task-specific adaptations of GEC (Felice et al., 2014;Yuan and Briscoe, 2016;Junczys-Dowmunt et al., 2018) With the introduction of transformer architectures, this approach rapidly evolved to powerful Transformerbased seq2seq models (Vaswani et al., 2017). Transformer-based models autoregressively capture the complete dependency among output tokens (Yuan et al., 2019). Grundkiewicz et al. (2019) leveraged a Transformer model pre-trained on synthetic GEC data. Several improvement strategies of BERT were also adopted in the GEC model (Kaneko et al., 2020). With the development of large-scale pre-trained models recently, Rothe et al. 
(2021) built their system on top of T5(Xue et al., 2021) and reached new state-of-the-art results. Grammatical Error Detection is usually formulated as a sequence tagging task, where each erroneous token is assigned with an error type, e.g., selection errors and redundant words. Early GED methods mainly used rules to identify specific sentence error types, such as preposition errors (Tetreault and Chodorow, 2008). With the development of neural networks, Rei and Yannakoudakis (2016) presented the first work using a neural approach and framed GED as a binary sequence labeling problem, classifying each token in a sentence as correct or incorrect. Sequence labeling methods are widely used for GED, such as feature-based statistical models (Chang et al., 2012) and neural models (Fu et al., 2018). Due to the effectiveness of BERT (Devlin et al., 2019) in many other NLP applications, recent studies adopt BERT as the basic ![2_image_0.png](2_image_0.png) ## Architecture Of Ged Models(Li And Shi, 2021). Recent work has attempted to explore a different approach to using GED in GEC, which aims to use the detection results of GED to guide GEC generation. Yuan et al. (2019) introduced token-level and sentence-level GED as auxiliary tasks when training for GEC. Zhao et al. (2019) employed multitask learning to utilize the detection results of GED to guide GEC generation. Similarly, Chen et al. (2020) fine-tuned RoBERTa (Zhuang et al., 2021) for GED and improved the efficiency for GEC by dividing the task into two sub-tasks: Erroneous Span Detection and Erroneous Span Correction. (Yuan et al., 2021) treated GED as a sequence labeling task and GEC as a sequence-to-sequence task and additionally investigated ways to use multiclass GED predictions to inform GEC. ## 3 Method In this section, we first describe the problem definition and the basic model, our baseline. Then we describe the LET (Leveraging Error Type information) model, which explicitly applies the classification information (error types) of tokens to guide the generative model to generate better-corrected sentences. The whole architecture of LET is shown in Figure 2. ## 3.1 Problem Definition Given a sentence that may contain erroneous tokens U = {ui} N , the target of GEC is to correct the input sentence and output the corrected sentence C = {ci}M. N and M are the input and output sequence length, respectively. ## 3.2 Backbone We use BART (Lewis et al., 2020) as the backbone model of our end-to-end GEC system. BART is a denoising autoencoder that maps the noisy text to the correct form. It is implemented as a sequenceto-sequence model with a bidirectional encoder over corrupted text and a left-to-right autoregressive decoder (Vaswani et al., 2017). The word embedding layer is represented as Emb. The encoder and decoder are represented as EC and DC, respectively. The process of encoding and decoding can be formulated as: $$E_{U}=E C(U)\qquad\qquad\qquad(1)$$ $$C=D C(E_{U})\qquad\qquad(2)$$ ancolor $EC$. where EU is the output of the encoder EC. ## 3.3 Grammatical Error Detection We aim to obtain the error type classification of each token in the sentence by the sequence labeling task. In practice, we construct this classifier with three parts. First, a two-layer transformer encoder block EC′is designed to encode the input sentence U and obtain the long error type representation R long U, of which the embedding dimension is the same as the word embedding, such as 768 or 512. 
This procedure can be formulated as: $$R_{U}^{l o n g}=E C^{'}(U)$$ Then, a two-layer fully-connected network F F aims to transform the long error type representation to short error type representation R*short* U: $$R_{U}^{s h o r t}=F F(R_{U}^{l o n g})$$ where the dimension of the short one is the number of error types, such as 4, 25 or 55. Finally, the error type can be calculated by a Softmax layer SM: $$Y=SM(R_{U}^{short})\tag{5}$$ where $Y=\{y_{i}\}^{N}$ is the label sequence of $N$ tokens. ## 3.4 Gid: Guided Input Of The Decoder Naturally, after the generation module autoregressively decodes the tokens at some time step, if there is the error type information of the next time step of the original sentence, the generation module may make the correct decision more easily. For example, considering the error type of the next token in the original sentence is "Delete" (This token is redundant and needs to be deleted), the generation module will delete the next token by greater probability after receiving the information indicating "Delete." Metaphorically speaking, we can compare the decoder to a little boy and the decoding process to the boy solving a complex math problem. If a reference material is available to guide the problemsolving process, the little boy will undoubtedly find it easier to arrive at the correct answer. This reference material is what we refer to as "additional guiding information" in this context. Formally, at time step t, we have obtained the output of the last time step, which is represented as pt−1. Therefore, we take two elements as the input of this GID module: 1) Embt−1: the word embedding of pt−1; 2) R long t: the corresponding long error type representation of the token ut. Therefore, we obtain Ti, the output of GID and also the input of the decoder DC, by a direct point-wise add operation: $$T_{i}=E m b_{t-1}+R_{t}^{l o n g}$$ t(6) ## 3.5 Gca: Guided Cross Attention Module In addition to the above approach, we also want to introduce error type information in the crossattention module. $$({\mathfrak{I}})$$ Cross Attention 1 In the original transformer, the cross attention module in the decoder layer performs attention weighting calculations on the token embedding output by the encoder and the output of the previous self-attention module. The calculation formula is expressed as: $$E^{C A1}=s o f t m a x(\frac{Q K^{T}}{\sqrt{d_{K}}})V\qquad\qquad(7)$$ $$(4)$$ $$\mathbf{\Pi}(5)$$ where ECA1is the output of the Cross Attention 1 module. Here, Q represents the representation vector output by the input tokens of the current decoder after passing through the previous selfattention module. K represents the representation vector output by all the input tokens after passing through the stacked encoder. V is a copy of K. In practice, *Q/K/V* are firstly mapped to different representation spaces by matrices Wq/Wk/Wv, respectively. In Equation 7, by performing the scaled dotproduct operation on Q and K, the weight parameter for weighted summation of V is obtained. Previous work (Lee et al., 2018; Li et al., 2020) has shown that such an operation is to align the tokens input by the encoder and decoder at the semantic level, so that the decoder is able to generate accurate and reasonable results. Cross Attention 2 We describe alignment at the semantic level in the last subsection. However, more than this alignment is needed. What about alignment at the error type level? 
That is, we use the existing detection module to classify the original Q and K to error types and then use the obtained results to replace Q and K in Equation 7, which realizes the alignment at the error type level. Specifically, as shown in Figure 2, we utilize classification head F F to classify Q and K, and obtain their short error type representation vectors Q′and K′respectively: $$Q^{'}=F F(Q)$$ $K^{'}=F F(K)$ $$(8)$$ $$(\mathbb{9})$$ where the dimension of Q′and K′ depends on the classification category of the detection task. These $$(6)$$ | Model | with GED | BEA-test | CoNLL-2014 | | | | | |-----------------------------------------------------------------------------------------------------|------------|------------|--------------|--------|------|------|------| | Precision | Recall | F0.5 | Precision | Recall | F0.5 | | | | Constrained Data | | | | | | | | | Lewis et al. (2020) | ✘ | 48.4 | 41.7 | 47.2 | 50.6 | 26.3 | 43.1 | | Raheja and Alikaniotis (2020) | ✘ | 53.8 | 36.5 | 49.1 | 64.7 | 22.6 | 47.1 | | Kaneko et al. (2020) | ✔ | 58.1 | 44.8 | 54.8 | 63.6 | 33.0 | 53.6 | | Yuan et al. (2021) | ✔ | 60.8 | 50.8 | 58.5 | 60.4 | 39.0 | 54.4 | | LET (ours) | ✔ | 61.8 | 52.1 | 59.5 | 61.2 | 40.9 | 55.6 | | Unconstrained Data | | | | | | | | | Ji et al. (2017) | ✘ | - | - | - | - | - | 45.2 | | Ge et al. (2018) | ✘ | - | - | - | 61.2 | 37.9 | 54.5 | | Kiyono et al. (2019) | ✘ | 65.5 | 59.4 | 64.2 | 67.9 | 44.1 | 61.3 | | Lichtarge et al. (2020) | ✘ | 67.6 | 62.5 | 66.5 | 69.4 | 43.9 | 62.1 | | Wan et al. (2020) | ✘ | 66.9 | 60.6 | 65.5 | 69.5 | 47.3 | 63.5 | | Stahlberg and Kumar (2021) | ✘ | 72.1 | 64.4 | 70.4 | 72.8 | 49.5 | 66.6 | | Yuan and Bryant (2021) | ✘ | - | - | - | 74.3 | 39.0 | 62.9 | | Zhao et al. (2019) | ✔ | - | - | - | 67.7 | 40.6 | 59.8 | | Yuan et al. (2019) | ✔ | 70.5 | 55.1 | 66.8 | - | - | - | | Kaneko et al. (2020) | ✔ | 67.1 | 60.1 | 65.6 | 69.2 | 45.6 | 62.6 | | Chen et al. (2020) | ✔ | 70.4 | 55.9 | 66.9 | 72.6 | 27.2 | 61.0 | | Wang et al. (2020) | ✔ | - | - | - | 65.0 | 33.5 | 54.6 | | Yuan et al. (2021) | ✔ | 73.3 | 61.5 | 70.6 | 71.3 | 44.3 | 63.5 | | LET (ours) | ✔ | 74.6 | 62.9 | 71.9 | 71.7 | 45.6 | 64.3 | | Omelianchuk et al. (2020) | ✘ | 79.2 | 53.9 | 72.4 | 77.5 | 40.1 | 65.3 | | Table 1: Evaluation results using ERRANT on BEA-test and M2 (Dahlmeier and Ng, 2012) on CoNLL-2014. | | | | | | | | Table 1: Evaluation results using ERRANT on BEA-test and M2(Dahlmeier and Ng, 2012) on CoNLL-2014. Methods with Grammatical Error Detection (GED) module are marked with a check mark. On the contrary, pure sequence-to-sequence models and sequence labelling systems (only Omelianchuk et al. (2020)) are labeled with a cross mark. Only public BEA-2019 data is used in the training process of all constrained systems, while unconstrained systems are variously trained on private and/or artificial data. representation vectors can be viewed as representations of error types. Therefore, applying cross attention to them realizes the alignment of the tokens input by the encoder and the decoder at the error-type level. The modified self-attention equation can be formulated as follows: $$E^{C A2}=s o f t m a x(\frac{Q^{'}K^{'T}}{\sqrt{d_{K^{'}}}})V\qquad(10)$$ where dK ′ is the dimension of K ′, ECA2is the output of this Cross Attention 2 module. Combination of CA1 & CA2 Then, we combine the output of two cross-attention modules at pointwise. Nevertheless, before this, we need to define the weight of each one. 
Therefore, we calculate the dynamic Weighting factor λ: $\lambda=\sigma(W[E^{CA1};E^{CA2}]+b)$ (11) where σ is the logistic Sigmoid function and W and b are learnable parameters. Then we obtain the combined output EGCA as follows: $$E^{G C A}=\lambda E^{C A1}+(1-\lambda)E^{C A2}\ \ \ \ \ (12)$$ After this sub-module, EGCA is used as the input to the next sub-layer. Ultimately, the forward computation and back-propagation of the entire model are trained like the regular encoder-decoder model. ## 3.6 Loss Function The total loss contains two parts: 1) Lerr: The cross-entropy of the predicted error types and the ground truth of token-level labels. $\mathfrak{so}$ 2) Lsen: The cross-entropy of the output corrected sentences and corresponding target sentences. $$\mathrm{{\bf{n}e d\;a s}};$$ Total loss is defined as: $$L=\alpha L_{e r r}+(1-\alpha)L_{s e n}$$ where $\alpha\in[0,1]$ is a hyper-parameter. ## 4 Experiments To test the performance of the LET system, we conduct evaluation experiments on two mainstream GEC benchmarks: BEA-test (Bryant et al., 2019) and CoNLL-2014 (Ng et al., 2014) and compare with previous state-of-the-art approaches. ## 4.1 Datasets Following previous work, we use five datasets: - Lang-8 Corpus (Mizumoto et al., 2011) - Cambridge Learner Corpus (CLC) (Nicholls, 2003) - First Certificate in English (FCE) corpus (Yannakoudakis et al., 2011) - National University of Singapore Corpus of Learner English (NUCLE) (Dahlmeier et al., 2013) - Cambridge English Write & Improve + LOCNESS (W&I) corpus (Bryant et al., 2019) Following the training process of previous work (Kiyono et al., 2019; Lichtarge et al., 2020; Yuan et al., 2021), we pre-train two LET systems on public Lang-8 Corpus (under the constrained setting) and the CLC dataset (under the unconstrained setting), then fine-tune them on the same three datasets, including W&I, FCE, and NUCLE. Finally, we train all modules in LET simultaneously, jointly optimizing Lerr and Lsen based on Equation 13. ## 4.2 Error Type Annotations We obtain error type annotations in these corpora by the ERRANT (Bryant, 2019) annotation toolkit, which can pre-process sentences and standardize tokens into error type annotations. The kind of error types can be binary classes, 4-classes consisting of basic operations, 25-classes consisting of word types and 55-classes combining the above tags. Table 2 shows the error type annotations in different numbers of classes. $$(13)$$ N.C Error Type Annotations 2 right and wrong 4 insert, delete, replace and keep 25 insert(noun), insert(verb tense), insert(prep) replace(noun), replace(verb tense), keep, delete, etc. 55 insert(M:DET), insert(U:PREP), insert(R:VERB:TENSE), replace(R:VERB:TENSE), replace(M:DET) , keep, delete, etc. Table 2: Error Type Annotations. N.C: Number of Classes ## 4.3 Experiment Setup The LET model, which is implemented with Transformers* (Wolf et al., 2020), consists of 6 encoder layers, 6 decoder layers, and a shared classification head. The dimension of embedding is set to 768, and the batch size is set to 32. The maximum sequence length is 1024, and we pad sequences with the longest length in the batch. We train the model with Adam optimizer, and the learning rate is set to 2e-5. The weight factor α in Equation 13 is set to 0.2. The evaluation metric of text generation contains precision, recall, and F0.5 score. We train the model on 4 Nvidia V100 GPUs. It takes about 4 hours to train the model in one epoch. 
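Before turning to the results, the following minimal single-head PyTorch sketch summarizes the guided cross-attention module of Section 3.5 and the joint objective of Equation 13. The class and argument names are ours, multi-head splitting and attention masks are omitted, and `ged_head` stands for the shared classification head FF; it is an illustration of Equations 7-12 rather than the authors' implementation.

```python
import math
import torch
import torch.nn as nn

class GuidedCrossAttention(nn.Module):
    """Single-head sketch of GCA (Eqs. 7-12): CA1 aligns tokens in the usual
    semantic space, CA2 aligns the error-type representations produced by the
    shared GED classification head FF, and a dynamic gate mixes the two."""

    def __init__(self, hidden, ged_head):
        super().__init__()
        self.q_proj = nn.Linear(hidden, hidden)
        self.k_proj = nn.Linear(hidden, hidden)
        self.v_proj = nn.Linear(hidden, hidden)
        self.ff = ged_head                    # shared classification head FF
        self.gate = nn.Linear(2 * hidden, 1)  # W, b in Eq. 11

    @staticmethod
    def _attend(q, k, v):
        scores = q @ k.transpose(1, 2) / math.sqrt(q.size(-1))
        return torch.softmax(scores, dim=-1) @ v

    def forward(self, dec_states, enc_states):
        Q = self.q_proj(dec_states)           # from decoder self-attention
        K = self.k_proj(enc_states)           # from encoder output
        V = self.v_proj(enc_states)
        e_ca1 = self._attend(Q, K, V)                     # Eq. 7
        e_ca2 = self._attend(self.ff(Q), self.ff(K), V)   # Eqs. 8-10
        lam = torch.sigmoid(self.gate(torch.cat([e_ca1, e_ca2], dim=-1)))
        return lam * e_ca1 + (1 - lam) * e_ca2            # Eq. 12

def let_loss(err_loss, sen_loss, alpha=0.2):
    """Joint objective of Eq. 13, with alpha = 0.2 as in Section 4.3."""
    return alpha * err_loss + (1 - alpha) * sen_loss
```

Because the gate λ is computed from the two attention outputs themselves, the mixture can shift per position between semantic alignment (CA1) and error-type alignment (CA2); this dynamic weighting is compared against a static β in Section 5.3.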
## 4.4 Results Analysis We report the experimental results of various methods in Table 1. The experimental results demonstrate the effectiveness of LET. Overall performance As shown in Table 1, the proposed LET network outperforms most previous state-of-the-art methods on two mainstream GEC benchmarks: BEA-test and CoNLL-2014 under constrained and unconstrained settings. Constrained setting Compared with the previous SOTA seq2seq model Yuan et al. (2021), LET improve F0.5 score with 1% and 1.2% on BEA-test and CoNLL-2014, respectively. Compared with other seq2seq models, our work has achieved more obvious improvements based on the same experimental data. Unconstrained setting On BEA-test, compared with Yuan et al. (2021), LET is at least 1.3% better *https://github.com/huggingface/transformers N.C Algorithm P R F0.5 - Baseline 55.1 44.3 52.5 2 + GID 55.8 44.6 53.1 + GCA 57.4 42.7 53.7 + GID & GCA 58.1 44.7 54.8 4 + GID 56.1 44.6 53.4 + GCA 58.2 41.6 54.0 + GID & GCA 58.8 45.0 54.2 25 + GID 55.8 44.3 53.0 + GCA 58.2 42.9 54.3 + GID & GCA 58.7 45.1 55.4 55 + GID 55.2 44.0 52.5 + GCA 58.7 42.9 54.7 + GID & GCA **58.9 45.2 55.5** Table 3: Results based on ablated modules and different number of error types. N.C: Number of Classes Table 4: Results of two weighting ways of combining CA1 and CA2. than the state-of-the-art model on three key metrics. On CoNLL-2014, LET also achieves significant improvements. Notably, compared to precision (+1.3%/+0.4%), our method improves the recall score more (+1.4%/+1.3%). Under the Constrained setting, there is also a similar data distribution. It shows that the model is better at recalling the correct editing operations under the combined effect of our multiple innovations. Compared with models without a GED module, our LET is less than Stahlberg and Kumar (2021). The possible reason is that they used more data to train the model. As shown in the last line of Table 1, Omelianchuk et al. (2020) outperforms all systems above. Due to more data, fine-grained labels, and multiple ensemble strategies, this sequence-tagging model has taken the first place in GEC for a long time. ## 5 Discussion We discuss many details of the model in more depth in this section. Unless stated otherwise, all experiments in this section are tested on the BEA-dev dataset under the constrained data setting. ## 5.1 Ablation Study | Variations | Prec | Rec | F0.5 | |-----------------|--------|-------|--------| | Baseline | 55.1 | 44.3 | 52.5 | | Static weights | 55.8 | 44.6 | 53.1 | | Dynamic weights | 58.2 | 41.6 | 54.0 | We explore the effect of each component in the whole LET (Leveraging Error Type) system. We compare the Bart-base as the Baseline (6 encoderlayers, 6 decoder-layers, 768 hidden-states, 16 attention-heads, and 139M parameters). As shown in Table 3, GID and GCA achieve higher values than the Baseline on three key metrics no matter how many error types exist. Moreover, the combination of them even obtains more improvement, demonstrating the effectiveness of the proposed two modules. ## 5.2 Results On Different Error Categories In this section, we explore the performance of LET on different error categories. We use the same pre-training and fine-tuning data splits for the baseline model but with no additional GED input for fine-tuning, which follows the standard encoderdecoder GEC training procedure. 
As shown in Table 3, the results demonstrate the efficacy of the multi-encoder GEC model: adding GED predictions as auxiliary input yields a consistent statistically significant improvement in performance over the baseline. Our best system uses the 55-class GED predictions, achieving 55.5 F0.5. The reason may be that the 55-class system represents the best compromise between label informativeness and model reliability. Unlike the optimal scheme of LET, GID achieves the best result (53.4 F0.5) in the setting of 4-class GED prediction. Notably, GID using the 2-class GED predictions (binary predictions) outperformed the same model using the 55-class GED predictions. This is because 2-class GED predictions are less informative but more reliable. After all, there are only two classes, while 25-class and 55-class predictions tend to be more informative but less reliable because of the increased difficulty in predicting sparser classes. This also shows that the GID model lacking the alignment of the error type is not good at using too subdivided error type guidance information, and it can also be inferred that the GID model does not make full use of the error type information effectively. Notably, similar to LET, GCA achieves the best result (54.7 F0.5) in the setting of 55-class GED prediction. Meanwhile, the experiment shows that with the increase in the number of error type categories, the GCA model's effect gradually improves. | Algorithm | Variation | Precision | Recall | F0.5 | |--------------------------|-------------|-------------|----------|--------| | Baseline | - | 55.1 | 44.3 | 52.5 | | + Guided Cross Attention | - | 58.7 | 42.9 | 54.7 | | ablation study A | 54.9 | 44.4 | 52.4 | | | ablation study B | 55.2 | 44.3 | 54.6 | | ## 5.3 Effects Of Dynamic Weight Setting As described in Section 3.5, the guided crossattention module contains two sub-modules: Cross Attention 1 (CA1) and Cross Attention 2 (CA2). First, we explore the information fusion method by conducting a controlled experiment. Under the static weights setting: $$E^{G C A}=\beta E^{C A1}+(1-\beta)E^{C A2}$$ CA2(14) After grid search, the best β is set to 0.37. It can be seen from Table 4 that the guided crossattention module with dynamic weights is significantly better than it with static weights. Therefore, we conjecture that the model needs to adaptively change the information fusion weights of the two attention modules according to the input sentence to satisfy tasks of different difficulty. ## 5.4 Discussion On The Number Of Parameters Compared with the baseline method, our method introduces additional parameters, mainly from the newly added cross-attention module GCA. So, does the improvement in model performance benefit from the increase in the number of parameters? In order to explore this question, we conducted a related comparative experiment. We set up two ablation studies in Table 5. As shown in this table, comparing the results of GCAablation studies 1 & 2 shows that the increase in the number of parameters does improve the model effect under the current conditions, but the improvement here is negligible compared to the improvement brought by the GCA module. Experimental results show that our proposed method is necessary to align at the error type level. ## 6 Conclusion Grammar error correction is significant for many downstream natural language understanding tasks. 
In this paper, we propose an end-to-end framework termed LET, which effectively leverages the error type information generated by the GED task to guide the GEC task. Our work solves two critical problems in the previous work. Firstly, we have alleviated the problem of error propagation caused by hard-coded error types by introducing soft-encoded error types. Secondly, we have introduced the concept of error type alignment, which is more reasonable and adequate. We transfer the original semantic vectors into classification vectors to ensure that the two parts of the input of the proposed cross-attention module are both in the same semantic space. Experiments and ablation studies show that alignment leads to better results. Overall, LET provides a better sample for research in the GEC field and addresses some potential issues with previous technical solutions. ## Limitations By analyzing the error cases, we find that almost all the existing work (including our LET) cannot handle the disorder problem of words well, primarily when the error occurs far from the correct location. For example, there is a correct sentence: 'On my way to school today, I bought a very tasty apple.'. If the erroneous form is as follows: *'on my way to* school apple today, I bought a very tasty.', it is hard for the model to understand that the right thing to do is to put *apple* back at the end of the sentence. ## Acknowledgements This work was supported by the National Key R&D Program of China (2022YFB4701400/4701402), SZSTC Grant (JCYJ20190809172201639, WDZC20200820200655001), Shenzhen Key Laboratory (ZDSYS20210623092001004) and Beijing Key Lab of Networked Multimedia. ## References Chris Brockett, Bill Dolan, and Michael Gamon. 2006. Correcting esl errors using phrasal smt techniques. In *21st International Conference on Computational* Linguistics and 44th Annual Meeting of the ACL, Sydney, Australia. Christopher Bryant. 2019. Automatic annotation of error types for grammatical error correction. Technical Report UCAM-CL-TR-938, University of Cambridge, Computer Laboratory. Christopher Bryant, Mariano Felice, Øistein E. Andersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In *Proceedings* of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52–75, Florence, Italy. Association for Computational Linguistics. Ru-Yng Chang, Chung-Hsien Wu, and Philips Kokoh Prasetyo. 2012. Error diagnosis of chinese sentences using inductive learning algorithm and decomposition-based testing mechanism. ACM Transactions on Asian Language Information Processing (TALIP), 11(1):1–24. Mengyun Chen, Tao Ge, Xingxing Zhang, Furu Wei, and Ming Zhou. 2020. Improving the efficiency of grammatical error correction with erroneous span detection and correction. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7162–7169, Online. Association for Computational Linguistics. Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In *Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 568–572, Montréal, Canada. Association for Computational Linguistics. Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner English: The NUS corpus of learner English. 
In *Proceedings of the Eighth Workshop on Innovative Use* of NLP for Building Educational Applications, pages 22–31, Atlanta, Georgia. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Mariano Felice, Zheng Yuan, Øistein E. Andersen, Helen Yannakoudakis, and Ekaterina Kochmar. 2014. Grammatical error correction using hybrid systems and type filtering. In *Proceedings of the Eighteenth* Conference on Computational Natural Language Learning: Shared Task, pages 15–24, Baltimore, Maryland. Association for Computational Linguistics. Ruiji Fu, Zhengqi Pei, Jiefu Gong, Wei Song, Dechuan Teng, Wanxiang Che, Shijin Wang, Guoping Hu, and Ting Liu. 2018. Chinese grammatical error diagnosis using statistical and prior knowledge driven features with probabilistic ensemble enhancement. In *Proceedings of the 5th Workshop on Natural Language* Processing Techniques for Educational Applications, pages 52–59, Melbourne, Australia. Association for Computational Linguistics. Tao Ge, Furu Wei, and Ming Zhou. 2018. Fluency boost learning and inference for neural grammatical error correction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1055–1065, Melbourne, Australia. Association for Computational Linguistics. Roman Grundkiewicz, Marcin Junczys-Dowmunt, and Kenneth Heafield. 2019. Neural grammatical error correction systems with unsupervised pre-training on synthetic data. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 252–263, Florence, Italy. Association for Computational Linguistics. Jianshu Ji, Qinlong Wang, Kristina Toutanova, Yongen Gong, Steven Truong, and Jianfeng Gao. 2017. A nested attention neural hybrid model for grammatical error correction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 753–762, Vancouver, Canada. Association for Computational Linguistics. Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, and Kenneth Heafield. 2018. Approaching neural grammatical error correction as a low-resource machine translation task. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 595–606, New Orleans, Louisiana. Association for Computational Linguistics. Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, and Kentaro Inui. 2020. Encoder-decoder models can benefit from pre-trained masked language models in grammatical error correction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4248–4254, Online. Association for Computational Linguistics. Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, and Kentaro Inui. 2019. An empirical study of incorporating pseudo data into grammatical error correction. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1236–1242, Hong Kong, China. Association for Computational Linguistics. Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, and Xiaodong He. 2018. Stacked cross attention for image-text matching. In Computer Vision - ECCV 2018, pages 212–228, Cham. Springer International Publishing. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Piji Li and Shuming Shi. 2021. Tail-to-tail nonautoregressive sequence prediction for Chinese grammatical error correction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4973–4984, Online. Association for Computational Linguistics. Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. 2020. Oscar: Object-semantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision. Jared Lichtarge, Chris Alberti, and Shankar Kumar. 2020. Data weighted training strategies for grammatical error correction. *Transactions of the Association* for Computational Linguistics, 8:634–646. Tomoya Mizumoto, Mamoru Komachi, Masaaki Nagata, and Yuji Matsumoto. 2011. Mining revision log of language learning SNS for automated Japanese error correction of second language learners. In *Proceedings of 5th International Joint Conference on Natural* Language Processing, pages 147–155, Chiang Mai, Thailand. Asian Federation of Natural Language Processing. Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1–14, Baltimore, Maryland. Association for Computational Linguistics. Diane Nicholls. 2003. The cambridge learner corpus: Error coding and analysis for lexicography and elt. In *Proceedings of the Corpus Linguistics 2003 conference*, volume 16, pages 572–581. Cambridge University Press Cambridge. Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. 2020. Gector–grammatical error correction: tag, not rewrite. arXiv preprint arXiv:2005.12592. Vipul Raheja and Dimitris Alikaniotis. 2020. Adversarial Grammatical Error Correction. In *Findings* of the Association for Computational Linguistics: EMNLP 2020, pages 3075–3087, Online. Association for Computational Linguistics. Marek Rei and Helen Yannakoudakis. 2016. Compositional sequence labeling models for error detection in learner writing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1181–1191, Berlin, Germany. Association for Computational Linguistics. Sascha Rothe, Jonathan Mallinson, Eric Malmi, Sebastian Krause, and Aliaksei Severyn. 2021. A simple recipe for multilingual grammatical error correction. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 702–707, Online. Association for Computational Linguistics. Felix Stahlberg and Shankar Kumar. 2021. Synthetic data generation for grammatical error correction with tagged corruption models. In Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Applications, pages 37–47, Online. Association for Computational Linguistics. Joel R. Tetreault and Martin Chodorow. 2008. The ups and downs of preposition error detection in ESL writing. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 865–872, Manchester, UK. Coling 2008 Organizing Committee. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30. Zhaohong Wan, Xiaojun Wan, and Wenguang Wang. 2020. Improving grammatical error correction with data augmentation by editing latent representation. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2202–2212, Barcelona, Spain (Online). International Committee on Computational Linguistics. Bo Wang, Kaoru Hirota, Chang Liu, Yaping Dai, and Zhiyang Jia. 2020. An approach to nmt re-ranking using sequence-labeling for grammatical error correction. *Journal of Advanced Computational Intelligence and Intelligent Informatics*, 24(4):557–567. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics. Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In *Proceedings of the 49th* Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 180–189, Portland, Oregon, USA. Association for Computational Linguistics. Zheng Yuan and Ted Briscoe. 2016. Grammatical error correction using neural machine translation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 380–386, San Diego, California. Association for Computational Linguistics. Zheng Yuan and Christopher Bryant. 2021. Documentlevel grammatical error correction. In *Proceedings* of the 16th Workshop on Innovative Use of NLP for Building Educational Applications, pages 75–84, Online. Association for Computational Linguistics. Zheng Yuan and Mariano Felice. 2013a. 
Constrained grammatical error correction using statistical machine translation. In *Proceedings of the Seventeenth* Conference on Computational Natural Language Learning: Shared Task, pages 52–61, Sofia, Bulgaria. Association for Computational Linguistics. Zheng Yuan and Mariano Felice. 2013b. Constrained grammatical error correction using statistical machine translation. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task, pages 52–61. Zheng Yuan, Felix Stahlberg, Marek Rei, Bill Byrne, and Helen Yannakoudakis. 2019. Neural and FSTbased approaches to grammatical error correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 228–239, Florence, Italy. Association for Computational Linguistics. Zheng Yuan, Shiva Taslimipoor, Christopher Davis, and Christopher Bryant. 2021. Multi-class grammatical error detection for correction: A tale of two systems. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 8722–8736. Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data. In *Proceedings of the* 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 156–165, Minneapolis, Minnesota. Association for Computational Linguistics. Liu Zhuang, Lin Wayne, Shi Ya, and Zhao Jun. 2021. A robustly optimized BERT pre-training approach with post-training. In Proceedings of the 20th Chinese National Conference on Computational Linguistics, pages 1218–1227, Huhhot, China. Chinese Information Processing Society of China. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? 
Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** 4 Experiments ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4 Experiments The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4.3 Experiment Setup ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4.4 Results analysis ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4 Experiments D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
reid-artetxe-2023-role
On the Role of Parallel Data in Cross-lingual Transfer Learning
https://aclanthology.org/2023.findings-acl.372
While prior work has established that the use of parallel data is conducive for cross-lingual learning, it is unclear if the improvements come from the data itself, or if it is the modeling of parallel interactions that matters. Exploring this, we examine the usage of unsupervised machine translation to generate synthetic parallel data, and compare it to supervised machine translation and gold parallel data. We find that even model generated parallel data can be useful for downstream tasks, in both a general setting (continued pretraining) as well as the task-specific setting (translate-train), although our best results are still obtained using real parallel data. Our findings suggest that existing multilingual models do not exploit the full potential of monolingual data, and prompt the community to reconsider the traditional categorization of cross-lingual learning approaches.
# On The Role Of Parallel Data In Cross-Lingual Transfer Learning Machel Reid∗ Google DeepMind [email protected] ## Abstract While prior work has established that the use of parallel data is conducive for cross-lingual learning, it is unclear if the improvements come from the data itself, or it is the modeling of parallel interactions that matters. Exploring this, we examine the usage of unsupervised machine translation to generate synthetic parallel data, and compare it to supervised machine translation and gold parallel data. We find that even model generated parallel data can be useful for downstream tasks, in both a general setting (continued pretraining) as well as the task-specific setting (translatetrain), although our best results are still obtained using real parallel data. Our findings suggest that existing multilingual models do not exploit the full potential of monolingual data, and prompt the community to reconsider the traditional categorization of cross-lingual learning approaches. ## 1 Introduction Multilingual models have been shown to generalize across languages in a zero-shot fashion (Pires et al., 2019; Conneau and Lample, 2019; Conneau et al., 2020; Kale et al., 2021). These models are usually pretrained on concatenated monolingual corpora in multiple languages using some form of language modeling or denoising objective. The models are then finetuned using labeled downstream data in the source language (typically English), which makes them capable of generalizing to the target language thanks to the aligned representations learned at pretraining. While this paradigm does not require any data in the target language other than the monolingual pretraining corpus, prior work has reported improved results by incorporating **parallel data** into the pipeline, either at pretraining or finetuning time. ∗Work done while at the University of Tokyo †Work done while at Meta AI Mikel Artetxe† Reka AI [email protected] ![0_image_0.png](0_image_0.png) ![0_image_1.png](0_image_1.png) (b) All possibilities to use different types of pretraining data Figure 1: **Cross-lingual transfer settings.** Monolingual and parallel data can be used at different stages of the pipeline, either directly or indirectly through MT (b), but the traditional categorization falls short at capturing them (a). During pretraining, parallel data has been incorporated through an auxiliary objective, such as Translation Language Modeling (TLM) in XLM (Conneau and Lample, 2019) or bitext denoising in PARADISE (Reid and Artetxe, 2022). Regarding finetuning, it is common to use Machine Translation (MT)—which is trained on parallel data under the hood—to translate the downstream training data into the target language(s) (Conneau et al., 2020), which can be seen as a form of data augmentation. Nevertheless, it is still unclear why parallel data is beneficial for cross-lingual transfer learning. Is the **data itself** that matters, given the additional information that it contains? Or is it the explicit modeling of parallel interactions that is important? To answer this question, we systematically compare the use of parallel data from different sources: 5999 ground truth parallel data, or synthetic parallel data generated by either supervised MT, unsupervised MT, or word-by-word translation. Most notably, our unsupervised MT variant relies on the exact same monolingual corpus used to pretrain the model, so any potential improvement can only come from the modeling side. 
Our experiments on Natural Language Inference (NLI), Question Answering (QA) and Named Entity Recognition (NER) show that the explicit modeling of parallel interactions is indeed beneficial, demonstrating that existing pretraining and finetuning methods do not exploit the full potential of monolingual data. However, our best results are obtained using real parallel data—either directly or indirectly through supervised MT—showing that there is also some inherent value on it. In the light of these results, we argue that the traditional categorization of cross-lingual transfer approaches into zero-shot, *translate-train* and translate-test (Figure 1a) falls short at capturing the required detail for a fair comparison across different approaches. Given this, we encourage further research on understanding what the contribution of monolingual and parallel data is, and how to best leverage them (directly or indirectly through MT, and at different parts of the pipeline), which requires thinking beyond the boundaries of the existing categorization (Figure 1b). ## 2 Experimental Setup 2.1 Tasks We run experiments on 3 tasks: NLI on XNLI (Conneau et al., 2018), extractive QA on XQuAD (Artetxe et al., 2020), and NER on WikiANN (Pan et al., 2017). In all cases, we use the original training set in English, and evaluate transfer performance in other languages. Due to compute constraints, we restrict evaluation to the following set of languages: English (en), Arabic (ar), German (de), Hindi (hi), French (fr), Swahili (sw), Russian (ru), Thai (th) and Vietnamese (vi). Our finetuning incorporation experiments in §3.2 involve machine translating the training data into the target languages. For XNLI, we just translate the premise and hypothesis and leave the label unchanged. For XQuAD and WikiANN, which have token-level labels (as opposed to sequence-level), we translate the input text and project the answer spans by using the awesome (Dou and Neubig, 2021) word aligner , taking the aligned spans as the ## Target Labels. 2.2 Model We use XLM-R base (Conneau et al., 2020) for all of our experiments, which was trained through Masked Language Modeling (MLM) on CC-100 (a monolingual corpus covering 100 languages). For finetuning, we experiment with learning rates of 1e-5, 5e-5, and 1e-4 using the Adam optimizer. We train for up to 10 epochs and choose the checkpoint with the best validation performance averaged across the languages in consideration. ## 2.3 Parallel Data Sources We compare the following sources of parallel data in our experiments: Gold. Ground-truth parallel data generated by humans. We use the same parallel data as Reid and Artetxe (2022), which combines data from IWSLT, WMT, and other parallel corpora. Supervised MT. Synthetic parallel data generated through a conventional MT system. The MT system is supervised, so this approach is also leveraging ground-truth parallel data indirectly. We use the 420M M2M-100 model (Fan et al., 2020). Unsupervised MT. Synthetic parallel data generated through an unsupervised MT system (Artetxe et al., 2018; Conneau and Lample, 2019). The MT system is trained on a subset of the monolingual data used for pretraining, so this approach does not use any additional data neither directly nor indirectly, other than the synthetically generated one. 
More concretely, we use XLM-R base to initialize our unsupervised MT model, and finetune it in 16 directions (en↔{ar,de,hi,fr,sw,ru,th,vi}) using the iterative denoising autoencoding and backtranslation approach proposed by Conneau and Lample (2019).¹ We train for a total of 750k iterations using a batch size of 128k tokens. We use 200MB of text from CC100 for each language, amounting to a total of 1.8GB of training data.

¹ https://github.com/facebookresearch/XLM

Dictionary. Synthetic parallel data generated through random word replacement with a dictionary. We use the same dictionaries as Reid and Artetxe (2022), which combine dictionaries from MUSE (Lample et al., 2018) and those extracted using word aligners (Östling and Tiedemann, 2016). Following Reid and Artetxe (2022), we replace words that are included in our dictionary with a probability of 0.4.

Table 1: **Pretraining incorporation results.** We compare the original XLM-R model (1) with three variants where we continue pretraining it on either synthetic (2, 3) or real (4) parallel data. All models are finetuned on English downstream data and zero-shot transferred to the target language.

XNLI (acc)
| Model | en | ar | de | hi | fr | sw | avg |
|---|---|---|---|---|---|---|---|
| 1) XLM-R | 83.9 | 71.9 | 75.2 | 69.1 | 77.4 | 62.2 | 73.3 |
| 2) + unsup MT | 83.4 | 72.4 | 77.1 | 72.2 | 78.2 | 67.8 | 75.2 |
| 3) + sup MT | 83.2 | 74.4 | 77.5 | 72.7 | 78.3 | 70.1 | 76.0 |
| 4) + gold | 84.0 | 75.2 | 77.7 | 72.4 | 78.6 | 70.4 | 76.4 |

XQuAD (F1)
| Model | en | ar | hi | ru | th | vi | avg |
|---|---|---|---|---|---|---|---|
| 1) XLM-R | 86.5 | 68.6 | 76.7 | 80.1 | 74.2 | 79.1 | 77.5 |
| 2) + unsup MT | 86.7 | 70.2 | 80.7 | 81.5 | 75.8 | 79.6 | 79.0 |
| 3) + sup MT | 86.6 | 73.5 | 81.1 | 83.0 | 77.4 | 81.9 | 80.7 |
| 4) + gold | 86.3 | 72.3 | 82.3 | 82.7 | 78.2 | 81.9 | 80.6 |

WikiANN (F1)
| Model | en | ar | fr | hi | ru | th | vi | sw | avg |
|---|---|---|---|---|---|---|---|---|---|
| 1) XLM-R | 81.3 | 53.0 | 80.5 | 73.0 | 69.1 | 1.3 | 79.4 | 70.5 | 63.5 |
| 2) + unsup MT | 81.3 | 54.1 | 82.1 | 74.9 | 71.1 | 3.8 | 80.7 | 71.7 | 64.9 |
| 3) + sup MT | 81.6 | 57.0 | 82.3 | 75.4 | 71.6 | 5.8 | 81.6 | 73.4 | 66.1 |
| 4) + gold | 82.4 | 57.3 | 82.4 | 75.6 | 71.8 | 4.6 | 81.5 | 73.7 | 66.2 |

Table 2: **Finetuning incorporation results.** We compare finetuning XLM-R on the original English data (1), and machine translated data through either word-by-word replacement (2), unsupervised MT (3) or supervised MT (4).

XNLI (acc)
| Model | en | ar | de | hi | fr | sw | avg |
|---|---|---|---|---|---|---|---|
| 1) XLM-R | 83.9 | 71.9 | 75.2 | 69.1 | 77.4 | 62.2 | 73.3 |
| 2) + dict | 83.7 | 72.6 | 77.6 | 70.7 | 78.9 | 65.6 | 74.9 |
| 3) + unsup MT | 84.0 | 73.2 | 77.1 | 71.6 | 78.6 | 67.9 | 75.4 |
| 4) + sup MT | 84.2 | 74.6 | 78.2 | 73.1 | 79.4 | 70.6 | 76.7 |

XQuAD (F1)
| Model | en | ar | hi | ru | th | vi | avg |
|---|---|---|---|---|---|---|---|
| 1) XLM-R | 86.5 | 68.6 | 76.7 | 80.1 | 74.2 | 79.1 | 77.5 |
| 2) + dict | - | - | - | - | - | - | - |
| 3) + unsup MT | 86.0 | 70.4 | 80.3 | 81.0 | 76.3 | 79.8 | 78.9 |
| 4) + sup MT | 86.3 | 73.2 | 81.6 | 83.4 | 77.2 | 81.4 | 80.5 |

WikiANN (F1)
| Model | en | ar | fr | hi | ru | th | vi | sw | avg |
|---|---|---|---|---|---|---|---|---|---|
| 1) XLM-R | 81.3 | 53.0 | 80.5 | 73.0 | 69.1 | 1.3 | 79.4 | 70.5 | 63.5 |
| 2) + dict | - | - | - | - | - | - | - | - | - |
| 3) + unsup MT | 80.6 | 56.0 | 82.7 | 75.7 | 71.8 | 3.7 | 80.9 | 72.3 | 65.5 |
| 4) + sup MT | 82.2 | 57.4 | 83.1 | 76.4 | 72.4 | 5.2 | 82.1 | 73.4 | 66.6 |
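As a concrete illustration of the supervised-MT data source described in §2.3, the sketch below shows one way synthetic parallel sentences could be produced from monolingual text with the publicly released M2M-100 checkpoint; the example sentences, language pair, and batching are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch (not the authors' code): translating monolingual English sentences
# with M2M-100 to build synthetic (source, translation) pairs for TLM-style pretraining.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_name = "facebook/m2m100_418M"  # the 420M-parameter model referred to in Section 2.3
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

def translate(sentences, src_lang="en", tgt_lang="sw", max_length=128):
    """Return machine translations of `sentences` from src_lang to tgt_lang."""
    tokenizer.src_lang = src_lang
    encoded = tokenizer(sentences, return_tensors="pt", padding=True,
                        truncation=True, max_length=max_length)
    generated = model.generate(**encoded,
                               forced_bos_token_id=tokenizer.get_lang_id(tgt_lang),
                               max_length=max_length)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

# Each (source, translation) pair becomes one synthetic parallel example.
english_batch = ["The cat sat on the mat.", "Parallel data helps cross-lingual transfer."]
swahili_batch = translate(english_batch, "en", "sw")
synthetic_pairs = list(zip(english_batch, swahili_batch))
print(synthetic_pairs)
```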
## 3 Experiments And Results

## 3.1 Pretraining Incorporation

In these experiments, we incorporate parallel data into the pretraining process. We take XLM-R as our starting point, which was trained on monolingual data through MLM, and continue pretraining it on both MLM and TLM for 70k steps, using a batch size of 64k tokens. We use a learning rate of 5e-5 with a linear warmup and cosine decay schedule. We use the MLM objective 70% of the time, and the TLM objective 30% of the time. The latter applies the same masking objective over concatenated parallel sentences, and we compare different sources of parallel data as detailed in §2.3. For parallel data generated through MT, we translate a random subset of CC100 (keeping consistent with the data used in pretraining). The model is then finetuned on the downstream tasks using the original training data in English, and zero-shot transferred to the target languages.

We report our results in Table 1. We find that all variants incorporating parallel data outperform the original XLM-R model,² and the improvements are consistent across all target languages. However, different from Reid and Artetxe (2022), we do not find any clear improvements on English. Regarding the source of parallel data, we find that supervised MT performs at par with gold data, even for less-resourced languages for which MT tends to suffer. Unsupervised MT lags behind them, but consistently outperforms the baseline. These results suggest that the mere facilitation of parallel interaction is helpful even when not using any new data, but incorporating ground-truth parallel data brings further improvements. However, the way in which parallel data is incorporated—either directly or through MT—does not have any clear impact, as evidenced by the similar performance of supervised MT and gold.

² The skeptical reader might attribute this improvement to the additional training steps we perform, irrespective of the use of parallel data. However, we find strong evidence that the improvements are brought by the use of parallel data given that (i) XLM-R was trained until convergence using a huge amount of compute, and our continued training represents an insignificant fraction on top (96 GPU days, compared to 13k GPU days, or a relative 0.7% further), and (ii) we get improvements in all target languages but not in English, suggesting that the additional steps improve the cross-lingual capabilities of the model but not its general quality.
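To make the MLM/TLM mixing described in §3.1 above more tangible, here is a minimal sketch of how a TLM example might be constructed by concatenating a parallel sentence pair and masking tokens on both sides; the flat 15% masking (no 80/10/10 split) and the tokenizer choice are simplifying assumptions for illustration rather than the exact recipe used in the paper.

```python
# Minimal sketch: building a TLM-style training example over a parallel sentence pair.
import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def make_tlm_example(src_sentence, tgt_sentence, mask_prob=0.15):
    """Concatenate a parallel pair and mask tokens on both sides, so the model can
    use the translation as additional context when filling in the masks."""
    enc = tokenizer(src_sentence, tgt_sentence, truncation=True, max_length=256)
    input_ids = enc["input_ids"]
    labels = [-100] * len(input_ids)  # -100 = ignored by the cross-entropy loss
    special = set(tokenizer.all_special_ids)
    for i, tok in enumerate(input_ids):
        if tok not in special and random.random() < mask_prob:
            labels[i] = tok
            input_ids[i] = tokenizer.mask_token_id
    return {"input_ids": input_ids, "attention_mask": enc["attention_mask"], "labels": labels}

def sample_objective(mlm_share=0.7):
    """Pick MLM with probability 0.7 and TLM with probability 0.3, as in Section 3.1."""
    return "mlm" if random.random() < mlm_share else "tlm"

example = make_tlm_example("The weather is nice today.", "Das Wetter ist heute schön.")
print(sample_objective(), len(example["input_ids"]))
```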
## 3.2 Finetuning Incorporation

In these experiments, we incorporate parallel data into the finetuning process. We translate the downstream training data in English into the rest of the languages, and finetune XLM-R on the combined data in all languages. This is commonly referred to as *translate-train-all* in the literature.

We report our results in Table 2. As with the pretraining incorporation, we find that incorporating parallel data outperforms the baseline in all tasks and target languages for all data sources that we explore. Supervised MT obtains the best results, followed by unsupervised MT and word-by-word translation with dictionaries. Similar to the pretraining incorporation results, this suggests that synthetic parallel data can bring improvements even when generated exclusively through monolingual data, but using real parallel data brings further improvements. Finally, we find that even simplistic ways to incorporate parallel signals can bring improvements, as evidenced by the dictionary replacement results.

## 3.3 Discussion

While prior work has reported strong results from incorporating parallel data for cross-lingual transfer learning, our results show that this improvement can partly—but not exclusively—be attributed to the explicit use of a parallel training signal, which can also be achieved through unsupervised MT without the need for any real parallel data. In fact, we find that the facilitation of parallel interactions is more important than the use of real parallel data in all tasks but XQuAD, where the latter has a larger impact. Despite the popularity of multilingual pretrained models, which predominantly rely on monolingual data both for pretraining and finetuning, this calls into question the extent to which existing approaches are able to exploit the full potential of such monolingual data.

In addition, it is striking that we obtain similar results for both pretraining and finetuning incorporation, as well as supervised MT and gold standard parallel data. While further evidence is necessary to draw a more definitive conclusion, this suggests that parallel data brings similar improvements regardless of when (pretraining vs. finetuning) and how (directly vs. indirectly through MT) it is incorporated.

## 4 Reconsidering The Categorization Of Cross-Lingual Learning Approaches

As illustrated in Figure 1a, approaches to cross-lingual learning have traditionally been classified into 3 categories: *zero-shot* (finetune a multilingual model on English and zero-shot transfer into the target language), *translate-train* (translate the English training data into the target languages through MT and finetune a multilingual model), and *translate-test* (translate the test set into English and run inference using a monolingual model). This distinction is primarily based on which stage of the pipeline MT is incorporated into. While relevant from a practical perspective, we believe that, if taken in a rigid manner, such a framework can hinder addressing the more fundamental question of what the contribution of each data source is, and how to best leverage each of them.
More concretely, as shown in Figure 1b, there are different data **types** that one can use (monolingual source corpora, monolingual target corpora and parallel corpora, in addition to downstream data), which can be incorporated at different **stages** of the pipeline (pretraining, finetuning, testing) and via different **procedures** (directly or indirectly through MT). We argue that research in cross-lingual learning should aim to understand how the variants in each dimension as well the interactions between them impact downstream performance, which can require thinking beyond the boundaries of the 3 conventional categories. For instance, our variant using unsupervised MT to translate the downstream training data would fall within the definition of translate-train. However, this approach is more comparable to *zero-shot* in that it only uses monolingual data, and it would be unfair to compare it to conventional *translate-train* systems that rely on parallel data to train the MT system. ## 5 Related Work Prior work has explored the extent to which monolingual pretraining relies on knowledge transfer from unlabeled corpora by using synthetic data (Chiang and Lee, 2020; Krishna et al., 2021) or downstream data (Krishna et al., 2022) instead, and similar ideas have also been explored in computer vision (Kataoka et al., 2020; Asano et al., 2020). However, to the best of our knowledge, we are first to examine if cross-lingual learning also relies on knowledge transfer from parallel data. Our use of synthetic parallel corpora is also connected with back-translation, which is widely used in MT (Sennrich et al., 2016). However, conventional MT systems are trained on parallel data, and backtranslation is usually motivated as a way to leverage additional (monolingual) data. In contrast, our unsupervised MT variant does not use any additional data compared to regular pretraining. ## 6 Conclusions In this work, we show that even model-generated parallel data can be useful for cross-lingual learning—greatly expanding the possibilities for multilingual models to improve their performance by taking advantage of their own machine translation capabilities. Given this, we advocate for investigating the optimal way to leverage monolingual and/or parallel data for cross-lingual learning, which might require thinking beyond the boundaries of the conventional zero-shot, *translate-train* and *translate-test* categories. ## 7 Limitations In this work, we only consider the pre-train then fine-tune paradigm which assumes that model weights are tuned for adaptation to specific tasks. Future work, once more capable multilingual LLMs are released, may also consider the few shot, and in-context learning-based setups to accommodate for more recent approaches towards adaptation in NLP. Future work may also consider setups more relevant to different, more diverse tasks (e.g. including webtext). ## References Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In *International Conference on* Learning Representations. Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623–4637, Online. Association for Computational Linguistics. Yuki M. Asano, Christian Rupprecht, and Andrea Vedaldi. 2020. A critical analysis of selfsupervision, or what we can learn from a single image. 
In *International Conference on Learning Representations*. Cheng-Han Chiang and Hung-yi Lee. 2020. Pretraining a language model without human language. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440– 8451, Online. Association for Computational Linguistics. Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In *Advances* in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 7057–7067. Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In *Proceedings of the 2018 Conference on Empirical Methods* in Natural Language Processing, pages 2475–2485, Brussels, Belgium. Association for Computational Linguistics. Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Conference of the European Chapter of the Association for Computational Linguistics (EACL). Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin. 2020. Beyond english-centric multilingual machine translation. Mihir Kale, Aditya Siddhant, Noah Constant, Melvin Johnson, Rami Al-Rfou, and Linting Xue. 2021. nmt5 - is parallel data still relevant for pre-training massively multilingual language models? arXiv preprint arXiv:2106.02171. Hirokatsu Kataoka, Kazushige Okayasu, Asato Matsumoto, Eisuke Yamagata, Ryosuke Yamada, Nakamasa Inoue, Akio Nakamura, and Yutaka Satoh. 2020. Pre-training without natural images. In *Proceedings of the Asian Conference on Computer Vision*. Kundan Krishna, Jeffrey Bigham, and Zachary C. Lipton. 2021. Does pretraining for summarization require knowledge transfer? In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 3178–3189, Punta Cana, Dominican Republic. Association for Computational Linguistics. Kundan Krishna, Saurabh Garg, Jeffrey P. Bigham, and Zachary C. Lipton. 2022. Downstream datasets make surprisingly good pretraining corpora. Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In *6th International Conference on Learning Representations,* ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Robert Östling and Jörg Tiedemann. 2016. Efficient word alignment with Markov Chain Monte Carlo. *Prague Bulletin of Mathematical Linguistics*, 106:125–146. Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Crosslingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946–1958, Vancouver, Canada. Association for Computational Linguistics. Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? 
In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 4996– 5001, Florence, Italy. Association for Computational Linguistics. Machel Reid and Mikel Artetxe. 2022. PARADISE: Exploiting parallel data for multilingual sequenceto-sequence pretraining. In *Proceedings of the* 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 800–810, Seattle, United States. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In *Proceedings of the* 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 7 ✗ A2. Did you discuss any potential risks of your work? Our work is primarily an analysis and does not entail any clear risk ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✓ **Did You Run Computational Experiments?** 3 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 3 C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Not applicable. 
Left blank. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 3 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
guo-etal-2023-comave
{C}o{M}ave: Contrastive Pre-training with Multi-scale Masking for Attribute Value Extraction
https://aclanthology.org/2023.findings-acl.373
Attribute Value Extraction (AVE) aims to automatically obtain attribute value pairs from product descriptions to aid e-commerce. Despite the progressive performance of existing approaches in e-commerce platforms, they still suffer from two challenges: 1) difficulty in identifying values at different scales simultaneously; 2) easy confusion by some highly similar fine-grained attributes. This paper proposes a pre-training technique for AVE to address these issues. In particular, we first improve the conventional token-level masking strategy, guiding the language model to understand multi-scale values by recovering spans at the phrase and sentence level. Second, we apply clustering to build a challenging negative set for each example and design a pre-training objective based on contrastive learning to force the model to discriminate similar attributes. Comprehensive experiments show that our solution provides a significant improvement over traditional pre-trained models in the AVE task, and achieves state-of-the-art on four benchmarks.
# Comave**: Contrastive Pre-Training With Multi-Scale Masking For Attribute** Value Extraction Xinnan Guo1, Wentao Deng2, Yongrui Chen1, Yang Li1**, Mengdi Zhou**2, Guilin Qi1, Tianxing Wu1, Yang Dong2, Liubin Wang2, **Yong Pan**2 1School of Computer Science and Engineering, Southeast University, Nanjing, China 2Ant Group, China [email protected], [email protected], [email protected], [email protected], [email protected], gqi, [email protected], [email protected] ## Abstract ![0_Image_0.Png](0_Image_0.Png) Attribute Value Extraction (AVE) aims to automatically obtain attribute value pairs from product descriptions to aid e-commerce. Despite the progressive performance of existing approaches in e-commerce platforms, they still suffer from two challenges: 1) difficulty in identifying values at different scales simultaneously; 2) easy confusion by some highly similar fine-grained attributes. This paper proposes a pre-training technique for AVE to address these issues. In particular, we first improve the conventional token-level masking strategy, guiding the language model to understand multi-scale values by recovering spans at the phrase and sentence level. Second, we apply clustering to build a challenging negative set for each example and design a pre-training objective based on contrastive learning to force the model to discriminate similar attributes. Comprehensive experiments show that our solution provides a significant improvement over traditional pretrained models in the AVE task, and achieves state-of-the-art on four benchmarks1. ## 1 Introduction Product features are crucial components of ecommerce platforms and are widely used in applications such as product recommendation (Cao et al., 2018), product retrieval (Magnani et al., 2019), and product question answering (Yih et al., 2015; Chen et al., 2021b). Each product feature typically consists of an *attribute* and one or more *values*, providing detailed product descriptions to help customers make purchasing decisions. In recent years, Attribute Value Extraction (AVE) (Xu et al., 2019; Zhu et al., 2020; Yan et al., 2021) methods have received increasing attention because they can automatically extract product features from a massive amount of unstructured product text, with impressive results in e-commerce platforms, such as Amazon, AliExpress, and JD. 1https://github.com/ygxw0909/CoMave Figure 1: An example of attribute value extraction in the insurance field. Wherein each insurance clause contains multi-scale values and fine-grained similar attributes. However, as e-commerce grows, some emerging domains, such as finance, insurance, and healthcare, bring two new challenges: a) **Multi-scale** values. Unlike normal products (e.g., clothing) with only short values (e.g., color: red), insurance products can have a value of a longer phrase or even multiple sentences. For example, the value of attribute *renewal rule* in Figure 1 contains more than 25 words (in green), rendering it impractical to retrieve them using related techniques such as Name Entity Recognition (NER) (Li et al., 2020; Yang et al., 2021). b) **Fine-grained divisions of** attributes. Compared with the coarse division of attributes in traditional e-commerce (e.g., *color*, size, and *material*), the division in insurance products is more refined, resulting in different attributes often having similar types. 
For instance, in the insurance clauses in Figure 1, maximum insurance age and *maximum renewal age* are both ages, and grace period and *hesitation period* are both periods. This fine-grained division makes the distinction between the different attributes subtle, thus increasing the difficulty to distinguish between them. Although recent pre-trained language models (PLMs) such as BERT (Devlin et al., 2019) and ROBERTA (Liu et al., 2019) achieve tremendous success on a spectrum of NLP tasks, including AVE, we argue that they are not sufficient for the challenges above. First, the conventional Masking Language Model (MLM) focuses on token-level recovery and does not consider multi-scale values. Second, there is still a gap between the unsupervised general objectives and the downstream AVE in terms of task form, such that the model cannot benefit from pre-training when retrieving attributes, let alone distinguishing between fine-grained similar attributes.. In this paper, we propose COMAVE, a novel PLM for AVE tasks. Relying on the large-scale corpus of triples ⟨text, attribute, *value*⟩ collected by distant supervision, we propose three pre-training objectives to address the challenges: a) **MultiScale Masked Language Model** (MSMLM). We extend token-level recovery to the phrase as well as the sentence level, using different masking mechanisms to force the model to perceive spans of various lengths, thus providing a basis for identifying values at different scales. b) **Contrastive** Attribute Retrieval (CAR). To adapt the model to the fine-grained division of attributes, we require it to retrieve the correct attributes from a challenging candidate set of semantically similar attributes. The candidates are mainly collected by clustering and a contrastive loss is designed to help the model perceive the subtle differences between them. c) Value Detection (VD). To close the gap between pre-training and downstream AVE and further enhance the model's perception of values extraction, we let the model recognize all values without considering the corresponding attribute. To fully evaluate our pre-trained COMAVE, we construct a new challenging benchmark INS. It consists of financial and medical texts from real scenarios and the corresponding manual annotations and is full of the two challenges we mentioned. Comprehensive experiments on four AVE datasets including INS demonstrate that, equipped with only a simple fine-tuning output layer, our COMAVE not only achieves state-of-the-art results on the hardest INS but also outperforms all the compared methods on existing benchmarks. Our contributions are summarized as follows: - We release an advanced pre-trained language model, namely COMAVE, for solving common challenges in AVE tasks. To the best of our knowledge, this is the first pre-training model aimed at AVE tasks. - We propose three novel pre-training objectives: Multi-Scale MLM allows the model to adapt to values span of different scales, CAR uses contrastive loss to force the model to perceive subtle differences in similar attributes, and VD bridges the gap between pre-training and downstream tasks. - Our method obtains state-of-the-art results on four AVE benchmarks, achieving significant improvements compared to existing PLMs. 
## 2 Preliminaries Given a natural language text T and a set of candidate attributes set A = {a1, a2, ..., a|A|}, where ai is an attribute, the goal of AVE is to extract a set Y = {(a∗1 , V1)*, ...,*(a∗n, Vn)}, where a∗ i ∈ A and Viis the set of values belonging to a∗ i . For simplicity, each value v ∈ Viis defined as a span of T . In general, T is collected from a large number of product-related documents or other data sources, and A is a collection of attributes for various products in different categories. Note that although formally AVE is similar to NER, the two still have significant differences, as we mentioned in section 1. First, the division of attributes is more fine-grained than the division of entity types (e.g., *location* and *person*). Second, the scale of entities is generally shorter, while that of values varies from token level to sentence level. Therefore, conventional NER methods are difficult to directly port to AVE tasks. ## 3 Methodology 3.1 Pre-Training Corpus Construction The pre-training procedure of COMAVE requires a large-scale corpus C = {(Ti, Ai, Yi)}M containing tens of millions of data. Manual annotation of such a large corpus is obviously impractical, thus we designed an automatic method to construct C. In brief, we first collect the triples ⟨subject, predicate, *object*⟩ from several existing open-domain knowledge graphs, including DBpedia (Lehmann et al., 2015), Yago (Tanon et al., 2020), WikiData (Vrandecic and Krötzsch, 2014), and OpenKG (Chen et al., 2021a). Then, we regard ![2_image_0.png](2_image_0.png) each *predicate* and *object* as the attribute ai and the value vi, respectively, thereby building a seed set {(ai, Vi)}N by aligning and merging the attributes. Finally, we use this set as a distant supervision to mine the corresponding texts from the web data, thus building the pre-training corpus. ## 3.2 Pre-Training Comave Since ROBERTA (Liu et al., 2019) has been shown to be promising and robust on multiple NLP tasks, we use it to initialize COMAVE. We then further pre-train COMAVE on our corpus C. As shown in Figure 2, we flatten each pair (T , A) into a sequence X with the </s> token, $$\begin{array}{l}{{<\!\!s\!>,x_{1},x_{2},...,x_{n},<\!\!/s\!>,<\!\!/s\!>,x_{1,1}^{a},x_{1,2}^{a},}}\\ {{{}}}\\ {{x_{1,|a_{1}|}^{a},<\!\!/s\!>,x_{2,1}^{a},...,x_{m,|a_{m}|}^{a},<\!\!/s\!>,}}\end{array}\tag{1}$$ then COMAVE converts the each token of X into a semantic vector, $$\begin{array}{l}\mathbf{h}^{<\!\!>},\mathbf{h}_{1}^{T},\mathbf{h}_{2}^{T},...,\mathbf{h}_{n}^{T},\mathbf{h}^{<\!\!/\!\!>},\mathbf{h}^{<\!\!/\!\!>},\mathbf{h}_{1,1}^{a},\mathbf{h}_{1,2}^{a},\\ \mathbf{h}_{1,|a_{1}|}^{a},\mathbf{h}^{<\!\!/\!\!>},\mathbf{h}_{2,1}^{a},...,\mathbf{h}_{m,|a_{m}|}^{a},\mathbf{h}^{<\!\!/\!\!>},\end{array}\tag{2}$$ where hT i ∈ R dand h a j,k ∈ R d denote the vector of xi and x a j,k, respectively, and h <s> ∈ R dis regarded as the global semantic vector of X . Considering the above challenges, we design three objectives to pre-train COMAVE as follows. ## Multi-Scale Masked Language Model The most common objective of pre-training is to employ MLM to guide the model to perform extensively. Unlike BERT or ROBERTA which focuses on token-level recovery, we prefer COMAVE to be aware of various values, regardless of their scales. Consequently, we design two parallel masking mechanisms, namely phrase-level and sentencelevel masking. During pre-training, each T is performed by only one of the two mechanisms, and the probabilities are set as ρ and 1-ρ, respectively. 
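As a concrete illustration of the flattened input of Eq. (1), the following sketch builds the `<s> text </s></s> attr_1 </s> attr_2 </s> ... </s>` sequence with a Hugging Face RoBERTa tokenizer; the checkpoint name, example text, and truncation settings are placeholders, not the released COMAVE configuration.

```python
# Rough sketch (not the released CoMave code): flattening a text and its candidate
# attributes into the single sequence of Eq. (1).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # placeholder checkpoint

def flatten_input(text, attributes, max_length=512):
    bos, sep = tokenizer.bos_token, tokenizer.sep_token  # "<s>" and "</s>" for RoBERTa
    sequence = f"{bos} {text} {sep} {sep} " + f" {sep} ".join(attributes) + f" {sep}"
    # Special tokens are written explicitly above, so we disable automatic insertion.
    return tokenizer(sequence, add_special_tokens=False, truncation=True,
                     max_length=max_length, return_tensors="pt")

batch = flatten_input(
    "The grace period of this policy is 60 days after the premium due date.",
    ["grace period", "hesitation period", "maximum insurance age"],
)
print(batch["input_ids"].shape)
```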
Furthermore, we empirically find that an appropriate masking percentage is a prerequisite for MLM to be effective. We denote this budget percentage by µp and µs, respectively, and try to make the masking result close to it for both mechanisms. In phrase-level masking, we are inspired by SpanBERT (Joshi et al., 2020) and randomly mask a short span of tokens for each selected T until the budget µp is spent. The probability distribution of the masking length, denoted by l ∈ [1, lmax], is: $$P_{\mathrm{phrase}}(l)={\frac{\sigma^{-l}+\gamma}{\sum_{l^{\prime}=1}^{l_{\mathrm{max}}}\sigma^{-l^{\prime}}+\gamma}},\qquad\qquad(3)$$ where both σ and γ are hyper-parameters. This distribution ensures that the masking probability of each span decreases smoothly as its length increases, while also preventing long spans from being rarely selected. Note that we make sure that each masked span is formed by complete words. In sentence-level masking, we mask only one sentence for each selected T because recovering a sentence requires sufficient context. In this way, it is more difficult to make the total number of masked tokens approach µs compared to masked phrases since the length of different sentences can vary significantly. To achieve this goal, we propose a simple but effective strategy to dynamically control the masking probability of each sentence. Specifically, assuming that the current sentence masking rate of µc = P P|Tmask| l(T ) , where Tmask is the tokens that has been masked. If µc < µs, it means that the current masking rate is less than the standard value, so we should pay more attention to longer sentences Slong = {s|l(s) > l(T ) ∗ µs}, giving higher masking probabilities. Otherwise, we should focus on short ones Sshort = {s|l(s) ≤ l(T ) ∗ µs}. Following BERT (Devlin et al., 2019), we replace 80% of the masked tokens with <mask>, 10% with the random tokens in the corpus, and leave the remaining 10% unchanged. ## Contrastive Attribute Retrieval We expect to adapt COMAVE to the subtle differences between attributes in the pre-training phase. To this end, for each training text T and its ground truth attributes A+, a challenging negative set A− = A− c ∪ A− gis built to confuse the model. Here, each ac ∈ A− c is sampled using clustering to guarantee it is highly similar to A+ (see below for details), and each ag ∈ A− g is a random one from the total attribute pool to maintain the diversity of the negative examples. If T has no ground truth, then A− = A− g . During pre-training, COMAVE is required to retrieve each correct attribute a + ∈ A+ by scoring all a ∈ A+ ∪ A− with $${\mathcal{P}}({\mathcal{T}},a)=\mathrm{sigmoid}(\mathbf{h}^{\mathrm{cs}}*W^{\mathrm{CAR}}),\qquad(4)$$ where WCAR ∈ R d∗ηis a trainable parameter and η denotes the maximum of |A|. To make the score of A+ higher than that of each negative example, i.e., ∀a− ∈ A−, P(T , a+) > P(T , a−), where a + ∈ A+. Inspired by (Khosla et al., 2020), we define a Margin Ranking Loss to better leverage contrastive learning and strengthen the distinction between fine-grained attributes, $${\mathcal{L}}_{\mathrm{CRA}}=\sum_{i=1}^{|{\mathcal{A}}|}\sum_{j=i+1}^{|{\mathcal{A}}|}(1-z)*|{\mathcal{P}}_{i}-{\mathcal{P}}_{j}|+\tag{5}$$ $$z*\operatorname*{max}(0,\lambda-|{\mathcal{P}}_{i}-{\mathcal{P}}_{j}|),$$ where Piis short for P(T , ai), and λ is the margin. If both ai and aj are positive or negative examples, z = 0, otherwise z = 1. The key to this training objective is how to collect A− cthat is highly similar to A+. 
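Before turning to how this hard negative set is collected, here is a minimal sketch of one way the pairwise margin ranking loss of Eq. (5) could be written in PyTorch; the explicit O(|A|²) double loop and the margin value (λ = 2, as reported in the implementation details) are kept for readability and are assumptions rather than the official implementation.

```python
# Minimal sketch of the CAR margin ranking loss in Eq. (5).
import torch

def car_margin_loss(scores, is_positive, margin=2.0):
    """scores: 1-D tensor of attribute scores P(T, a_i) from Eq. (4);
    is_positive: list of bools marking ground-truth attributes.
    Same-polarity pairs (z = 0) are pulled together; mixed pairs (z = 1)
    are pushed at least `margin` apart."""
    n = scores.shape[0]
    loss = scores.new_zeros(())
    for i in range(n):
        for j in range(i + 1, n):
            diff = (scores[i] - scores[j]).abs()
            if is_positive[i] == is_positive[j]:          # z = 0
                loss = loss + diff
            else:                                         # z = 1
                loss = loss + torch.clamp(margin - diff, min=0.0)
    return loss

scores = torch.sigmoid(torch.randn(6))                    # mock P(T, a_i) values
labels = [True, True, False, False, False, False]         # A+ followed by A-
print(car_margin_loss(scores, labels).item())
```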
Clustering has been proven to have a natural advantage in retrieving similar instances, so we used the widely used *K-medoids* (Park and Jun, 2009) clustering method to construct A− c . Concretely, the distance between two attributes is $$\mathrm{d}(a_{i},a_{j})=\omega*\tilde{\mathrm{d}}(a_{i},a_{j})+(1-\omega)*\tilde{\mathrm{d}}(\mathcal{V}_{i},\mathcal{V}_{j}),\tag{6}$$ $$\tilde{\mathrm{d}}(z_{i},z_{j})=\tau(f_{t}(z_{i},z_{j}))+\tau(f_{s}(\mathbf{z}_{i},\mathbf{z}_{j})),\tag{7}$$ where ft and fs denote *Levenshtein distance* and Euclidean metric, respectively. z ∈ R dis the ROBERTA (Liu et al., 2019) pooling vector of z. τ denotes the score normalization to ensure balance. The distance considers both the literal and semantic features of the attributes and associated values. ## Value Detection To further cross the gap between the pre-training and downstream AVE tasks, we also add a training objective of detecting values. For (T , A), wherein each positive attribute a + i ∈ A+ corresponds to one or more extractable values Vi = {v1, v2*, ..., v*n} in T . The model needs to classify each token xi ∈ T , according to whether it is a part values of V: $$P(x_{i}|{\mathcal{T}},{\mathcal{A}})=\operatorname{softmax}(\mathbf{h}_{i}^{\mathcal{T}}\cdot W^{\mathsf{V{D}}}),\quad\quad(8)$$ where WVD ∈ R dis trainable parameter. We define "V" and "O" as labels to represent that xi ∈ V and xi ∈ V / , respectively. Note that each token does not need to be classified to the exactly belonged attribute. ## 3.3 Fine-Tuning To fully evaluate the effectiveness of our pretraining for downstream tasks, we add the following two output layers to fine-tune our COMAVE, respectively. ## Sequence Tagging Layer In this setting, T and all candidate attributes A are first fed to COMAVE, as in the pre-training. Then, according to the output hT, a Conditional Random Field (CRF) generates a sequence Y = {y1, y2*, ..., y*n}. Here n is the length of T and each yi ∈S|A| k=1{Bk, Ik, O} is a tag indicating whether the token xi ∈ T is the beginning (Bk), inside (Ik) and outside (Ok) of a value in the attribute ak ∈ A. ## Machine Reading Comprehension Layer In this case, COMAVE takes each (T , ai) as input and predicts the span of target values belonging to ai ∈ A in T . Here, we follow a representative work (Li et al., 2020) that consists of two steps. First, the candidate start and end indexes of the span are predicted using the binary classification of each token separately. Subsequently, a matching score is performed for each candidate index pair of start and end. Finally, the pairs with scores above the threshold are retained as the results. ## 4 Experiments Datasets To comprehensively evaluate our method, we used the following four datasets covering both English and Chinese: 1) INS is a Chinese AVE dataset which is collected from the real product data of Alipay2 platform. It contains various types of large-scale insurance products from real scenarios, including wealth insurance, health insurance, travel insurance, life insurance, etc. From each product document, the attributes and values are manually annotated. There are 29 global attributes and the samples are divided into 9112/1138/1138 for Train/Val/Test, respectively. Table 1 gives several groups of similar attributes and the number of their corresponding examples. Table 2 shows the distribution of different value scales. They reveal that the two challenges we focus on are prevalent in INS. 
2) **MEPAVE** (Zhu et al., 2020) is a Chinese AVE dataset with examples from the JD e-commerce platform3, containing 26 global attributes and 87,194 samples. Most of the text comes from product titles. We randomly divided the dataset into Train/Val/Test parts in the ratio of 8:1:1, following (Zhu et al., 2020). 3) **AE-Pub** (Xu et al., 2019) is an English AVE dataset with 110,484 samples and over 2,400 attributes obtained from AliExpress4. In order to make a fair comparison with previous models that could not handle a large number of attributes, we selected 4 frequent attributes (i.e., BrandName, Material, Color, *Category*) and divided the relevant instances randomly by 7:1:2, referring to the dataset publisher. 4) MAE (IV et al., 2017) is an English multi-modal AVE dataset that contains 200 million samples and 2,000 attributes. Following (Zhu et al., 2020), we built an MAE-text dataset to focus on the textual modality. As with **AE-Pub**, we selected the 20 most frequent attributes from the Train/Val/Test sets.

2https://www.alipay.com/
3https://www.jd.com

| FG Attributes Group | Train | Val | Test |
|---|---|---|---|
| Period: hesitation period, grace period, waiting period for continuous insurance, etc. | 555 | 73 | 77 |
| Age: Maximum insurance age, Minimum insurance age, Maximum renewal age, etc. | 544 | 65 | 69 |
| Amount: deductible, insured amount, etc. | 170 | 19 | 24 |
| Area: insured areas, restricted areas, etc. | 83 | 12 | 13 |
| Disease: disease, disease restriction, etc. | 329 | 39 | 38 |

Table 1: Statistical results of the fine-grained attributes in INS. About 20% of the samples contain two or more attributes in the same group.

| Length | Train | Val | Test |
|---|---|---|---|
| [1, 5] | 5235 (55.0%) | 667 (54.8%) | 662 (53.5%) |
| (5, 10] | 1622 (17.0%) | 202 (16.6%) | 207 (16.7%) |
| (10, 20] | 1481 (15.6%) | 198 (16.3%) | 205 (16.6%) |
| (20, +∞) | 1179 (12.4%) | 149 (12.3%) | 164 (13.2%) |

Table 2: Statistical results of multi-scale values in INS. Note that the results are the amounts of the values.

## Evaluation Metrics

In most experiments, we used Micro-F1 scores as the main evaluation metric. We followed the criterion of exact matching, where the complete sequence of predicted attributes and extracted values must be correct. Accuracy was also used as an additional metric in the detailed analysis.
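As a reference for the exact-matching protocol described above, the sketch below shows how Micro-F1 can be computed over per-sample sets of (attribute, value) pairs. It is an illustrative implementation of the stated criterion, not the authors' evaluation script.

```python
from typing import List, Set, Tuple

Pair = Tuple[str, str]  # (attribute, value); the full value string must match exactly

def micro_f1(preds: List[Set[Pair]], golds: List[Set[Pair]]) -> float:
    """Micro-averaged F1 under exact matching: a prediction counts as correct only
    if both the attribute and the complete extracted value are correct."""
    tp = fp = fn = 0
    for pred, gold in zip(preds, golds):
        tp += len(pred & gold)      # exactly-matched pairs
        fp += len(pred - gold)      # spurious predictions
        fn += len(gold - pred)      # missed gold pairs
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# One sample with a correct pair and one spurious prediction -> F1 ≈ 0.667
print(micro_f1([{("Color", "red"), ("Material", "cotton")}], [{("Color", "red")}]))
```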
## Methods For Comparison We compared the proposed method with notable AVE methods, including BiLSTM+CRF (Ma and Hovy, 2016), OpenTag (Zheng et al., 2018), 4https://www.aliexpress.com | INS | MEPAVE | AE-Pub | MAE | | | | | | | | | | |-----------------|----------|----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------| | Method | Attr | Val | Over | Attr | Val | Over | Attr | Val | Over | Attr | Val | Over | | BiLSTM + CRF | 74.37 | 55.26 | 52.05 | 89.29 | 80.49 | 78.94 | 81.74 | 77.03 | 75.69 | 79.84 | 77.40 | 73.50 | | OpenTag | 73.66 | 62.65 | 57.16 | 88.54 | 83.26 | 84.11 | 86.35 | 85.18 | 83.37 | 82.22 | 79.34 | 76.23 | | ScalingUp | 83.88 | 66.78 | 65.99 | 93.42 | 91.20 | 89.56 | 89.05 | 88.64 | 87.19 | 89.36 | 79.69 | 78.75 | | JAVE | 87.11 | 72.40 | 70.07 | 95.56 | 92.98 | 91.03 | 90.57 | 90.14 | 88.14 | 93.50 | 94.12 | 91.96 | | AVEQA | 86.53 | 71.34 | 68.89 | 95.75 | 93.65 | 91.69 | 91.45 | 92.86 | 90.35 | 94.56 | 95.78 | 92.91 | | UIE | 87.46 | 73.11 | 71.65 | 96.67 | 93.24 | 92.98 | 94.35 | 91.10 | 89.36 | 96.02 | 95.99 | 94.50 | | COMAVE + Tagger | 87.31 | 75.95 | 73.34 | 96.02 | 94.52 | 93.41 | 93.07 | 92.73 | 90.91 | 96.31 | 96.88 | 94.91 | | COMAVE + MRC | 88.90 | 78.70 | 75.92 | 97.04 | 95.78 | 95.39 | 95.97 | 94.24 | 93.65 | 96.92 | 98.12 | 96.55 | SUOpenTag (Xu et al., 2019), JAVE (Zhu et al., 2020), AVEQA (Wang et al., 2020), and UIE (Lu et al., 2022). In addition, the existing representative PLMs are involved in comparison to showing the improvement of our PLM on the AVE task, including BERT (Devlin et al., 2019), ROBERTA (Liu et al., 2019), SpanBERT (Joshi et al., 2020), MacBERT (Cui et al., 2020), and ELECTRA (Clark et al., 2020). ## Implementation Details Our method ran on Tesla A100 GPUs. All the pretrained models used in our experiments were large versions by default. Chinese and English versions of COMAVE were pre-trained respectively for evaluation in two languages. The hyper-parameters in pre-training were set as follows: (1) The batch size and the learning rate were set to 256 and 1e-5. (2) In the CAR task, η, λ, and ω were set to 12, 2, and 0.4, respectively. The ratio for A+, A− g , and A− c was 1:1:1 (3) In the MSMLM task, ρ, σ, γ, ℓmax, µp, and µs were separately set to 0.2, 1.20, 2e-4, 20, 15%, 10%. In phrase-level masking, ℓmax was set to 20, and ℓ*mean* was approximately equal to 5.87. In the fine-tuning stage, the batch size and the learning rate were set to 80 and 2e-5, respectively. ## 4.1 Overall Results Comparison With Ave Baselines We first compared with the baselines. To ensure fairness in the number of parameters, we replaced BERT-Base with ROBERTA-Large in the evaluations of Chinese datasets, and the distilled context layer of AVEQA is also replaced by ROBERTALarge in all evaluations. The results are shown in Table 3. Our proposed COMAVE equipped with the MRC layer achieves state-of-the-art on all four benchmarks. 
Most baselines perform poorly on INS because they focus on traditional e-commerce | Method | INS | MEPAVE | AE-pub | MAE | |----------------|--------|----------|----------|-------| | + Tagger Layer | | | | | | BERT | 70.42* | 89.77* | 86.72 | 92.71 | | ROBERTA | 71.63 | 90.82 | 88.55 | 93.12 | | SpanBERT | - | - | 88.23 | 92.79 | | MacBERT | 71.55 | 90.79 | - | - | | ELECTRA | 71.69 | 91.53 | 88.75 | 93.42 | | COMAVE | 73.34 | 93.41 | 90.91 | 94.91 | | + MRC Layer | | | | | | BERT | 71.59* | 94.03* | 90.49 | 93.31 | | ROBERTA | 72.89 | 94.74 | 91.13 | 94.14 | | SpanBERT | - | - | 90.62 | 94.24 | | MacBERT | 73.03 | 93.99 | - | - | | ELECTRA | 73.46 | 94.21 | 91.50 | 95.32 | | COMAVE | 75.92 | 95.39 | 93.65 | 96.55 | products and cannot handle the two challenges mentioned in Section 1. Unlike them, UIE achieves competitive results, especially on *Attr*, as it is a generic approach pre-trained by multiple information extraction tasks. However, limited by its weak multi-scale value extraction capability, it still cannot handle INS. The performance on *Attr* and Val demonstrates that our method brings significant improvements in both attribute retrieval and value extraction, thus outperforming all baselines on *Over*. Benefiting from the pre-training, our COMAVE outperforms all the baselines by adding only a simple fine-tuning output layer. ## Comparison With Plms To further evaluate the contribution of our pretraining methods, we compared COMAVE with several common PLMs. All the models were guaranteed to be equipped with the same output layer | Method | INS | MEPAVE | AVE-Pub | MAE | |---------------|-------|----------|-----------|-------| | COMAVE | 75.92 | 95.39 | 93.65 | 96.55 | | − MSMLM | 74.60 | 94.87 | 91.99 | 94.86 | | MSMLM − PhraM | 75.19 | 94.97 | 92.59 | 95.33 | | MSMLM − SentM | 75.23 | 95.04 | 92.77 | 95.54 | | − CAR | 73.97 | 94.45 | 91.32 | 94.97 | | CAR − MRL | 74.55 | 95.12 | 92.58 | 95.29 | | CAR − CS | 74.30 | 95.04 | 91.96 | 95.12 | | − VD | 74.82 | 94.74 | 91.80 | 95.01 | | ROBERTA | 73.22 | 94.03 | 89.49 | 94.31 | Table 5: Overall ablation results on four datasets. when compared. The results are shown in Table 4. Here SpanBERT and MacBERT have no results on some datasets because of lacking a corresponding language version. SpanBERT achieves almost the same results as ROBERTA with half the number of parameters because it excels in span representation. ELECTRA adopts creative adversarial pretraining and therefore performs well. Compared to the pre-training backbone ROBERTA, our further pre-trained COMAVE gains a significant improvement. Moreover, regardless of the simple finetuning layer used, our model outperforms all the other PLMs, which indicates that our pre-training effectively alleviates the challenges of AVE tasks. ## 4.2 Ablation Tests To evaluate the contributions of each training objective, we considered the following settings: - − *MSMLM*: Removing the training objective of Multi-Scale Masked Language Model. - MSMLM − *PhraM*: Only Using the sentencelevel masking mechanism. - MSMLM − *SentM*: Only Using the phraselevel masking mechanism. - − CAR: Removing the training objective of Contrastive Attribute Retrieval. - CAR − MRL: Replacing the Margin Ranking Loss LCAR with the Cross Entropy Loss. - CAR − CS: Cluster sampling is not used in CAR, i.e., A− = A− g . - − VD: Removing the training objective of Value Detection. Here, MRC was uniformly selected as the output layer for all the settings due to its better performance. 
Table 5 shows the results on four datasets. CAR, MSMLM, and VD all bring obviously improvements, proving the effectiveness and necessity of our pre-training objectives for the AVE tasks. ![6_image_0.png](6_image_0.png) The result indicates that the contribution of CAR is the most pronounced among the three objectives. The final performance of the model decreases significantly when either the clustering sampling or the contrast loss is removed. In addition, we find that the combination of phrase-level and sentencelevel masking is more effective than using only one of them. VD also delivers a promising improvement which proves the benefit for AVE tasks. ## 4.3 Tests On Fine-Grained Attribute Groups To further validate the effectiveness of our method in discriminating fine-grained similar attributes, we evaluated the performance of the model on the finegrained attribute groups mentioned in Table 1. The experimental results are shown in the upper part of Figure 3. Our COMAVE equipped with all components achieves the best results on all attribute groups. When the CAR training objective is removed, the overall performance shows a dramatic decrease in all the fine-grained attribute groups. This reveals that contrastive learning in a challenging set during pre-training contributed significantly to enhancing the capability of discriminating similar attributes in downstream tasks. ## 4.4 Performance On Multi-Scale Values We also tested the performance of the model for extracting values at different scales, and the results are shown in the lower part of Figure 3. As we expected, the contribution of phrase-level masking | Dataset | Setting | ROBERTA | COMAVE | |----------------|-----------|-----------|----------| | INS(29Attr) | 5-SHOT | 45.47 | 50.56 | | 10-SHOT | 50.91 | 54.32 | | | MEPAVE(26Attr) | 5-SHOT | 55.53 | 58.42 | | 10-SHOT | 61.86 | 64.70 | | | AE-Pub(4Attr) | 5-SHOT | 45.84 | 51.24 | | 10-SHOT | 62.31 | 66.04 | | | MAE(20Attr) | 5-SHOT | 51.07 | 55.71 | | 10-SHOT | 59.19 | 62.36 | | Table 6: Results of few-shot tests on all datasets. is greater when dealing with shorter values (number of tokens less than 10). The improvement of sentence-level masking becomes significant when the length of value gradually grows to more than 20. This proves the reasonableness of our combination of phrase-level and sentence-level masking. Moreover, we find that even pre-training without MSMLM, COMAVE still performs better than pure ROBERTA. This demonstrates the boost from objective VD and the expected external knowledge of the pre-training corpus. ## 4.5 Few-Shot Tests To further fit realistic applications, we also tested the performance of the model in few-shot scenarios. We adopted the N-WAY K-SHOT setup, i.e., the few-shot training set has N attributes, and each attribute has corresponding K training samples randomly selected. Here, we let N equal the number of attributes per dataset and focused on testing two sets of 5-SHOT and 10-SHOT settings. Table 6 shows the experimental results. Under the stringent condition of using only 5 training samples for each attribute, COMAVE scores Mirco-F1 over 50% on all datasets. When the training set is expanded to 10-SHOT, the performance reaches approximately 65% on the other three datasets, except for the challenging INS. The smaller the sample size, the greater the improvement of COMAVE. Due to pre-training with a large-scale AVE corpus collected, COMAVE is more capable than ROBERTA in handling the few-shot AVE task. 
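The N-way K-shot setup above can be reproduced with a simple sampling routine. The sketch below builds a K-shot training subset by drawing K samples per attribute; it assumes each example is annotated with the attributes it contains, and the function and field names are illustrative rather than taken from the released code.

```python
import random
from collections import defaultdict
from typing import Dict, List

def build_k_shot_subset(examples: List[Dict], k: int, seed: int = 42) -> List[Dict]:
    """N-way K-shot subset: N is the number of distinct attributes in `examples`,
    and each attribute contributes up to k randomly chosen training samples."""
    rng = random.Random(seed)
    by_attribute = defaultdict(list)
    for ex in examples:
        for attr in ex["attributes"]:          # assumed field: gold attributes of the sample
            by_attribute[attr].append(ex)

    subset, seen = [], set()
    for attr, pool in by_attribute.items():
        for ex in rng.sample(pool, min(k, len(pool))):
            if id(ex) not in seen:             # avoid duplicating samples shared by attributes
                seen.add(id(ex))
                subset.append(ex)
    return subset
```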
## 5 Related Work With the development of e-commerce, Attribute Value Extraction which aims to retrieve the attributes and extract the values from the target data resource in order to obtain the structured information of the products recently attracts lots of attention. Several previous methods (Zheng et al., 2018; Xu et al., 2019) employ traditional sequence tagging models. Furthermore, AVEQA (Wang et al., 2020) first tries to use MRC based method to handle the task, but it can not be applied when each attribute has several different values. JAVE (Zhu et al., 2020) designs a multi-task model which divides the task into two sub-task: attributes prediction and value extraction. AdaTag (Yan et al., 2021) uses a hyper-network to train experts' parameters for each attribute in order to build an adaptive decoder. QUEACO (Zhang et al., 2021) adopts a teacher-student network to leverage weakly labeled behavior data to improve performance. MAVEQA (Yang et al., 2022) mixes multisource information with a novel global and local attention mechanism. However, none of the existing methods pay attention to the two challenges mentioned in section 1. Language model pre-training (Devlin et al., 2019; Liu et al., 2019) and task-specific fine-tuning achieve significant improvement on many NLP tasks. Recently, some work (Joshi et al., 2020; Clark et al., 2020; Cui et al., 2020; Sanh et al., 2019) further modified the MLM to achieve better results. In information extraction tasks, UIE (Lu et al., 2022) is proposed as a universal pre-training model for several extraction tasks by generation, it is generic but lacks further fitting for different extraction tasks. Currently, there is no task-specific pre-training model for attribute value extraction. ## 6 Conclusion In this paper, we presented a new pre-training model for attribute value extraction, called COMAVE which is pre-trained by three novel objectives with a large-scale corpus. Multi-Scale Masked Language Model is designed to force the model to understand multi-scale values by recovering masked spans at both the phrase and sentence levels. Contrastive Attribute Retrieval improves the discrimination of fine-grained attributes based on contrastive learning. Meanwhile, Value Detection is adopted to reinforce the value extraction and further benefit downstream AVE tasks. Extensive experiments indicate that COMAVE achieves stateof-the-art results on four benchmarks compared with the existing baselines and PLMs. In future work, we will expand our work on more scenarios and industries, and also explore the optimization of the downstream fine-tune model. ## 7 Limitations This paper proposed a novel pre-training model COMAVE which aims at textual AVE tasks, while in this field, multi-modal AVE tasks also widely exist in many e-commerce platforms. We expect that the following works can leverage COMAVE as a powerful word embedding pre-training model for text encoding combined with image feature representation in multi-modal AVE tasks in the future. Meanwhile, the same as the previous AVE works, we assume that each T is an independent extraction object, without considering the context-dependent of the whole data resources, such as long documents and instructions, which exceeds the length of an allowable single input. ## Acknowledgements This work is supported by the NSFC (Grant No. U21A20488, 62006040), the Project for the Doctor of Entrepreneurship and Innovation in Jiangsu Province (Grant No. 
JSSCBS20210126), the Fundamental Research Funds for the Central Universities, and ZhiShan Young Scholar Program of Southeast University. ## References Min Cao, Sijing Zhou, Honghao Gao, and Youhuizi Li. 2018. A novel hybrid collaborative filtering approach to recommendation using reviews: The product attribute perspective (S). In The 30th International Conference on Software Engineering and Knowledge Engineering, Hotel Pullman, Redwood City, California, USA, July 1-3, 2018, pages 7–10. KSI Research Inc. and Knowledge Systems Institute Graduate School. Huajun Chen, Ning Hu, Guilin Qi, Haofen Wang, Zhen Bi, Jie Li, and Fan Yang. 2021a. Openkg chain: A blockchain infrastructure for open knowledge graphs. Data Intell., 3(2):205–227. Yongrui Chen, Huiying Li, Guilin Qi, Tianxing Wu, and Tenggou Wang. 2021b. Outlining and filling: Hierarchical query graph generation for answering complex questions over knowledge graph. *CoRR*, abs/2111.00732. Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pre-trained models for chinese natural language processing. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of *Findings of ACL*, pages 657–668. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Robert L. Logan IV, Samuel Humeau, and Sameer Singh. 2017. Multimodal attribute extraction. In 6th Workshop on Automated Knowledge Base Construction, AKBC@NIPS 2017, Long Beach, California, USA, December 8, 2017. OpenReview.net. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. *Trans. Assoc. Comput. Linguistics*, 8:64– 77. Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. In *Advances in Neural* Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Sören Auer, and Christian Bizer. 2015. Dbpedia - A large-scale, multilingual knowledge base extracted from wikipedia. *Semantic Web*, 6(2):167–195. Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020. A unified MRC framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5849–5859. Association for Computational Linguistics. 
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, and Hua Wu. 2022. Unified structure generation for universal information extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 5755–5772. Association for Computational Linguistics. Xuezhe Ma and Eduard H. Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics. Alessandro Magnani, Feng Liu, Min Xie, and Somnath Banerjee. 2019. Neural product retrieval at walmart.com. In *Companion of The 2019 World Wide* Web Conference, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pages 367–372. ACM. Hae-Sang Park and Chi-Hyuck Jun. 2009. A simple and fast algorithm for k-medoids clustering. Expert Syst. Appl., 36(2):3336–3341. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. *CoRR*, abs/1910.01108. Thomas Pellissier Tanon, Gerhard Weikum, and Fabian M. Suchanek. 2020. YAGO 4: A reasonable knowledge base. In The Semantic Web - 17th International Conference, ESWC 2020, Heraklion, Crete, Greece, May 31-June 4, 2020, Proceedings, volume 12123 of *Lecture Notes in Computer Science*, pages 583–596. Springer. Denny Vrandecic and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Commun. ACM, 57(10):78–85. Qifan Wang, Li Yang, Bhargav Kanagal, Sumit Sanghai, D. Sivakumar, Bin Shu, Zac Yu, and Jon Elsas. 2020. Learning to extract attribute value from product via question answering: A multi-task approach. In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020, pages 47–55. ACM. Huimin Xu, Wenting Wang, Xin Mao, Xinyu Jiang, and Man Lan. 2019. Scaling up open tagging from tens to thousands: Comprehension empowered attribute value extraction from product title. In *Proceedings* of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 5214–5223. Association for Computational Linguistics. Jun Yan, Nasser Zalmout, Yan Liang, Christan Grant, Xiang Ren, and Xin Luna Dong. 2021. Adatag: Multi-attribute value extraction from product profiles with adaptive decoding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 4694–4705. Association for Computational Linguistics. Li Yang, Qifan Wang, Zac Yu, Anand Kulkarni, Sumit Sanghai, Bin Shu, Jon Elsas, and Bhargav Kanagal. 2022. MAVE: A product dataset for multi-source attribute value extraction. In WSDM '22: The Fifteenth ACM International Conference on Web Search and Data Mining, Virtual Event / Tempe, AZ, USA, February 21 - 25, 2022, pages 1256–1265. ACM. Pan Yang, Xin Cong, Zhenyu Sun, and Xingwu Liu. 2021. 
Enhanced language representation with label knowledge for span extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 4623–4635. Association for Computational Linguistics. Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 1321–1331. The Association for Computer Linguistics. Danqing Zhang, Zheng Li, Tianyu Cao, Chen Luo, Tony Wu, Hanqing Lu, Yiwei Song, Bing Yin, Tuo Zhao, and Qiang Yang. 2021. QUEACO: borrowing treasures from weakly-labeled behavior data for query attribute value extraction. In CIKM '21: The 30th ACM International Conference on Information and Knowledge Management, Virtual Event, Queensland, Australia, November 1 - 5, 2021, pages 4362–4372. ACM. Guineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, and Feifei Li. 2018. Opentag: Open attribute value extraction from product profiles. In *Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,* KDD 2018, London, UK, August 19-23, 2018, pages 1049–1058. ACM. Tiangang Zhu, Yue Wang, Haoran Li, Youzheng Wu, Xiaodong He, and Bowen Zhou. 2020. Multimodal joint attribute prediction and value extraction for ecommerce product. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing, EMNLP 2020, Online, November 16-20, 2020, pages 2129–2139. Association for Computational Linguistics. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 7: Limitations ✓ A2. Did you discuss any potential risks of your work? Section 7: Limitations ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1: Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? 
Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** Section 4: Experiments ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4: Experiments, Implementation Details The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4: Experiments, Implementation Details ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4: Experiments ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? The packages used in our code are listed in GitHub. ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Section 4: Experiments, Datasets. We Build A Hand-Labeled Dataset Called Ins For Evaluation. ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? We build the manually labeled dataset INS for evaluation, while there is no human participation in other parts. ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? We build the manually labeled dataset INS for evaluation, while there is no human participation in other parts. ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? The dataset is collected from the platform of our affiliation, and the source is not discussed in this submission for anonymity. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Our dataset contains the product information of the e-commerce platform, not the information of humans, and humans only participate in labeling the dataset. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Our dataset contains the product information of the e-commerce platform, without the information of humans, and humans only participate in labeling the dataset.
jeong-etal-2023-phrase
Phrase Retrieval for Open Domain Conversational Question Answering with Conversational Dependency Modeling via Contrastive Learning
https://aclanthology.org/2023.findings-acl.374
Open-Domain Conversational Question Answering (ODConvQA) aims at answering questions through a multi-turn conversation based on a retriever-reader pipeline, which retrieves passages and then predicts answers with them. However, such a pipeline approach not only makes the reader vulnerable to the errors propagated from the retriever, but also demands additional effort to develop both the retriever and the reader, which further makes it slower since they are not runnable in parallel. In this work, we propose a method to directly predict answers with a phrase retrieval scheme for a sequence of words, reducing the conventional two distinct subtasks into a single one. Also, for the first time, we study its capability for ODConvQA tasks. However, simply adopting it is largely problematic, due to the dependencies between previous and current turns in a conversation. To address this problem, we further introduce a novel contrastive learning strategy, making sure to reflect previous turns when retrieving the phrase for the current context, by maximizing representational similarities of consecutive turns in a conversation while minimizing irrelevant conversational contexts. We validate our model on two ODConvQA datasets, whose experimental results show that it substantially outperforms the relevant baselines with the retriever-reader. Code is available at: \url{https://github.com/starsuzi/PRO-ConvQA}.
# Phrase Retrieval For Open-Domain Conversational Question Answering With Conversational Dependency Modeling Via Contrastive Learning Soyeong Jeong1 Jinheon Baek2 Sung Ju Hwang1,2 **Jong C. Park**1∗ School of Computing1 Graduate School of AI2 Korea Advanced Institute of Science and Technology1,2 {starsuzi,jinheon.baek,sjhwang82,jongpark}@kaist.ac.kr ## Abstract Open-Domain Conversational Question Answering (ODConvQA) aims at answering questions through a multi-turn conversation based on a retriever-reader pipeline, which retrieves passages and then predicts answers with them. However, such a pipeline approach not only makes the reader vulnerable to the errors propagated from the retriever, but also demands additional effort to develop both the retriever and the reader, which further makes it slower since they are not runnable in parallel. In this work, we propose a method to directly predict answers with a phrase retrieval scheme for a sequence of words, reducing the conventional two distinct subtasks into a single one. Also, for the first time, we study its capability for ODConvQA tasks. However, simply adopting it is largely problematic, due to the dependencies between previous and current turns in a conversation. To address this problem, we further introduce a novel contrastive learning strategy, making sure to reflect previous turns when retrieving the phrase for the current context, by maximizing representational similarities of consecutive turns in a conversation while minimizing irrelevant conversational contexts. We validate our model on two ODConvQA datasets, whose experimental results show that it substantially outperforms the relevant baselines with the retriever-reader. Code is available at: https://github.com/ starsuzi/PRO-ConvQA. ## 1 Introduction Conversational Question Answering (ConvQA) is the task of answering a sequence of questions that are posed during information-seeking conversations with users (Choi et al., 2018; Reddy et al., 2019; Zaib et al., 2022). This task has recently gained much attention since it is similar to how humans seek and follow the information that they want to find. To solve this problem, earlier ConvQA ∗ Corresponding author ![0_image_0.png](0_image_0.png) work proposes to predict answers based on both the current question and the previous conversational histories, as well as the passage that is relevant to the ongoing conversation (Qu et al., 2019; Huang et al., 2019; Kim et al., 2021; Li et al., 2022a). However, this approach is highly suboptimal and might not be applicable to real-world scenarios, since it assumes that the gold passage, containing answers for the current question, is given to the ConvQA system; meanwhile, the gold passage is usually not available during the real conversation. To address this limitation, some recent work (Qu et al., 2020; Anantha et al., 2021; Li et al., 2022c; Adlakha et al., 2022; Fang et al., 2022) proposes to extend the existing ConvQA task to an opendomain question answering setting with an assumption that the conversation-related passages are not given in advance; therefore, it is additionally required to access and utilize the query-relevant passages in a large corpus, for example, Wikipedia. 
Under this open-domain setting, most existing Open-Domain ConvQA (ODConvQA) work relies on the retriever-reader pipeline, where they first retrieve the passages, which are relevant to both the current question and conversational context, from a large corpus, and then predict answers based on information in the retrieved passages. This retrieverreader pipeline approach is illustrated in Figure 1. However, despite their huge successes, such a pipeline approach consisting of two sub-modules has a few major drawbacks. First, since the reader is decomposed from the retriever, it is difficult to train the retriever-reader pipeline in an end-to-end manner, which results in an additional effort to develop both the retriever and the reader independently. Second, the error can be accumulated from the retriever to the reader, since the failure in finding the relevant passages for the current question negatively affects the reader in predicting answers, which is illustrated in Figure 1. Third, while the latency is an important factor when conversing with humans in the real-world scenarios, the retrieverreader pipeline might be less efficient, since these two modules are not runnable in parallel. An alternative solution tackling the limitations above is to directly predict the phrase-level answers consisting of a set of words, which are predicted from a set of documents in a large corpus. While this approach appears challenging, recent work shows that it is indeed possible to directly retrieve phrases within a text corpus based on their representational similarities to the input question (Seo et al., 2019; Lee et al., 2021a,b). However, its capability of retrieving phrases has been studied only with single-turn-based short questions, and their applications to ODConvQA, additionally requiring contextualizing the multi-turn conversations as well as effectively representing the lengthy conversational histories, have not been explored. To this end, in this work, we first formulate the open-domain ConvQA task, previously done with the two-stage retriever-reader pipeline, as a direct phrase retrieval problem based on a single dense phrase retriever. However, in contrast to the single-turn open-domain question answering task that needs to understand only a single question, the target ODConvQA is more challenging since it has to comprehensively incorporate both the current question and the previous conversational histories in multi-turns. For example, as shown in Figure 1, in order to answer the question, " What happened in 2003", the model has to fully understand that the conversational context is related to the song, not the movie. While some work (Qu et al., 2020; Fang et al., 2022; Adlakha et al., 2022) proposes to feed an ODConvQA model the entire context consisting of the current question together with the conversational histories as an input, this naïve approach might be insufficient to solve the conversational dependency issue, which may lead to suboptimal performances in a phrase retrieval scheme. In order to further address such a conversational dependency problem, we suggest to enforce the representation of the current conversational context to be similar to the representation of the previous context. Then, since two consecutive turns in a conversation are dependently represented in a similar embedding space, phrases that are relevant to both the current and previous conversational contexts are more likely to be retrieved, for the current question. 
To realize this objective, we maximize the representational similarities between the current conversational context and its previous contexts, while minimizing the representations between the current and its irrelevant contexts within the same batch via the contrastive learning loss, which is jointly trained with the dense phrase retriever. This is illustrated in Figure 1, where we force the representation of the current conversational turn to be similar to its previous turn. We refer to our proposed method as Phrase Retrieval for Open-domain **Conv**ersational Question Answering (**PRO-ConvQA**). We validate our proposed PRO-ConvQA method on two standard ODConvQA datasets, namely ORQuAC (Qu et al., 2020) and TopiOCQA (Adlakha et al., 2022), against diverse ODConvQA baselines that rely on the retriever-reader pipeline. The experimental results show that our PRO-ConvQA significantly outperforms relevant baselines. Furthermore, a detailed analysis demonstrates the effectiveness of the proposed contrastive learning strategy and the efficiency of our phrase retrieval strategy. Our contributions in this work are threefold: - We formulate a challenging open-domain conversational question answering (ODConvQA) problem into a dense phrase retrieval problem for the first time, by simplifying the conventional two-stage pipeline approach to ODConvQA tasks consisting of the retriever and the reader into one single phrase retriever. - We ensure that, when retrieving phrases, the representation for the current conversational context is similar to the representations for previous conversation histories, by modeling their conversational dependencies based on the contrastive learning strategy. - We show that our PRO-ConvQA method achieves outstanding performances on two benchmark ODConvQA datasets against relevant baselines that use a pipeline approach. ## 2 Related Work Conversational Question Answering ConvQA is similar to the reading comprehension task (Rajpurkar et al., 2016; Trischler et al., 2017) in that it also aims at correctly answering the question from the given reference passage (Choi et al., 2018; Reddy et al., 2019). However, ConvQA is a more difficult task than the reading comprehension task, since ConvQA has to answer questions interactively with users through multi-turns, which requires capturing all the contexts including previous conversational turns and the current question as well as its reference passage. To consider this unique characteristics, a line of research on ConvQA has focused on selecting only the queryrelevant conversation history (Huang et al., 2019; Qu et al., 2019; Chen et al., 2020; Qiu et al., 2021). However, recent work observed that a simple concatenation of the conversational histories outperforms the previous history selection approaches, thanks to the efficacy of the pre-trained language models (Vaswani et al., 2017) in contextualizing long texts (Kim et al., 2021). However, as the conversations often involve linguistic characteristics such as anaphora and ellipsis (Zaib et al., 2022), some work suggested to rewrite the ambiguous questions to explicitly model them (Kim et al., 2021; Vakulenko et al., 2021; Raposo et al., 2022). However, a naïve ConvQA setting assumes a fundamentally unrealistic setting, where the gold reference passages, containing answers corresponding to the questions, are already given. 
Open-Domain ConvQA In order to address the unrealistic nature of the aforementioned ConvQA scenario, some recent work proposed to extend it to the open-retrieval scenario, which aims at retrieving relevant passages in response to the ongoing conversation and then uses them as reference passages, instead of using human-labeled passages. In this setting, effectively incorporating the conversational histories into the retrieval models is one of the main challenges, and several work (Lin et al., 2021; Yu et al., 2021; Mao et al., 2022; Wu et al., 2022) proposed improving the first-stage retrievers, which are trained with particular machine learning techniques such as knowledge distillation, data augmentation, and reinforcement learning. However, their main focus is only on the first-stage retrieval aiming at returning only the query-related candidate passages, without giving exact answers to the questions. Also, some methods, such as ConvDR (Yu et al., 2021) and ConvADR-QA (Fang et al., 2022), use additional questions, which are rewritten from original questions by humans, to improve a retrieval performance by distilling the knowledge from the rewritten queries to the original queries. However, manually-rewritten queries are usually not available, and annotating them requires significant costs; therefore, they are trainable only under specific circumstances. On the other hand, to provide exact answers for the question within the current conversation turn, some other work adapted a retriever-reader pipeline, which can additionally read the query-relevant passages retrieved from a large corpus (Qu et al., 2020; Li et al., 2022c; Adlakha et al., 2022; Fang et al., 2022). However, such a pipeline approach has critical drawbacks due to its structural limitation composed of two sub-modules, thereby requiring additional effort to independently train both the retriever and the reader, both of which are also not runnable in parallel during inference, as well as bounding the reader's performance to the previous retrieval performance. Dense Phrase Retrieval Instead of using a conventional pipeline approach, consisting of the retriever and the reader, we propose to directly predict answers for the ODConvQA task based on dense phrase retrieval. Following this line of previous researches, there exists some work that proposed to directly retrieve phrase-level answers from a large corpus; however, such work mainly focuses on non-conversational domains, such as question answering and relation extraction tasks (Seo et al., 2019; Lee et al., 2021a,b). Specifically, the pioneering work (Seo et al., 2019) used both of the sparse and dense phrase representations for their retrieval. Afterwards, Lee et al. (2021a) improved the phrase retrieval model that uses only dense representations without using any sparse representations, resulting in improved performance while reducing the memory footprint. Motivated by its effectiveness and efficiency, several work recently proposed to use the dense phrase retrieval system in diverse open-retrieval problems (Lee et al., 2021b; Li et al., 2022b; Kim et al., 2022); however, their applicability to our target ODConvQA has been largely underexplored. Therefore, in this work, we adapt dense phrase retrieval to the ODConvQA task for the first time, and further propose to model conversational dependencies in phrase retrieval. 
## 3 Method In this section, we first define the Conversational Question Answering (ConvQA) task, and its extension to the open-domain setting: Open-Domain ConvQA (ODConvQA) in Section 3.1. Then, we introduce our dense phrase retrieval mechanism to effectively and efficiently solve the ODConvQA task, compared to the conventional retriever-reader pipeline approach, in Section 3.2. Last, we explain our novel conversational dependency modeling strategy via contrastive learning, in Section 3.3. ## 3.1 Preliminaries In this subsection, we first provide general descriptions of the ConvQA and the ODConvQA tasks. Conversational Question Answering Let qi be the question and ai be the answer for the i-th turn of the conversation. Also, let p∗ i a reference passage, which contains the answer ai for the question qi. Then, given qi, the goal of the ConvQA task is to correctly predict the answer ai based on the reference passage p∗ i and the previous conversation histories: {qi−1, ai−1, ..., q1, a1}. Here, for the simplicity of the notation, we denote the i-th conversational context as the concatenation of the current input question and the previous conversation histories, formally represented as follows: $$\mathrm{Conv}_{i}=\{q_{i},q_{i-1},a_{i-1},...,q_{1},a_{1}\}.\qquad(1)$$ Then, based on the notation of the conversational context Convi, we formulate the objective of the ConvQA task with a scoring function f, as follows: $$f(a_{i}|\mathsf{Conv}_{i})=M_{c q a}(p_{i}^{*},\mathsf{Conv}_{i};\theta_{c q a}),\quad(2)$$ where Mcqa is a certain ConvQA model that predicts ai from p∗ i based on Convi, which is parameterized by θcqa. However, this setting of providing the reference passage p∗ i containing the exact answer aiis largely unrealistic, since such the gold passage is usually not available when conversing with users in the real-world scenario. Therefore, in this work, we consider the more challenging open-domain ConvQA scenario, where we should extract the answers within the query-related documents from a large corpus, such as Wikipedia. Open-Domain ConvQA Unlike the ConvQA task that aims at extracting the answers from the gold passage p∗ i , the ODConvQA task is required to search a collection of passages for the relevant passages and then extract answers from them. Therefore, the scoring function f of the ODConvQA task is formulated along with the certain passage pj from the large corpus P, as follows: $f(a_{i}|\text{Conv}_{i})=M_{\text{\it ad}}(p_{j},\text{Conv}_{i};\theta_{\text{\it ad}})$, with $p_{j}\in\mathcal{P}$, $$\quad(3)$$ where M*odcqa* is an ODConvQA model parameterized by θ*odcqa*, and P is a collection of passages. Retriever-Reader To realize the scoring function in Equation 3 for ODConvQA, the retrieverreader pipeline approach is dominantly used, which first retrieves the top-K query-relevant passages and then reads a set of retrieved passages to answer the question based on them. Therefore, for this pipeline approach, the scoring function f is decomposed into two sub-components (i.e., retriever and reader), formally defined as follows: $$\begin{array}{c}{{f(a_{i}|\mathsf{Conv}_{i})=M_{r e t r}(\mathcal{P}_{K}|\mathsf{Conv}_{i};\theta_{r e t r}),}}\\ {{\qquad\qquad\times M_{r e a d}(a_{i}|\mathcal{P}_{K};\theta_{r e a d}),}}\end{array}$$ $$\quad(4)$$ where the first-stage retriever M*retr* and the secondstage reader M*read* are parameterized with θ*retr* and θ*read*, respectively. 
Also, PK indicates a set of top-K query-relevant passages, which are retrieved from the large corpus, PK ⊂ P, based on the retriever M*retr*. However, such a retrieverreader pipeline is problematic for the following reasons. First, it is prone to error propagation from the retriever to the reader, since, if M*retr* retrieves irrelevant passages PK that do not contain the answer such that ai ∈ P/ K, the reader M*read* fails to answer correctly. Second, it is inefficient, since M*read* requires the M*retr*'s output as the input; therefore, M*retr* and M*read* are not runnable in parallel. Last, it demands effort to construct both M*retr* and M*read*. ## 3.2 Dense Phrase Retrieval For Odconvqa In order to address the aforementioned limitations of the retriever-reader pipeline for ODConvQA, in this work, we newly formulate the ODConvQA task as a dense phrase retrieval problem. In other words, we aim at directly retrieving the answer ai, consisting of a sequence of words (i.e., phrase), based on its representational similarity to the conversational context Convi via the dense phrase retriever (Lee et al., 2021a). Formally, the scoring function for our ODConvQA based on the phrase retrieval scheme is defined as follows: $$f(a_{i}|\text{Conv}_{i})=E_{C o n v Q}(\text{Conv}_{i})^{\top}E_{A}(a_{i}),\tag{5}$$ where E*ConvQ* and EA are encoders that represent the conversational context Convi and the phraselevel answer ai, respectively. Also, ⊤ symbol denotes inner product between its left and right terms. We note that this phrase retrieval mechanism defined in Equation 5 is similarly understood as predicting the answer in the reading comprehension task (Rajpurkar et al., 2016; Seo et al., 2017). To be specific, in the reading comprehension task, we predict the start and end tokens of the answer ailocated in the gold passage p∗ i . Similarly, in the phrase retrieval task, we directly predict the start and end tokens of the answer which is located within one part of the entire total passages P; therefore, all words in all passages are sequentially pre-indexed and the goal is to find only the locations of the answer based on its similarity to the input context, e.g., Convi. Note that this phrase retrieval approach simplifies the conventional two-stage pipeline approach, commonly used for ODConvQA tasks, into the single direct answer retrieval, by removing the phrase reading done over the retrieved documents. The training objective of the most information retrieval work (Karpukhin et al., 2020; Qu et al., 2021) is to rank the pair of the query and its relevant documents highest among all the other irrelevant pairs. Similar to this, our training objective with a dense phrase retriever is formalized as follows: $${\mathcal{L}}_{n e g}=-\log{\frac{e^{f(a^{+},\mathrm{conv}_{i})}}{e^{f(a^{+},\mathrm{conv}_{i})}+\sum\limits_{k=1}^{N}e^{f(a^{-},\mathrm{conv}_{i})}}},$$ where, for the context Convi, a + is the positive answer phrase and a− is the negative answer phrase. We describe how to construct the negative contextphrase pairs and additional details for training of the dense phrase retriever in the paragraph below. Training Details In order to improve the performance of the dense phrase retriever, we adopt the existing strategies following Lee et al. (2021a). First of all, we construct the negative samples, used in Equation 6, based on in-batch and pre-batch sampling strategy. 
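Before detailing the negative sampling, the following minimal sketch illustrates the scoring function of Eq. (5) and an in-batch version of the objective in Eq. (6). It is not the released PRO-ConvQA code: `conv_emb` and `phrase_emb` are assumed to be the d-dimensional outputs of the two encoders E_ConvQ and E_A for a batch of (context, gold phrase) pairs.

```python
import torch
import torch.nn.functional as F

def phrase_scores(conv_emb: torch.Tensor, phrase_emb: torch.Tensor) -> torch.Tensor:
    """Eq. (5): f(a | Conv) as an inner product between context and phrase vectors.
    conv_emb:   (B, d) embeddings of B conversational contexts.
    phrase_emb: (B, d) embeddings of their gold answer phrases."""
    return conv_emb @ phrase_emb.T              # (B, B) score matrix

def in_batch_phrase_loss(conv_emb: torch.Tensor, phrase_emb: torch.Tensor) -> torch.Tensor:
    """In-batch form of Eq. (6): for context i, phrase i is the positive and the
    other B - 1 phrases in the batch serve as negatives."""
    scores = phrase_scores(conv_emb, phrase_emb)                    # (B, B)
    targets = torch.arange(scores.size(0), device=scores.device)   # diagonal entries are positives
    return F.cross_entropy(scores, targets)    # equals -log softmax of the positive score
```

Pre-batch negatives, as described next, can be added by concatenating phrase embeddings cached from the preceding C batches along the column dimension of the score matrix.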
Specifically, given B phrases in the batch, the (B − 1) in-batch phrases other than the positive phrase of a given conversational context are used as its negative samples. Also, given the preceding C batches, we can obtain additional negative phrases for the current conversational context, with a size of (B × C). In addition to negative sampling, we use the query-side fine-tuning scheme, which optimizes only the conversational question encoder, EConvQ, by maximizing the representational similarities between the correctly retrieved phrases and their corresponding conversational contexts after the phrase indexing. Last, to further improve the start and end span predictions of the phrase retriever, we first train a reading comprehension model and then distill its knowledge, by minimizing the KL divergence of span predictions between the reading comprehension model and the phrase retriever. For more details, please refer to Lee et al. (2021a).

## 3.3 Conversational Dependency Modeling

While Equation 6 effectively discriminates positive answer phrases from negative answer phrases, relying on it alone is sub-optimal for the ODConvQA task, where each conversational turn shares a similar context with its previous turn. In other words, since information-seeking conversational questions are asked in a sequence, two consecutive contexts, Convi−1 and Convi, should have similar representations compared to the other turns from different conversations. Therefore, we further model such a conversational dependency by maximizing the similarity between sequential turns while minimizing the similarity between the other, irrelevant turns via contrastive learning, as follows:

$$\mathcal{L}_{turn}=-\log\frac{e^{f(\text{Conv}_{i},\text{Conv}_{i-1})}}{e^{f(\text{Conv}_{i},\text{Conv}_{i-1})}+\sum_{k=1}^{B-1}e^{f(\text{Conv}_{i}^{-},\text{Conv}_{i-1})}},\tag{7}$$

where $\text{Conv}_{i}^{-}$ comes from a collection of the irrelevant conversation turns within the batch. By optimizing the objective in Equation 7, the encoder EConvQ learns to represent the current conversational turn Convi similarly to its previous turn Convi−1; therefore, the retrieved phrase captures both the current and previous conversational contexts.

Overall Training Objective We jointly optimize the phrase retrieval loss from Equation 6 and the conversational dependency loss from Equation 7 as follows:

$$\mathcal{L}=\lambda_{1}\mathcal{L}_{neg}+\lambda_{2}\mathcal{L}_{turn},$$

where λ1 and λ2 are the weights for each loss term.

## 4 Experimental Setups

In this section, we explain datasets, metrics, models, and implementation details.

## 4.1 Datasets And Metrics

OR-QuAC OR-QuAC (Qu et al., 2020) is the benchmark ODConvQA dataset, which extends a popular ConvQA dataset, namely QuAC (Choi et al., 2018), to the open-retrieval setting. This dataset consists of 35,526 conversational turns for training, 3,430 for validation, and 5,571 for testing.

TopiOCQA TopiOCQA (Adlakha et al., 2022) is another ODConvQA dataset that considers the topic-switching problem across different conversational turns. This dataset contains 45,450 and 2,514 conversational turns for training and validation, respectively. Note that we use the validation set since the test set is not publicly available.

Evaluation Metrics We evaluate all models with F1-score and exact match (EM), following the standard protocol on ODConvQA tasks (Qu et al., 2020; Adlakha et al., 2022).
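As a concrete reference for the training objective of Section 3.3, the sketch below implements the conversational-dependency loss of Eq. (7) and the weighted combination of the two losses. It is an illustrative re-implementation, not the released PRO-ConvQA code: `conv_emb` and `prev_emb` are assumed to be E_ConvQ embeddings of the current and previous contexts in a batch, and the default weights follow the OR-QuAC values reported in Section 4.3.

```python
import torch
import torch.nn.functional as F

def turn_dependency_loss(conv_emb: torch.Tensor, prev_emb: torch.Tensor) -> torch.Tensor:
    """Contrastive loss of Eq. (7): for each previous context Conv_{i-1}, its true next
    turn Conv_i is the positive, and the other current turns in the batch act as the
    irrelevant contexts Conv_i^-.
    conv_emb: (B, d) embeddings of the current conversational contexts.
    prev_emb: (B, d) embeddings of the corresponding previous contexts."""
    scores = prev_emb @ conv_emb.T                                  # scores[j, i] = f(Conv_i, Conv_{j-1})
    targets = torch.arange(scores.size(0), device=scores.device)   # diagonal = consecutive turns
    return F.cross_entropy(scores, targets)

def total_loss(loss_neg: torch.Tensor, loss_turn: torch.Tensor,
               lambda1: float = 4.0, lambda2: float = 1.0) -> torch.Tensor:
    """Weighted combination L = λ1·L_neg + λ2·L_turn; the defaults mirror the
    OR-QuAC setting (λ1 = 4, λ2 = 1) reported in the implementation details."""
    return lambda1 * loss_neg + lambda2 * loss_turn
```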
Also, for retrieval performances, we use the standard ranking metrics: Top-K accuracy, mean reciprocal rank (MRR), and Precision, following Lee et al. (2021b). ## 4.2 Baselines And Our Model We introduce the baselines with a retriever-reader pipeline, which is dominantly adopted for ODConvQA. We do not compare against the incomparable $\Gamma_{\mathrm{i}}-1$ 4. | OR-QuAC | TopiOCQA | | | | |----------------------------------------------------|------------|-------|-------|-------| | F1 | EM | F1 | EM | | | BM25 Ret. + DPR Read. | 30.82 | 11.17 | 13.92 | 4.09 | | DPR Ret. + DPR Read. | 25.94 | 8.15 | 23.13 | 9.06 | | ORConvQA | 28.86 | 14.39 | 10.67 | 2.36 | | PRO-ConvQA (Ours) | 36.84 | 15.73 | 36.67 | 20.38 | | Table 1: F1 and EM scores on OR-QuAC and TopioCQA. | | | | | baselines that use the additional data, such as rewritten queries (Yu et al., 2021; Fang et al., 2022). BM25 Retriever + DPR Reader This is one of the most widely used retriever-reader pipeline approaches that first retrieves query-relevant passages with a sparse retriever, BM25 (Robertson et al., 1994), and then reads top-k retrieved passages with a DPR reader (Karpukhin et al., 2020). DPR Retriever + DPR Reader This pipeline uses a dense retriever for the first retrieval stage, DPR retriever (Karpukhin et al., 2020), which calculates the similarity between a query and passages on a latent space, instead of using a sparse retriever. ORConvQA This model consists of a dense retriever and a reader with an additional re-ranker, which is trained with two phases (Qu et al., 2020): 1) retriever pre-training and 2) concurrent learning. Specifically, it first trains the retriever and generates dense passage representations. Then, the model further trains the retriever, reader, and reranker using the pre-trained retriever and generated passage representations. PRO-ConvQA(Ours) This is our model that directly retrieves answers without passage reading, trained jointly with contrastive learning to further address a conversational dependency issue. ## 4.3 Implementation Details We implement ODConvQA models using PyTorch (Paszke et al., 2019) and Transformers library (Wolf et al., 2020). For all the models, we use the 2018-12-20 Wikipedia snapshot having a collection of 16,766,529 passages. We exclude the questions with unanswerable answers, since we cannot find their answers with the corpus, which is not suitable for the goal of the open-retrieval problem. Furthermore, as our model answers questions extractively, we convert TopiOCQA with the gold answers in a free-form text to our extractive setting by considering the provided rationale as the gold answers, following the existing setting from Jeong et al. (2023). For training PRO-ConvQA, we set ![6_image_0.png](6_image_0.png) the batch size (B) as 24 and the pre-batch size (C) as 2. Also, We train PRO-ConvQA with 3 epochs with a learning rate of 3e − 5 and further fine-tune a query encoder with 3 epochs. We set λ1 and λ2 as 4 and 1 for OR-QuAC and 2 and 1 for TopiOCQA, respectively. For computing resources, we use two GeForce RTX 3090 GPUs with 24GB memory. For retriever-reader baselines, we retrieve top-5 passages to train and evaluate the reader, following Qu et al. (2020). Also, due to the significant costs of evaluating retrieval models, we perform experiments with a single run. ## 5 Results And Discussion In this section, we show the overall results and provide detailed analyses. 
Main Results As Table 1 shows, our proposed PRO-ConvQA model significantly outperforms all baselines with a retriever-reader pipeline on the two benchmark datasets. This implies that the two-stage models are susceptible to error propagation between the retrieval and reader stages, which bounds the overall performance whenever the model fails to correctly retrieve reference passages in the first stage. However, our PRO-ConvQA is free from such a bottleneck problem, since it directly retrieves answer phrases without requiring an additional reader.

Interestingly, the recent ORConvQA model shows largely inferior performance on the TopiOCQA dataset. Note that for TopiOCQA, target passages of two consecutive conversation turns sometimes have different topics, compared to the OR-QuAC dataset where all passages within the whole conversation share a single topic. Therefore, TopiOCQA follows a more realistic setting where the topic constantly changes during the conversation. However, note that ORConvQA is not trained in a truly end-to-end fashion, since it first retrieves passage embeddings from a pre-trained retriever, and then uses the already encoded passage embeddings when concurrently training a retriever, reader, and re-ranker. Therefore, ORConvQA is vulnerable to such a topic-shifting situation, as the passage encoder and embeddings are not updated during the concurrent training step. Meanwhile, our PRO-ConvQA is trained in an end-to-end fashion, thereby effectively learning to retrieve phrases.

| Model | Relative Time | #Q / sec. |
|---|---|---|
| BM25 Ret. + DPR Read. | 16.94 | 1.74 |
| DPR Ret. + DPR Read. | 15.48 | 1.91 |
| ORConvQA | 10.95 | 2.70 |
| PRO-ConvQA (Ours) | 1.00 | 29.6 |

Table 2: Inference efficiency (relative time and #Q / sec.).

Similarly, using BM25 as a first-stage retriever also shows a large performance gap between the two datasets. Note that BM25 lexically measures the relevance between a conversational turn and a passage by counting their overlapping terms. Therefore, compared to the other dense-retrieval-based two-stage models, this characteristic of BM25 brings additional advantages on the OR-QuAC dataset, where each conversational turn revolves around the same topic. More specifically, the conversational history, which is accumulated at each turn, becomes very relevant to the target retrieval passage as the conversation progresses. However, such a lexical comparison scheme fails to effectively retrieve the passages when the topic slightly changes at each conversation turn on TopiOCQA, since it cannot capture the semantic interrelationship between conversational turns and a passage. On the other hand, our PRO-ConvQA shows robust performances on both datasets by retrieving the phrases over the semantic representation space. We further analyze the strengths of PRO-ConvQA in the following paragraphs.

![7_image_1.png](7_image_1.png)

Effectiveness on Retrieval Performance In order to validate whether a failure of the retriever works as a bottleneck in a two-stage pipeline, we measure retrieval performances in Figure 2. Compared to PRO-ConvQA, the models based on the retriever-reader pipeline fail to correctly retrieve relevant reference passages, thus degrading the overall performance. This result corroborates our hypothesis that there exists a bottleneck problem in the first retrieval stage.
Furthermore, this result demonstrates that our PRO-ConvQA also effectively retrieves the related passages at the phrase level, even though it is not directly designed to solve the conversational search task, which aims only at retrieving the passages related to each conversational turn.

Efficiency on Inference Time In the real world, the inference speed for returning answers to the given questions is crucially important. Thus, we report the runtime efficiency of our PRO-ConvQA against the other baselines in Table 2. Note that PRO-ConvQA is far more efficient at searching answer phrases than the baselines with a retriever-reader pipeline. This is because, in those pipelines, the retrieval and reader stages cannot be run in parallel, since the latter reader stage requires the retrieved passages as its input. On the other hand, our proposed PRO-ConvQA is simply composed of a single phrase retrieval stage with two decomposable encoders, as formulated in Equation 5. This decomposable feature enables maximum inner product search (MIPS), thus contributing to fast inference speed.

Ablation Studies To understand how each component in PRO-ConvQA contributes to performance gains, we provide ablation studies in Table 3. As shown in Table 3, our contrastive learning for conversational dependency modeling and the query-side fine-tuning strategy both contribute positively to the overall performance. Furthermore, the significant performance drops when removing each component indicate that there exists a complementary relation between the two components.

![7_image_0.png](7_image_0.png)

Zero-shot Performance In order to apply ODConvQA models in a real-world scenario, one may consider zero-shot performance, since high-quality training data is not always available. Therefore, we show zero-shot performances, assuming that the target training data is only available for OR-QuAC, but not for TopiOCQA. As Figure 3 shows, the proposed PRO-ConvQA outperforms the baseline models by a large margin. This implies that such a zero-shot setting is challenging for the previous ODConvQA models, since they are trained and tested in different topic-shifting settings; they are trained to assume that each turn shares the same topic within a conversation, but tested in a situation where the topic changes as the conversation proceeds. However, PRO-ConvQA is more robust than the other baselines in a zero-shot setting, since its training objective aims at retrieving answers at the phrase level, rather than the passage level, which enables capturing topic shifts with more flexibility.

Efficient Transfer Learning Besides zero-shot performance, transferability between different datasets is another important feature to consider in a real-world scenario. In particular, it would be efficient to reuse a dump of phrase embeddings and indexes even if the target data changes, in terms of the training effort and the disk footprint for storing a large volume of embeddings and indexes. As we have validated the effectiveness of fine-tuning a query encoder in Table 3, it would be more efficient if we could update only the query encoder to adapt to the newly given data, without re-training everything from scratch. To see this, we conduct an experiment in a transfer learning scenario, where a phrase retrieval model is trained on OR-QuAC, but the query-side encoder is further fine-tuned for TopiOCQA and tested on it. As Figure 3 shows, fine-tuning the query-side encoder further improves the performance when compared to the zero-shot model.
This indicates that PRO-ConvQA can be efficiently adapted to diverse realistic settings, incurring only a small adaptation cost.

![8_image_0.png](8_image_0.png)

Generative Reader While our PRO-ConvQA shows outstanding performances under the extractive reader setting, it is also possible to further combine PRO-ConvQA with a recent generative reader model, Fusion-in-Decoder (FiD) (Izacard and Grave, 2021). We conduct experiments with the publicly available FiD model¹, which is already trained on TopiOCQA, without any further training. As Figure 4 shows, our PRO-ConvQA consistently achieves superior F1 and EM scores under the generative reader setting, compared to the DPR baseline. This is because PRO-ConvQA is superior in passage-level retrieval, as shown in Figure 2, which further leads to accurately answering questions with correctly retrieved passages. Also, we believe that the performance could be further improved by additionally training a FiD model on the passages retrieved by PRO-ConvQA, instead of using an already trained one.

¹https://github.com/McGill-NLP/topiocqa

## 6 Conclusion

In this work, we pointed out the limitations of the retriever-reader pipeline approach to ODConvQA, which is prone to error propagation from the retriever, unable to run both sub-modules in parallel, and demanding effort to manage these two sub-modules, due to its decomposed structure. To address such issues, we formulated the ODConvQA task as a dense phrase retrieval problem, which makes it possible to directly retrieve the answer based on its representational similarity to the current conversational context. Furthermore, to model the conversational dependency between the current turn and its previous turns, we force their representations to be similar with contrastive learning, which leads to retrieving phrases that are more relevant to the conversational history as well as the current question. We validated our proposed PRO-ConvQA on ODConvQA benchmark datasets, demonstrating its effectiveness and efficiency.

## Limitations

As shown in Table 3, the contrastive learning strategy that models the conversational dependencies between the current and previous conversational turns is a key element in our phrase retrieval-based ODConvQA approach. However, when the current conversational topic shifts significantly from the previous topic, for example because the user suddenly comes up with new ideas, our contrastive learning strategy might be less effective. This is because modeling the conversational dependency is, in this case, no longer necessary. While we believe such situations are less frequent, one may further tackle this scenario of significant topic switching, for example, with history filtering, which we leave as future work.

## Ethics Statement

We show clear advantages of our PRO-ConvQA framework for ODConvQA tasks compared to the retriever-reader approach from both effectiveness and efficiency perspectives. However, when given a conversational context from malicious users who ask for offensive and harmful content, our PRO-ConvQA framework might become vulnerable to retrieving toxic phrases. Therefore, before deploying our PRO-ConvQA to real-world scenarios, we have to ensure the safety of the retrieved phrases.

## Acknowledgements

This work was supported by Institute for Information and communications Technology Promotion (IITP) grant funded by the Korea government (No. 2018-0-00582, Prediction and augmentation of the credibility distribution via linguistic analysis and automated evidence document collection).
## References Vaibhav Adlakha, Shehzaad Dhuliawala, Kaheer Suleman, Harm de Vries, and Siva Reddy. 2022. Topiocqa: Open-domain conversational question answering with topic switching. *Trans. Assoc. Comput. Linguistics*, 10:468–483. Raviteja Anantha, Svitlana Vakulenko, Zhucheng Tu, Shayne Longpre, Stephen Pulman, and Srinivas Chappidi. 2021. Open-domain question answering goes conversational via question rewriting. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACLHLT 2021, pages 520–534. Association for Computational Linguistics. Yu Chen, Lingfei Wu, and Mohammed J. Zaki. 2020. Graphflow: Exploiting conversation flow with graph neural networks for conversational machine comprehension. In *Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence,* IJCAI 2020, pages 1230–1236. ijcai.org. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac: Question answering in context. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2174–2184. Association for Computational Linguistics. Hung-Chieh Fang, Kuo-Han Hung, Chen-Wei Huang, and Yun-Nung Chen. 2022. Open-domain conversational question answering with historical answers. In Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022, pages 319–326. Association for Computational Linguistics. Hsin-Yuan Huang, Eunsol Choi, and Wen-tau Yih. 2019. Flowqa: Grasping flow in history for conversational machine comprehension. In *7th International Conference on Learning Representations, ICLR 2019, New* Orleans, LA, USA, May 6-9, 2019. Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, pages 874–880. Association for Computational Linguistics. Soyeong Jeong, Jinheon Baek, Sung Ju Hwang, and Jong Park. 2023. Realistic conversational question answering with answer selection based on calibrated confidence and uncertainty measurement. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 477–490, Dubrovnik, Croatia. Association for Computational Linguistics. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, pages 6769–6781. Association for Computational Linguistics. Gangwoo Kim, Hyunjae Kim, Jungsoo Park, and Jaewoo Kang. 2021. Learn to resolve conversational dependency: A consistency training framework for conversational question answering. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, pages 6130–6141. Association for Computational Linguistics. Hyunjae Kim, Jaehyo Yoo, Seunghyun Yoon, Jinhyuk Lee, and Jaewoo Kang. 2022. Simple questions generate named entity recognition datasets. In *Proceedings of the 2022 Conference on Empirical Methods* in Natural Language Processing, EMNLP 2022. Association for Computational Linguistics. 
Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, and Danqi Chen. 2021a. Learning dense representations of phrases at scale. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, pages 6634–6647. Association for Computational Linguistics. Jinhyuk Lee, Alexander Wettig, and Danqi Chen. 2021b. Phrase retrieval learns passage retrieval, too. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 3661– 3672, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Huihan Li, Tianyu Gao, Manan Goenka, and Danqi Chen. 2022a. Ditch the gold standard: Re-evaluating conversational question answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022, pages 8074– 8085. Association for Computational Linguistics. Jiacheng Li, Jingbo Shang, and Julian J. McAuley. 2022b. Uctopic: Unsupervised contrastive learning for phrase representations and topic mining. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022, pages 6159–6169. Association for Computational Linguistics. Yongqi Li, Wenjie Li, and Liqiang Nie. 2022c. Dynamic graph reasoning for conversational opendomain question answering. *ACM Trans. Inf. Syst.*, 40(4):82:1–82:24. Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. 2021. Contextualized query embeddings for conversational search. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language* Processing, EMNLP 2021, pages 1004–1015. Association for Computational Linguistics. Kelong Mao, Zhicheng Dou, and Hongjin Qian. 2022. Curriculum contrastive context denoising for fewshot conversational dense retrieval. In SIGIR '22: The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 176–186. ACM. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, pages 8024–8035. Minghui Qiu, Xinjing Huang, Cen Chen, Feng Ji, Chen Qu, Wei Wei, Jun Huang, and Yin Zhang. 2021. Reinforced history backtracking for conversational question answering. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, pages 13718–13726. AAAI Press. Chen Qu, Liu Yang, Cen Chen, Minghui Qiu, W. Bruce Croft, and Mohit Iyyer. 2020. Open-retrieval conversational question answering. In *Proceedings of* the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, pages 539–548. ACM. Chen Qu, Liu Yang, Minghui Qiu, Yongfeng Zhang, Cen Chen, W. Bruce Croft, and Mohit Iyyer. 2019. Attentive history selection for conversational question answering. In *Proceedings of the 28th ACM* International Conference on Information and Knowledge Management, CIKM 2019, pages 1391–1400. ACM. 
Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An optimized training approach to dense passage retrieval for opendomain question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5835–5847, Online. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In *Proceedings* of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, pages 2383–2392. The Association for Computational Linguistics. Gonçalo Raposo, Rui Ribeiro, Bruno Martins, and Luísa Coheur. 2022. Question rewriting? assessing its importance for conversational question answering. In Advances in Information Retrieval - 44th European Conference on IR Research, ECIR 2022, Stavanger, Norway, April 10-14, 2022, Proceedings, Part II, volume 13186 of *Lecture Notes in Computer Science*, pages 199–206. Springer. Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. Coqa: A conversational question answering challenge. *Trans. Assoc. Comput. Linguistics*, 7:249– 266. Stephen E. Robertson, Steve Walker, Susan Jones, Micheline Hancock-Beaulieu, and Mike Gatford. 1994. Okapi at TREC-3. In *Proceedings of The* Third Text REtrieval Conference, TREC 1994, volume 500-225 of *NIST Special Publication*, pages 109–126. National Institute of Standards and Technology (NIST). Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Min Joon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur P. Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019. Real-time open-domain question answering with dense-sparse phrase index. In *Proceedings of the* 57th Conference of the Association for Computational Linguistics, ACL 2019, pages 4430–4441. Association for Computational Linguistics. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. Newsqa: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, Rep4NLP@ACL 2017, pages 191–200. Association for Computational Linguistics. Svitlana Vakulenko, Shayne Longpre, Zhucheng Tu, and Raviteja Anantha. 2021. Question rewriting for conversational question answering. In WSDM '21, The Fourteenth ACM International Conference on Web Search and Data Mining, Virtual Event, Israel, March 8-12, 2021, pages 355–363. ACM. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, pages 5998– 6008. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 - Demos, pages 38– 45. Association for Computational Linguistics. Zeqiu Wu, Yi Luan, Hannah Rashkin, David Reitter, Hannaneh Hajishirzi, Mari Ostendorf, and Gaurav Singh Tomar. 2022. CONQRR: Conversational query rewriting for retrieval with reinforcement learning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022. Association for Computational Linguistics. Shi Yu, Zhenghao Liu, Chenyan Xiong, Tao Feng, and Zhiyuan Liu. 2021. Few-shot conversational dense retrieval. In SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 829–838. ACM. Munazza Zaib, Wei Emma Zhang, Quan Z. Sheng, Adnan Mahmood, and Yang Zhang. 2022. Conversational question answering: a survey. *Knowl. Inf. Syst.*, 64(12):3151–3195. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? See the 'Limitations' section, after the conclusion. ✓ A2. Did you discuss any potential risks of your work? See the 'Ethics Statement' section, after the conclusion. ✓ A3. Do the abstract and introduction summarize the paper's main claims? See the 'Abstract' and '1. Introduction' sections. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** See '4. Experimental Setups'. ✓ B1. Did you cite the creators of artifacts you used? See '4. Experimental Setups'. ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No, but we followed their licenses. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No, but we followed their licenses. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. See '4. Experimental Setups'. ## C ✓ **Did You Run Computational Experiments?** See '4. Experimental Setups'. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? See '4. Experimental Setups'. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. 
Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? See '4. Experimental Setups'. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? See '4. Experimental Setups'. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? See '4. Experimental Setups'. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
yu-etal-2023-unlearning
Unlearning Bias in Language Models by Partitioning Gradients
https://aclanthology.org/2023.findings-acl.375
Recent research has shown that large-scale pretrained language models, specifically transformers, tend to exhibit issues relating to racism, sexism, religion bias, and toxicity in general. Unfortunately, these pretrained language models are used almost universally in downstream tasks, and natural language processing is often applied to make real-world predictions. Thus, debiasing these language models as early in development as possible is increasingly crucial for preventing unintentional harms caused by natural language systems. To this end, we propose a new technique called partitioned contrastive gradient unlearning (PCGU), a gray-box method for debiasing pretrained masked language models. PCGU aims to optimize only the weights that contribute most to a specific domain of bias, doing so by computing a first-order approximation based on the gradients of contrastive sentence pairs. Our experiments show that PCGU is both low-cost and seems particularly effective at pinpointing the sources of implicit social bias in large pretrained transformers. Although we train using PCGU in the gender-profession domain only, we find that doing so can also partially mitigate bias across other domains. All code for our implementation and experiments can be found at \url{https://github.com/CharlesYu2000/PCGU-UnlearningBias}.
# Unlearning Bias In Language Models By Partitioning Gradients Charles Yu, Sullam Jeoung, Anish Kasi, Pengfei Yu, Heng Ji University of Illinois at Urbana-Champaign {ctyu2,sjeoung2,anishk4,pengfei4,hengji}@illinois.edu ## Abstract Recent research has shown that large-scale pretrained language models, specifically transformers, tend to exhibit issues relating to racism, sexism, religion bias, and toxicity in general. Unfortunately, these pretrained language models are used almost universally in downstream tasks, and natural language processing is often applied to make real-world predictions. Thus, debiasing these language models as early in development as possible is increasingly crucial for preventing unintentional harms caused by natural language systems. To this end, we propose a new technique called partitioned contrastive gradient unlearning (PCGU), a gray-box method for debiasing pretrained masked language models. PCGU aims to optimize only the weights that contribute most to a specific domain of bias, doing so by computing a first-order approximation based on the gradients of contrastive sentence pairs. Our experiments show that PCGU is both low-cost and seems particularly effective at pinpointing the sources of implicit social bias in large pretrained transformers. Although we train using PCGU in the genderprofession domain only, we find that doing so can also partially mitigate bias across other domains. All code for our implementation and experiments can be found at https: //github.com/CharlesYu2000/ PCGU-UnlearningBias. ## 1 Introduction In the past few years, extraordinary improvements have been made to most applications of natural language processing due to the prevalence of large pretrained language models, particularly Transformers (Vaswani et al., 2017). These language models achieve remarkable performance not only because of mechanisms like attention (Bahdanau et al., 2016), but because of rich and diverse natural language corpora scraped from literature and the internet. However, in spite of some measures to ensure that these natural language sentences are high quality (Radford et al., 2019), recent work has shown that pretraining corpora contain many toxic/biased sentences and that neural models trained on such data readily capture and exhibit these biases (Caliskan et al., 2017; May et al., 2019; Gehman et al., 2020; Kurita et al., 2019). Previous studies suggest that embeddings and models encode harmful social biases (Bolukbasi et al., 2016; Caliskan et al., 2017; Kaneko and Bollegala, 2021; Dev et al., 2019; Nangia et al., 2020; Kurita et al., 2019; Nadeem et al., 2020). This can be problematic, as the lack of interpretability in modern language models means that negative stereotypes and social biases encoded in models may lead to unfairness and harms in production systems. Without effective mitigation techniques, finetuned models utilizing these flawed language representations might accidentally inherit spurious correlations not representative of the real world or their target task. To mitigate the representational harms explained in Barocas et al. (2017); Blodgett et al. (2020), we might aim for two goals of different granularities. The first goal proposes to debias a model such that its *predictions* encode the least bias. The second aims to remove social bias throughout a model such that the model minimally *represents* constructs that can cause itself to be biased in its predictions. 
Regardless of the debiasing goal, the north star is to eliminate harms caused by the model, so we must be motivated by how pretrained language models are used. Minimizing the cost of adoption for debiased language models is a high priority for debiasing, as any barriers may cause people to be skeptical of the societal benefits. To ensure that people have little reason not to use our debiased model, we aim to minimize representing bias while still maximizing the representation ability of the model. In this study, we focus on debiasing pretrained language models used directly for masked language modeling. Crucially, we modify only their weights post-hoc without any changes to the architecture or additional modules. In this way, we enable key stakeholders to swap out their masked language models (by simply loading a different set of weights) but still use the exact same code for masked predictions, just as they might with any other finetuned model. Furthermore, stakeholders need not rely on the people pretraining the model to have incorporated debiasing procedures during the pretraining process. We restrict our study to masked language modeling, as the use cases of language models for other downstream tasks are disparate, and extrinsic evaluation of bias in those tasks is often confounded by task-specific finetuning (Meade et al., 2022). We expect, based on the results from Kaneko and Bollegala (2021); Vig et al. (2020), that problematic social biases propagate throughout large portions of language models. Furthermore, based on the Lottery Ticket Hypothesis (Frankle and Carbin, 2019), we hypothesize that most bias is encoded by specific groups of neurons rather than individual weights throughout the model. So, we propose a gradient-based debiasing method called **partitioned contrastive gradient unlearning (PCGU)** to locate where in the model these problematic inferences originate from and to systematically retrain those parts of the model to *unlearn* this biased behavior. In our experiments, we use PCGU to unlearn biases in the gender-profession domain and evaluate our approach using prior association tests for bias/stereotypes. We find that PCGU is seemingly effective both in mitigating bias for the gender-profession domain that it is applied to as well as in generalizing these effects to other unseen domains. In addition, we observe that the procedure exhibits results quickly, requiring very few iterations over the tuning dataset and very little real time until convergence. The hyperparameter search space can be found in Appendix A.

## 2 Related Work

Motivated by the idea that the words in sentences are the root of all the information flowing through language models, static word embeddings were the first target for debiasing (Bolukbasi et al., 2016; Zhao et al., 2018b; Sheng et al., 2019; Nangia et al., 2020; Dev et al., 2019; Karve et al., 2019; Zhang et al., 2018). These methods typically operate via projection onto some subspace that does not encode the targeted bias. However, modern language models do not use external embeddings, so it is not immediately clear that such methods can be applied to transformers. Further efforts have been made to extend those patterns for contextualized embeddings (Dev et al., 2019; Karve et al., 2019; Ravfogel et al., 2020; Kaneko and Bollegala, 2021). However, such studies typically do not account for interactions between different parts of the model when used in actual sentences.
Instead, they focus either on the (static) word embedding layer or on aggregate representations of specific words. Methods that propose debiasing models beyond the word level have also been proposed (Liang et al., 2020; Cheng et al., 2021). However, most of these methods aim only to improve the case where another model will further use the sentence representations generated by the text encoder. Crucially, this does not solve any word-level problems such as masked language modeling. For example, methods like Cheng et al. (2021) add on extra modules, which means that the cost of adoption is more than simply loading a new weights file. In a different vein, methods like Schick et al. (2021) utilize multiple iterative prompts to debias generations only.

Recently, much work in this field has been focused on changing the pretraining or finetuning process to prevent bias from being learned by the language model. Many approaches aim to change the training process for embeddings, classifiers, or encoders, either through changing the training procedure or adding bias-aware terms to the training loss function (Zhao et al., 2018a; Lauscher et al., 2021). Some of this work has achieved success by attempting to "neutralize" the language models' representation of biased words over some bias subspace by finetuning (Kaneko and Bollegala, 2021) or prompt tuning (Yang et al., 2023), or by extending these ideas by reformulating the bias dimensions as a set of implicit dimensions from social psychology (Omrani et al., 2023). Other methods propose changing or augmenting the training data in some way, typically by adding high-quality unbiased or antistereotypical sentences, eliminating blatantly biased or stereotypical sentences, or a combination of the two by replacing texts in the training corpus (Elazar and Goldberg, 2018; Guo et al., 2022; Qian et al., 2022). Yet other techniques utilize counterfactual or adversarial signals to dissuade models from encoding biases (Zhao et al., 2018a; Elazar and Goldberg, 2018; Zhang et al., 2018; Zmigrod et al., 2019; Hall Maudslay et al., 2019; Webster et al., 2020).

Perhaps most similar to our method is work done in the knowledge editing space. Such tasks propose explicitly editing specific knowledge in a model without affecting unrelated knowledge (Sinitsin et al., 2020; Zhu et al., 2020). This is quite similar to our task in that we aim to remove specific social bias from our model without affecting unrelated inference ability. However, our method attempts to remove generalized forms of these biases, as opposed to removing/changing the more targeted and specific knowledge that knowledge editing methods attempt to do. Recent studies include gradient-based methods that train separate networks to predict efficient gradient updates for removing or replacing models' knowledge (Cao et al., 2021; Mitchell et al., 2021).

![2_image_0.png](2_image_0.png)

## 3 Methods

At a high level, PCGU is composed of three parts. First, gradients must be computed for a contrasting pair of sentences whose difference is in the domain that the model is biased in. Next, we apply a weight importance algorithm, based on gradients, to compute a ranked ordering of weights that are most important to our criterion (i.e., the weights that seem to most encode the biases we wish to unlearn). Finally, taking the earlier gradients and ordered weights as input, we compute a first-order approximation of the bias gradient and perform a standard optimization step of our language model.
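As a concrete illustration of these three steps, here is a minimal PyTorch sketch of a single PCGU update on one contrastive sentence pair. The model choice, the example sentence, the row-wise view of each weight matrix as a partition, and the values of k and the learning rate are illustrative assumptions rather than the released implementation; the precise partitioning schemes and update rule are defined in Sections 3.1-3.3.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def masked_token_prob(sentence, filler):
    """Probability the model assigns to `filler` at the [MASK] position."""
    enc = tok(sentence, return_tensors="pt")
    mask_pos = (enc.input_ids == tok.mask_token_id).nonzero()[0, 1]
    logits = model(**enc).logits[0, mask_pos]
    return torch.softmax(logits, dim=-1)[tok.convert_tokens_to_ids(filler)]

def prob_gradients(sentence, filler):
    """Step 1: gradient of that probability w.r.t. every weight matrix."""
    model.zero_grad()
    masked_token_prob(sentence, filler).backward()
    return {n: p.grad.detach().clone() for n, p in model.named_parameters()
            if p.grad is not None and p.dim() == 2}  # bias vectors are not partitioned

context = "The surgeon told the patient that [MASK] would operate soon."
grad_adv = prob_gradients(context, "he")    # advantaged (stereotypical) filler
grad_dis = prob_gradients(context, "she")   # disadvantaged filler

# Step 2: score each partition (here, simply each row of each weight matrix)
# by the cosine similarity of the two gradients; low similarity = bias-relevant.
scored = []
for name in grad_adv:
    sims = torch.nn.functional.cosine_similarity(grad_adv[name], grad_dis[name], dim=1)
    scored.extend((s.item(), name, row) for row, s in enumerate(sims))
scored.sort()  # ascending: the first k entries are the most important partitions

# Step 3: negative optimization step on the advantaged gradient, restricted to
# the k most important partitions (theta <- theta - alpha * grad_adv).
k, alpha = 3000, 2e-6
params = dict(model.named_parameters())
with torch.no_grad():
    for _, name, row in scored[:k]:
        params[name][row] -= alpha * grad_adv[name][row]
```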
In our experiments, we apply this procedure to debias a group of masked transformer language models for the gender-profession domain such that their final parameters encode less inequality for MLM predictions. Specifically, we aim to update the models such that they are not generally biased toward either a stereotypical sentence or an antistereotypical sentence, since even antistereotypes can be harmful (McGowan and Lindgren, 2006). We later evaluate PCGU's efficacy using existing evaluation benchmarks.

## 3.1 Contrastive Gradients

Formally, we can consider BERT (Devlin et al., 2019), or any masked language model in this class, as a probability function M parameterized by its weights $\theta \in \mathbb{R}^d$ (where $d$ is the number of parameters of the model). M computes the probability of a token (which should be masked due to contextual embeddings) conditioned on its right and left contexts. So, given a sentence $s_i = [w_i^1, w_i^2, \ldots, w_i^n]$ where $w_i^j = \texttt{[MASK]}$, we can compute the probability distribution of all possible tokens at index $j$ to investigate the model's biases.

To calculate contrastive gradients in the gender-profession domain, we will employ a subset of the Winogender Schemas dataset (Rudinger et al., 2018). This subset is composed of 240 minimal sentence pairs, where the only difference between the sentences is the gender, either male or female¹, of the pronoun coreferent with the subject of the sentence. The subject of the sentence is always a person referred to by their occupation, so we can interpret the probabilities assigned to the male and female pronouns as the model's stereotype for each occupation. For example, we may have a pair of sentences

s1 = "The professor could not attend the talk because he was preparing for the keynote."

s2 = "The professor could not attend the talk because she was preparing for the keynote."

The pronoun must be assumed by the model, as none of the context entails a gender. For domains other than gender-profession, an analogous dataset with minimally different sentence pairs could be utilized (or sentence tuples for non-binary domains, as described in Appendix E). For each of the sentences in the minimal pair, we compute the probability that the model assigns to the differing token. Using standard backpropagation, we then calculate the gradients, $\nabla_1, \nabla_2 \in \mathbb{R}^d$, of the probabilities with respect to the model's weights $\theta$.

¹We do not claim that gender is binary. However, as the dataset only consists of three pronouns (male, female, neutral such as "they"), we use only the male and female versions to simplify experiments by using "disjoint" terms. A natural extension beyond binary gender words should be possible inductively, as discussed in Appendix E.

## 3.2 Determining Importance Of Weights

Partitioning the Weights. Now, using $\nabla_1$ and $\nabla_2$, we will determine which dimensions of $\theta$ are the ones that seem most important to the representation of bias. To make this method robust, we partition $\theta$ into a set of weight vectors $\theta^1 \in \mathbb{R}^{d_1}, \theta^2 \in \mathbb{R}^{d_2}, \ldots, \theta^m \in \mathbb{R}^{d_m}$ (where $d_1 + \cdots + d_m = d$). The gradient $\nabla_i$ is partitioned into $\nabla_i^1, \ldots, \nabla_i^m$ in the same way. To determine how to partition $\theta$, we hypothesize that a subset of neurons of the model should encode all the biases/preferences of the model in different contexts. This is motivated by the Lottery Ticket Hypothesis (Frankle and Carbin, 2019), which posited that neural networks often contain highly active subnetworks that can be solely trained to solve a task. Here, we propose two related forms of partitioning: input aggregation and output aggregation.
In transformers, input aggregation partitions attention matrices by grouping together the weights that determine how much each element in the input embedding contributes to the key/query/value vectors. Output aggregation partitions the attention matrices by grouping the weights that determine how much each element in the key/query/value vectors is influenced by the input embedding. For non-attention weight matrices such as those used for dense layers, the same concepts apply but for the output embedding rather than the attention vectors. Note that we do not partition bias vectors for either partitioning method.

As an example, consider an $r \times c$ weight matrix $W$ and a $1 \times r$ input embedding vector $\vec{i}$. The left multiplication of $\vec{i}$ by $W$ results in the $1 \times c$ output embedding vector $\vec{o} = \vec{i} \cdot W$. Input aggregation partitioning would partition $W$ into $r$ vectors $(\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_r)$, where each of the vectors $\vec{v}_i$ determines how much the $i$th index of $\vec{i}$ contributes to $\vec{o}$ (since each index $j$ of $\vec{o}$ is computed as $\vec{o}_j = \sum_{i=1}^{r} \vec{i}_i \cdot (\vec{v}_i)_j$). Output aggregation partitioning would instead partition $W$ into $c$ vectors $(\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_c)$, where each of the vectors $\vec{v}_j$ determines how much $\vec{i}$ contributes to the $j$th index of $\vec{o}$ (since $\vec{o}_j$ is the dot product of $\vec{v}_j$ and $\vec{i}$). Therefore, input aggregation partitioning is equivalent to partitioning the right-multiplied matrix by its rows, as illustrated in Figure 1. Similarly, output aggregation partitioning is splitting by its columns. In the 110M parameter version of BERT, using input aggregation partitioning to partition $\theta$ gives us approximately 114k weight vectors and using output aggregation partitioning results in about 88k weight vectors.

Computing Importance of Weight Blocks. Next, we will calculate which vectors of the partition $\{\theta^1, \theta^2, \ldots, \theta^m\}$ seem to most encode the bias. Since our minimal pairs differ only in the gender of the subject noun working in the profession, the gradients will encode the direction of maximal increase in probability for the associated gender term. We expect that some parts of the gradient may encode concepts like grammar, semantics, and syntax, and be similar for both gradients. On the other hand, we expect a few parts of the gradient to be drastically different, as those are the parts of the model to which the gender of the pronoun is highly relevant. With $\{\nabla_1^i\}_{i=1}^{m}$ and $\{\nabla_2^i\}_{i=1}^{m}$ being the partitioned gradients for the two minimally different sentences, we order the weight vectors $\theta^{r_1}, \theta^{r_2}, \ldots, \theta^{r_m}$, where the ordering $\{r_1, r_2, \ldots, r_m\}$ is determined by how different each of the corresponding gradient pieces is. Since the magnitude of each gradient piece is highly dependent on unrelated values, we use only the directions of the vectors to determine the difference between corresponding pieces in the two gradients. Thus, $\theta^1, \theta^2, \ldots, \theta^m$ are ordered by importance, as computed by cosine similarity:

$$\mathrm{Importance}(\theta^{i})=\frac{\nabla_{1}^{i}\cdot\nabla_{2}^{i}}{\|\nabla_{1}^{i}\|\,\|\nabla_{2}^{i}\|}\tag{1}$$

Weight vectors where the associated contrasting gradient pieces have low cosine similarity are thus determined to be most important for the targeted bias.
In contrast, the ones with high similarity are determined to be least important to that bias, but may be more relevant to unrelated concepts or different types of bias.

## 3.3 First-Order Gradient Optimization Step

Finally, we take some subset of the partition of weight vectors and only optimize those parts of $\theta$ to approximate reducing bias. We choose the subset $\theta^{r_1}, \theta^{r_2}, \ldots, \theta^{r_k}$ as the $k$ most important weight vectors. To determine the actual values of the gradient used in this optimization step, we consider the gradients of each pair of sentences in our tuning set. In each pair, we denote one sentence to be the "advantaged" sentence and the other to be the "disadvantaged" sentence. The advantaged sentence is the one that is expected to be more preferred by a biased model and the disadvantaged sentence the one less preferred. In our experiments tuning with Winogender, we use the included statistics about the proportion of gender-occupation coreference pairs in news sentences (Bergsma and Lin, 2006). From these proportions, we choose the sentence with the pronoun that is less often coreferent to be the disadvantaged sentence and the other to be the advantaged sentence. We then relabel the sentence pair $s_1, s_2$ to be $s_{a_1}, s_{a_2}$, where $a_1$ is the index of the advantaged sentence and $a_2$ is the index of the disadvantaged sentence. For example, since the reported proportion of the male-surgeon pair is 0.9566, $a_1 = 1$ is the index of the advantaged sentence and $a_2 = 2$ is the index of the disadvantaged sentence.

Finally, to compose our bias gradient, we take the gradient parts associated with the advantaged sentence (i.e., $\nabla_{a_1}^{r_1}, \nabla_{a_1}^{r_2}, \ldots, \nabla_{a_1}^{r_k}$) and apply a negative optimization step. In this negative optimization step, we perform gradient descent, moving the parameters in the direction of maximal probability decrease of the advantaged term:

$$\theta^{r_{i}}\leftarrow\theta^{r_{i}}-\alpha\,\mathbb{1}\{i\leq k\}\,\nabla_{a_{1}}^{r_{i}},\tag{2}$$

where $\alpha$ is the learning rate. It is also reasonable to move the parameters in the direction of maximal probability increase of the disadvantaged term, which we discuss more in Section 4.6. Similarly, we may decide that using historic counts is too rigid and non-representative of modern models. Instead of using the coreferent pair proportions, we can decide which word is advantaged and which is disadvantaged based on their logits at inference time. We discuss this more in Section 4.7.

## 4 Experiments And Discussion

## 4.1 Datasets

We evaluate our proposed PCGU method on two recent social bias benchmarks: StereoSet (Nadeem et al., 2020) and CrowS Pairs (Nangia et al., 2020). Since PCGU is intended for masked language modeling within a sentence, we use only the intrasentence subset of StereoSet. Each instance in StereoSet is a sentence triplet: one sentence is stereotypical, the second is antistereotypical, and the final sentence is semantically non-sensical.

Girls tend to be more **soft** than boys. (stereo)

Girls tend to be more **determined** than boys. (anti)

Girls tend to be more **fish** than boys. (nonsense)

Each instance in CrowS Pairs is a minimally different pair of stereotypical and antistereotypical sentences. Using these two datasets, masked language models can be evaluated for bias by comparing the probabilities associated with each sentence.
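To make this comparison concrete, the following small sketch scores the three candidate fillers of a StereoSet-style intrasentence triplet with a masked language model. The single-wordpiece treatment of the fillers and the model choice are simplifying assumptions, and the official StereoSet scoring procedure differs in its details.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def filler_probability(context, filler):
    """Probability of `filler` at the [MASK] slot (single-wordpiece fillers only)."""
    enc = tok(context, return_tensors="pt")
    mask_pos = (enc.input_ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        probs = torch.softmax(mlm(**enc).logits[0, mask_pos], dim=-1)
    return probs[tok.convert_tokens_to_ids(filler)].item()

context = "Girls tend to be more [MASK] than boys."
for label, filler in [("stereo", "soft"), ("anti", "determined"), ("nonsense", "fish")]:
    print(f"{label:9s} {filler:11s} {filler_probability(context, filler):.6f}")

# An example counts toward the stereotype score when the stereotypical filler
# receives a higher probability than the antistereotypical one, and toward the
# language modeling score when either meaningful filler beats the nonsense one
# (see Section 4.2).
```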
## 4.2 Evaluation Metrics

The three StereoSet metrics are the Stereotype Score (SS), the Language Modeling Score (LMS), and the Idealized Context Association Test score (ICAT). These metrics are computed by comparing the probability assigned to the contrasting portion of each sentence conditioned on the shared portion of the sentence. The CrowS metric is similar to SS except that it computes the probability of the shared portion of the sentence conditioned on the contrasting portions of each sentence instead. SS and CrowS both measure the proportion of examples where the stereotypical sentence is assigned a higher probability than the antistereotypical sentence. The ideal score is 0.5, indicating no general bias toward either stereotypes or antistereotypes.

| Model | SS → 0.5 (∆) | LMS ↑ | ICAT ↑ | CrowS → 0.5 (∆) |
|---|---|---|---|---|
| bert-base-cased | 0.569 (0.069) | 0.873 | 0.752 | 0.551 (0.051) |
| + PCGU (ours) | 0.534 (0.034) | 0.837 | 0.781 | 0.548 (0.048) |
| + DPCE | 0.624 (0.124) | 0.785 | 0.590 | 0.458 (0.042) |
| + AutoDebias | 0.530 (0.030) | 0.507 | 0.476 | 0.465 (0.035) |
| + PCGU then DPCE | 0.581 (0.081) | 0.849 | 0.712 | 0.452 (0.048) |
| + DPCE then PCGU | 0.569 (0.069) | 0.726 | 0.625 | 0.486 (0.014) |

| Model | SS → 0.5 (∆) | LMS ↑ | ICAT ↑ | CrowS → 0.5 (∆) |
|---|---|---|---|---|
| roberta-base | 0.625 (0.125) | 0.917 | 0.689 | 0.593 (0.093) |
| + PCGU (ours) | 0.570 (0.070) | 0.839 | 0.722 | 0.584 (0.084) |
| + DPCE | 0.641 (0.141) | 0.930 | 0.667 | 0.405 (0.095) |
| + AutoDebias | 0.596 (0.096) | 0.685 | 0.554 | 0.467 (0.033) |
| + PCGU then DPCE | 0.561 (0.061) | 0.860 | 0.755 | 0.311 (0.189) |
| + DPCE then PCGU | 0.588 (0.088) | 0.853 | 0.703 | 0.516 (0.016) |

Table 1: Comparison of PCGU with DPCE and AutoDebias (and their combinations) for bert-base-cased (top) and roberta-base (bottom).

| Model Name | k | Partition method | SS → 0.5 (∆) | LMS ↑ | ICAT ↑ | CrowS → 0.5 (∆) |
|---|---|---|---|---|---|---|
| BERT (base, uncased) | 0 (pretrained) | - | 0.5138 (0.0138) | 0.7724 | 0.7510 | 0.6048 (0.1048) |
| | 14000 | Input | 0.4959 (0.0041) | 0.7675 | 0.7612 | 0.5968 (0.0968) |
| | 11000 | Output | 0.5122 (0.0122) | 0.7626 | 0.7440 | 0.6021 (0.1021) |
| | All | - | 0.4846 (0.0154) | 0.6512 | 0.6311 | 0.6021 (0.1021) |
| BERT (base, cased) | 0 (pretrained) | - | 0.5693 (0.0693) | 0.8729 | 0.7519 | 0.5511 (0.0511) |
| | 3000 | Input | 0.5336 (0.0336) | 0.8372 | 0.7809 | 0.5477 (0.0477) |
| | 9500 | Output | 0.5609 (0.0609) | 0.8571 | 0.7527 | 0.5424 (0.0424) |
| | All | - | 0.5126 (0.0126) | 0.5956 | 0.5806 | 0.5444 (0.0444) |
| RoBERTa (base) | 0 (pretrained) | - | 0.6246 (0.1246) | 0.9170 | 0.6885 | 0.5928 (0.0928) |
| | 22000 | Input | 0.5698 (0.0698) | 0.8389 | 0.7218 | 0.5842 (0.0842) |
| | 8000 | Output | 0.6130 (0.1130) | 0.8953 | 0.6931 | 0.6114 (0.1114) |
| | All | - | 0.5415 (0.0415) | 0.6827 | 0.6260 | 0.5358 (0.0358) |
| ALBERT (base) | 0 (pretrained) | - | 0.5000 (0.0000) | 0.5669 | 0.5669 | 0.5676 (0.0676) |
| | 1000 | Input | 0.4806 (0.0194) | 0.5371 | 0.5163 | 0.4483 (0.0517) |
| | 1300 | Output | 0.4790 (0.0210) | 0.4315 | 0.4134 | 0.4894 (0.0106) |
| | All | - | 0.4839 (0.0161) | 0.4452 | 0.4308 | 0.6068 (0.1068) |

Table 2: Results for each pretrained model, for the best-performing PCGU-tuned models under input and output aggregation partitioning (k is the number of tuned weight vectors), and for the ablation that tunes all weights.

To measure the language modeling abilities of the model, LMS is proposed as the proportion of examples where the stereotypical/antistereotypical sentences are assigned a higher probability than the
non-sensical one. So, an ideal model achieves a score of 1, and debiasing methods should aim to minimally decrease this score during debiasing. In order to measure the tradeoff between better SS and worse LMS after debiasing, ICAT combines the two into a score between 0 and 1 such that a perfectly debiased and accurate model achieves a score of 1 (also, a fully random model achieves a score of 0.5). Full formulations of these metrics can be found in Appendix D. ## 4.3 Experiments We test PCGU on four masked language models: the uncased and cased versions of 110M BERT (Devlin et al., 2019), the 125M version of RoBERTa (Liu et al., 2019), and the 11M version of ALBERT (Lan et al., 2020), all pretrained from the HuggingFace library (Wolf et al., 2020). For each of the models, we report the results of the best-performing model tuned via PCGU using each of the two (input and output) aggregation partitioning methods. Input aggregation models were tuned for at most 15 epochs using a learning rate of α = 2e − 6 and output aggregation models were tuned for at most 10 epochs using a learning rate of α = 1e − 5. On a single NVIDIA Tesla V100 GPU (16GB), using a batch size of 64 pairs from Winogender (so there are 4 batches per epoch), PCGU tuning of BERT with PyTorch takes around 4 seconds per batch using input aggregation partitioning and 50 seconds per batch for output aggregation partitioning 2. The main cost of PCGU, other than the partitioning method which is implementation dependent (and can be quite fast if not made to be a general interface) is only a cosine similarity, so the cost of a single step PCGU is on the order of a single step of finetuning, implying scalability to modern large language models. 2The extra runtime of output aggregation is due only to the specific implementation we used, which indexed into tensors using the range() function to allow for a more generic interface rather than slicing. Slicing indices is much more efficient. Notably, we re-compute weight importance for each batch of b sentence pairs by computing the importance using the batched gradients. This is as opposed to computing the importance for each example pair (i.e., b = 1) or using a static selection of weights computed based on the full dataset. In our testing, we found little discernible difference in using different batch sizes, provided that they were reasonably large (b > 16). Evidently, larger batch sizes allowed the weight importance computation to be more robust. We report the results of these experiments in Table 2. Although the reported PCGU models do not achieve the perfect SS of 0.5, we tend to see significant improvement to the SS compared to relatively little decrease in LMS, leading to an increase in the overall ICAT score for both BERT and RoBERTa. However, this was not the case for ALBERT, whose pretrained version achieved a perfect SS, which might suggest that this method is more effective when knowledge is more distributed (i.e., for larger models) or that our stopping criteria are imprecise. Perhaps unsurprisingly, the CrowS score does not seem to be as affected by PCGU (although it does seem to have slightly improved in all cases). We attribute this observation to the fact that the gradient used for PCGU more closely resembles the probability used for the StereoSet metrics than the probability calculation used for the CrowS metric. Based on our random validation/test split of StereoSet, we find that apparently the dataset is not uniform. 
Therefore, the performance for either SS or LMS of a model on the validation set was not a great indicator of its performance on the test set. The average SS of each of the reported PCGU models on the validation set is within 0.016 of perfect, and mostly within 0.001 of perfect. However, not only do we find that many different models achieve perfect or near-perfect SS on the test set (but not on the validation set as well), but there exist yet other models that achieve high SS across the entire set but poor SS over each of the validation and test sets (Simpson's paradox). As part of a qualitative analysis, we find that most random examples from StereoSet and even our own examples follow the trends shown in Figure 2. This suggests that PCGU debiases by aiming for equality of genders in the sense used in Beutel et al. (2017); Zhang et al. (2018), where the odds of either gender are mostly uncorrelated with the context. In fact, variants of the sentence in Figure 2, such as the sentence "The professor had to write [MASK] keynote" further showcase that non-gendered infills can be minimally affected by PCGU debiasing. Prior to debiasing, the LM predicts "a" and "the" with 88% probability while predicting "his" and "her" with only 7% of the probability mass. After applying PCGU, the probability of "a" and "the" decreases only slightly, to 86%, while the gendered predictions "his" and "her" only increase to 10% of the total probability mass. Notably, PCGU seems effective at targeting actual biases, not simply differences in gender, a phenomenon discussed more in Appendix F. ## 4.4 Comparison With Similar Debiasing Methods We also compare models debiased using PCGU with those debiased by DPCE (Kaneko and Bollegala, 2021) and AutoDebias (Guo et al., 2022), two recent methods that update only the weights of the language model without changes in architecture, in Table 1: DPCE (Kaneko and Bollegala, 2021) is a method that finetunes layers of the model according to their novel objective function seeking to minimize bias in the contextualized word embedding produced at that layer. Their objective function depends on finding sentences in the corpus that utilize bias attribute words and creating a prototype from those words' contexts. Then, DPCE attempts to minimize the shared dimension between the attribute prototype and the contextualized word embedding (similar to the projection-based debiasing methods that subtract from embeddings their projections onto the bias subspace). AutoDebias (Guo et al., 2022) is a method that first searches for MLM prompts whose masked token distribution has the highest disagreement among the demographics chosen for debiasing (for example, the probability of the words "he" and "she" being very different). Then, they use a JensenShannon divergence-based objective function to finetune the model to equalize the demographic distribution across all the generated prompts. We find that PCGU tends to be far more effective than DPCE while AutoDebias produces a close-to-random model. Also, PCGU can significantly debias a model even after applying DPCE, but the opposite is less notable. Thus, as a standalone method, PCGU seems superior to the others. However, since they seem to have different effects (DPCE actually causes LMS to improve in some cases), it may be most effective to chain multiple methods together. 
The main methodological difference that seems to allow PCGU to perform better than DPCE is that PCGU does very targeted finetuning by identifying the weight partitions in the model that should be altered, whereas DPCE finetuning is guided by the loss function only and depends on using high-quality attribute prototypes. In practice, DPCE converges much more slowly than PCGU does, possibly due to this reliance on the prototypes. The relatively poor performance of AutoDebias may be due to the way it finds the prompts with the highest distributional disagreement. This heuristic does not account for the fact that the prompts with the largest distributional disagreement in a strong PLM are often those whose context necessitates one version of a word and may have nothing to do with bias ("The [MASK] tied his shoes" should have a much higher probability for "man" than for "woman", and "The [MASK] person prayed at the synagogue" should have a much higher probability for "Jewish" than for "Muslim").

## 4.5 Weight Importance Ablations

As an ablation test for the weight importance step, we also perform PCGU using all the weights (essentially, taking a backward optimization step for the advantaged sentence). We find that, although the procedure is generally able to debias the language model well, the language modeling functionality is greatly crippled (similar to AutoDebias). This is in stark contrast to the weight partitioning versions, which incur a much smaller decrease in language modeling ability. These results suggest that some form of partitioning is clearly necessary; not all weights of the model contribute equally to bias. We also find that the choice of input vs. output aggregation partitioning does not obviously affect the performance of the debiased models. However, across the experiments, the input partitioning method maintained a slight edge over the output partitioning method.

## 4.6 Decreasing the Advantaged Probability vs. Increasing the Disadvantaged Probability

We also investigate the difference between taking the optimization step in PCGU to decrease the probability of the advantaged sentence and taking it to increase the probability of the disadvantaged sentence. We find that the former results in faster convergence, although the latter does not take much longer to converge to similar performance. In general, the difference in performance depended more on the model selection criteria than on which gradient was used for the tuning. For example, selecting the model based on the SS over the gender and profession domains rather than on the macro-averaged SS (compute SS for each domain and then average) resulted in as much fluctuation in SS on the test set as using the disadvantaged gradient instead of the advantaged gradient did. There are some interesting implications related to the difference in goals for using each gradient. By decreasing the probability of the advantaged sentence, we are more directly teaching the model to be less biased. On the other hand, by increasing the probability of the disadvantaged sentence, we are instead teaching the model to be equally biased toward both forms (compared to other options). In reality, bias comes in many shapes, and our work is motivated by the idea that we want to unlearn the entire class of bias, not just specific examples.
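In terms of the sketch above, the two variants differ only in which gradient is applied to the selected rows; the toy fragment below (with hypothetical shapes and indices) makes the sign conventions explicit.

```python
import torch

# Toy stand-ins for one selected weight matrix and its per-pair gradients
# (in practice these come from the backward passes in the earlier sketch).
weight = torch.nn.Parameter(torch.randn(768, 768))
g_adv = torch.randn(768, 768)    # gradient of log p(advantaged sentence) w.r.t. `weight`
g_dis = torch.randn(768, 768)    # gradient of log p(disadvantaged sentence) w.r.t. `weight`
idx = torch.tensor([3, 17, 42])  # rows chosen by the weight-importance step
lr = 2e-6

with torch.no_grad():
    # Variant used for all reported results: make the advantaged form less likely
    # by stepping against the gradient of its log-probability.
    weight[idx] -= lr * g_adv[idx]
    # Alternative variant: make the disadvantaged form more likely instead.
    # weight[idx] += lr * g_dis[idx]
```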
Unfortunately, a pair of options is not enough to represent the full distribution of options. Therefore, it seems reasonable to believe that decreasing the probability of the advantaged sentence should be more applicable for general forms of bias. Thus, all our experiments report results from this method only.

## 4.7 Dynamically Determining the Advantaged and Disadvantaged Sentence

| Model | SS (Dynamic) | SS (Static) | SS (Pretrained) | LMS (Dynamic) | LMS (Static) | LMS (Pretrained) |
|---|---|---|---|---|---|---|
| BERT (base, uncased) | 0.5106 | **0.4959** | 0.5138 | 0.7659 | 0.7675 | **0.7724** |
| BERT (base, cased) | 0.5777 | **0.5336** | 0.5693 | 0.8687 | 0.8372 | **0.8729** |
| RoBERTa (base) | 0.6213 | **0.5698** | 0.6246 | 0.9128 | 0.8389 | **0.9170** |
| ALBERT (base) | 0.5048 | 0.4806 | **0.5000** | 0.5613 | 0.5371 | **0.5669** |

Table 3: PCGU with dynamic sentence classification (i.e., choosing which sentence is advantaged and which is disadvantaged based on the PLM's own prediction logits) vs. static sentence classification (as reported in Table 2) and the original pretrained model. Bolded values denote the most effective version. Dynamic determination yields results very similar to leaving the original pretrained model unchanged, whereas static sentence classification actually debiases.

We also consider the differences between using a static determination of which sentence is advantaged and a dynamic determination, as alluded to in Section 3.3. A pretrained model's state is highly complex, so the model may need to improve greatly for one region of the bias space and less so for another region. Therefore, it seems likely that one space may become debiased before another space has been debiased. By using a static determination, we resign ourselves to the likelihood that an already debiased space may become biased in the opposite direction while we debias the other space. In other words, it seems likely that the model may overshoot and fail to achieve an ideal overall performance when using the static determination. In our experiments, however, this is not the case; we report the results of PCGU using a dynamic determination in Table 3. At each training step, we dynamically choose the advantaged and disadvantaged sentences based on the logits of the masked token. Since this now allows us to simply aim for equality in the sentences, we then perform the optimization step using the difference in gradients (such that the advantaged sentence probability is decreased and the disadvantaged sentence probability is increased). In all cases, the model's performance both for SS and LMS remained similar to the original pretrained model. Thus, we can conclude that this dynamic determination is not usable for debiasing with PCGU.

## 4.8 Cross-Domain Effects of PCGU

Table 4: SS ranges for out-of-domain biases after PCGU. Observe that the perfect SS of 0.5 is contained in most of these ranges, suggesting that the weight vectors selected for unlearning by PCGU are, in some way, related to biases in general, not just the gender-profession biases encoded in the training data.

The scores for our experiments suggest that PCGU is effective at mitigating the amount of bias in a model without greatly affecting the transformer's ability to perform language modeling. Interestingly, despite the fact that our tuning set for PCGU only contained information related to gender and profession, we see that this procedure is able to change the amount of bias in other domains as well (to varying degrees), as shown in Table 4.
This suggests that perhaps some of the parameters/neurons governing different domains of bias are potentially overlapping, causing some crossdomain convergence during training. However, it is just as possible that the difference in SS may be due only to noise or factors unrelated to bias. An extension of this experiment may be able to determine if different domains of bias can be concurrently or sequentially debiased, possibly via coordinate descent. It also seems reasonable, using the analogous data for other domains of bias mentioned in Section 3.1, to determine which weights are important for separate domains of bias and which are shared. ## 5 Conclusion In this paper, we introduced PCGU, a method to systematically search through a pretrained masked language model to find the origins of its bias and mitigate them. The positive results in our paper suggest that, with the proper data, post-hoc removal of problematic social biases can be efficient and targeted via PCGU. Our findings also support the notion that different types of bias arise from different areas in pretrained transformers. We believe that by focusing on the language model holistically, rather than as a collection of individual pieces, we can more effectively remove representational harms from pretrained language models. It is our hope that future studies are able to leverage PCGU to fully debias language models and increase adoption of fair pretrained models. ## 6 Limitations We acknowledge that the StereoSet and CrowS datasets and metrics are not ideal evaluation measures for debiasing work (see Blodgett et al. (2021) for more details about their pitfalls). We advise practitioners to conduct careful analysis for their specific use case rather than interpreting the scores from our experiments as clear measures of bias mitigation or removal. Furthermore, we realize that in discussion of harms, we should also ensure that allocative harms do not arise from dependency on a PCGU-debiased model. In this paper, we do not report experiments on models finetuned for other downstream tasks, as finetuning is generally more prone to spurious correlations and accidentally encoding bias, so evaluating such models obfuscates the procedure's effect on the pretrained model. Instead, we focused only on the masked language modeling task such that intrinsic and extrinsic evaluations both use the pretrained model directly and only. In the modern age of large language models, this is arguably more applicable, but this setting doesn't take into account the effects of prompts on the prediction distribution. An interesting extension of this study would be to debias using some form of PCGU in the pure generation setting and evaluating with high quality generation-based resources such as HolisticBias (Smith et al., 2022). However, the base form of PCGU is not directly applicable due to the difficulty in attaining and using minimal pairs/tuples in generations. Another related limitation is that our experiments were only conducted in English. However, many languages, such as Spanish or other Romance languages, have a much richer concept of grammatical/lexical gender sometimes affecting multiple words per sentence. Unfortunately, a fundamental problem with interpretability arises if we wish to evaluate the language model's bias implicitly. For example, the prediction in Figure 2 suggests that the debiased model is less biased than a model predicting the full probability mass for the female term. 
Discrete metrics fail to account for this behavior, so better evaluation metrics would also give us a better sense of the efficacy of our proposed method. We further note that gender, which has historically been treated as a binary construct, is likely to be a relatively easy domain to work with. Other more complicated social biases like racism and classism are similarly harmful, and an ideal debiasing procedure should work for all of them. Similar questions may arise about if we can ever comprehensively cover all domains without a better way to generalize across domains. It is also to be seen if PCGU can be directly used for other domains, as our experiments only touched on the intersection of gender and profession biases while observing that this has effects on other domains. Further work would be required to understand why, and in what contexts, PCGU can affect unseen domains; are the cross-domain results in the main paper artifacts of intersectionality (between seen and unseen domains) or is this truly generalizations across a broader notion of bias? Due to the complexity of social bias, it is not obvious if a properly modeled dataset for such other domains of bias can be easily constructed for usage with PCGU. A natural thought would be to attempt to generate training data for PCGU. We attempted this but found that the generations were not reliable in terms of providing signal for what constituted bias. By using a templated dataset like WinoGender, we can ensure that every instance in the training set encodes bias by an assumption of gender based on only the profession. Obviously, partitioning at the most granular level where each single parameter is its own part would make our directional comparison meaningless. However, we did not extensively study how important the specific partitioning method was. An interesting class of experiments would be using some sort of random partitioning, where each individual parameter is assigned to its group of parameters not according to any architectural reason but according to some sort of randomness. Our implementation of this made the gradient selection extremely expensive because it required too much indexing into tensors as opposed to a full replacement of specific dimensions. A better implementation or experiment would be needed to draw actionable conclusions about different partitioning methods. However, our baseline experiments for this matched with the intuition that sampling each weight as being a bias or non-bias weight using a Bernoulli distribution yields a similar effect as regular training with dropout, similar to the k=All experiments in Table 2. ## 7 Other Ethical Considerations This study employed a binary classification of gender in our experimentation and description of the methodology. It is our firm stance that such beliefs have no place in the community, especially considering that language evolves with its users. However, we believe that this narrow view of gender is necessary as a step in the broader direction of full equity. We hope that when high quality datasets displaying non-binary genders are released in a form usable by PCGU, researchers may revisit this paper and study an inductive extension of PCGU. We also recognize the fact that any method used for debiasing may possibly be reversed to train extremely bigoted models, possibly for trolling or targeted harassment. However, we believe that any such practice for PCGU would not be more harmful than existing training methods. 
As observed in our experiments, even when looking to increase the probability of logits only (as opposed to explicitly decreasing the advantaged sentence), the language modeling score still suffers. Therefore, there seems to be no reason to believe that PCGU could create a more biased model than simply finetuning on many bigoted examples. Due to the problems with StereoSet and CrowS alluded to in Section 6, we recognize that experimental results based on those metrics are not conclusive evidence that a model is biased or unbiased (or good at modeling). We urge any reader to make their own judgment about these models through their own qualitative analyses. ## Acknowledgement We are extremely grateful for and would like to thank all our anonymous reviewers for their insights and feedback. This research is based upon work supported by U.S. DARPA CCU Program No. HR001122C0034 and INCAS Program No. HR001121C0165. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. ## References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2016. Neural machine translation by jointly learning to align and translate. Solon Barocas, Kate Crawford, Aaron Shapiro, and Hanna Wallach. 2017. The problem with bias: Allocative versus representational harms in machine learning. In *In Proceedings of SIGCIS*. Shane Bergsma and Dekang Lin. 2006. Bootstrapping path-based pronoun resolution. In *Proceedings of the* 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 33–40, Sydney, Australia. Association for Computational Linguistics. Alex Beutel, Jilin Chen, Zhe Zhao, and Ed H. Chi. 2017. Data decisions and theoretical implications when adversarially learning fair representations. Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 5454– 5476, Online. Association for Computational Linguistics. Su Lin Blodgett, Gilsinia Lopez, Alexandra Olteanu, Robert Sim, and Hanna Wallach. 2021. Stereotyping Norwegian salmon: An inventory of pitfalls in fairness benchmark datasets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1004–1015, Online. Association for Computational Linguistics. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. *Advances* in neural information processing systems, 29:4349– 4357. Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. *Science*, 356(6334):183–186. Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Editing factual knowledge in language models. Pengyu Cheng, Weituo Hao, Siyang Yuan, Shijing Si, and Lawrence Carin. 2021. Fairfil: Contrastive neural debiasing method for pretrained text encoders. Sunipa Dev, Tao Li, Jeff Phillips, and Vivek Srikumar. 2019. 
On measuring and mitigating biased inferences of word embeddings. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 11–21, Brussels, Belgium. Association for Computational Linguistics. Jonathan Frankle and Michael Carbin. 2019. The lottery ticket hypothesis: Finding sparse, trainable neural networks. Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. Realtoxicityprompts: Evaluating neural toxic degeneration in language models. Yue Guo, Yi Yang, and Ahmed Abbasi. 2022. Autodebias: Debiasing masked language models with automated biased prompts. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1012–1023, Dublin, Ireland. Association for Computational Linguistics. Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It's all in the name: Mitigating gender bias with name-based counterfactual data substitution. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5267–5275, Hong Kong, China. Association for Computational Linguistics. Masahiro Kaneko and Danushka Bollegala. 2021. Debiasing pre-trained contextualised embeddings. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1256–1266, Online. Association for Computational Linguistics. Saket Karve, Lyle Ungar, and João Sedoc. 2019. Conceptor debiasing of word representations evaluated on WEAT. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 40–48, Florence, Italy. Association for Computational Linguistics. Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 166–172, Florence, Italy. Association for Computational Linguistics. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In *ICLR*. OpenReview.net. Anne Lauscher, Tobias Lueken, and Goran Glavaš. 2021. Sustainable modular debiasing of language models. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4782–4797, Punta Cana, Dominican Republic. Association for Computational Linguistics. Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and LouisPhilippe Morency. 2020. Towards debiasing sentence representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5502–5515, Online. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 
2019. Roberta: A robustly optimized bert pretraining approach. Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In *Proceedings* of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622–628, Minneapolis, Minnesota. Association for Computational Linguistics. Miranda Oshige McGowan and James Lindgren. 2006. Testing the model minority myth. *Nw. UL REv.*, 100:331. Nicholas Meade, Elinor Poole-Dayan, and Siva Reddy. 2022. An empirical survey of the effectiveness of debiasing techniques for pre-trained language models. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 1878–1898, Dublin, Ireland. Association for Computational Linguistics. Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D. Manning. 2021. Fast model editing at scale. Moin Nadeem, Anna Bethke, and Siva Reddy. 2020. Stereoset: Measuring stereotypical bias in pretrained language models. *arXiv preprint arXiv:2004.09456*. Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-pairs: A challenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953–1967, Online. Association for Computational Linguistics. Ali Omrani, Alireza Salkhordeh Ziabari, Charles Yu, Preni Golazizian, Brendan Kennedy, Mohammad Atari, Heng Ji, and Morteza Dehghani. 2023. Socialgroup-agnostic bias mitigation via the stereotype content model. In Proc. The 61st Annual Meeting of the Association for Computational Linguistics (ACL2023). Rebecca Qian, Candace Ross, Jude Fernandes, Eric Michael Smith, Douwe Kiela, and Adina Williams. 2022. Perturbation augmentation for fairer NLP. In *Proceedings of the 2022 Conference on* Empirical Methods in Natural Language Processing, pages 9496–9521, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 7237–7256, Online. Association for Computational Linguistics. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In *Proceedings of the 2018* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8–14, New Orleans, Louisiana. Association for Computational Linguistics. Timo Schick, Sahana Udupa, and Hinrich Schütze. 2021. Self-diagnosis and self-debiasing: A proposal for reducing corpus-based bias in NLP. Transactions of the Association for Computational Linguistics, 9:1408– 1424. Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407– 3412, Hong Kong, China. Association for Computational Linguistics. Anton Sinitsin, Vsevolod Plokhotnyuk, Dmitriy Pyrkin, Sergei Popov, and Artem Babenko. 2020. Editable neural networks. Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, and Adina Williams. 2022. "I'm sorry to hear that": Finding new biases in language models with a holistic descriptor dataset. In *Proceedings of the 2022 Conference on Empirical Methods* in Natural Language Processing, pages 9180–9211, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart Shieber. 2020. Investigating gender bias in language models using causal mediation analysis. In Advances in Neural Information Processing Systems, volume 33, pages 12388–12401. Curran Associates, Inc. Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Ed H. Chi, and Slav Petrov. 2020. Measuring and reducing gendered correlations in pre-trained models. Technical report. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Huggingface's transformers: State-of-the-art natural language processing. Ke Yang, Charles Yu, Yi R. Fung, Manling Li, and Heng Ji. 2023. Adept: A debiasing prompt framework. AAAI. Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with adversarial learning. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In *Proceedings of the 2018 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20, New Orleans, Louisiana. Association for Computational Linguistics. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and KaiWei Chang. 2018b. Learning gender-neutral word embeddings. In *Proceedings of the 2018 Conference* on Empirical Methods in Natural Language Processing, pages 4847–4853, Brussels, Belgium. Association for Computational Linguistics. Chen Zhu, Ankit Singh Rawat, Manzil Zaheer, Srinadh Bhojanapalli, Daliang Li, Felix Yu, and Sanjiv Kumar. 2020. Modifying memories in transformer models. Ran Zmigrod, Sabrina J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1651–1661, Florence, Italy. Association for Computational Linguistics. ## A Hyperparameter Search For the models reported in Table 2, the only hyperparameter search performed was for the value of k. In general, fewer attempts were made for output aggregation methods, as those took much longer to perform. 
Also, output aggregation and input aggregation resulted in different maximum values of k. The range of k experimented on was based on being near to 10% of available vectors. All k values were chosen uniformly over the provided range (both bounds inclusive) based on the step size. Summary statistics are not included as each k is essentially a different value. 1. bert (both bert-base-uncased and bert-basecased). For input aggregation, k from 2000 to 22000 with a step size of 1000. For output aggregation, k from 5000 to 11000 with a step size of 1500. 2. roberta-base. For input aggregation, k from 2000 to 26000 with a step size of 1000. For output aggregation, k from 5000 to 11000 with a step size of 1500. 3. albert-base-v2. For input aggregation, k from 1000 to 8000 with a step size of 250. For output aggregation, k from 500 to 1500 with a step size of 200. ## B Dataset Download Links CrowS Pairs: https://github.com/nyu-mll/crowspairs StereoSet: https://stereoset.mit.edu/ ## C Dataset Statistics The full CrowS dataset of 1508 examples is used for evaluation. Instances from StereoSet where any of the masked words tokenized to more than one token were discarded, since the masked language models we use do not support joint mask prediction/infilling. In the remaining set, there were 765 instances in the gender domain, 2430 in the profession domain, 2886 in the race domain, and 237 in the religion domain. ## D Evaluation Metrics Given a sentence si = [w 1 i , w2 i , . . . , wn i ] where w j i = [MASK], we can compute the probability distribution of the tokens in the masked index by taking $$M(\cdot|left=[w_{i}^{1},\ldots,w_{i}^{j-1}],$$ $$right=[w_{i}^{j+1},\ldots,w_{i}^{n}],\theta).\tag{3}$$ So, we can compute the probability that the model prefers a specific word in the context of sentence si, where siis understood to have a single [MASK] token at position j, by the notation M(si) = M(w j i|*lef t* = [w 1 i , . . . , w j−1 i]*, right* = [w j+1 i*, . . . , w*n i ], θ). Sentence stis stereotypical, sa is antistereotypical, and the final sentence sn is the non-sensical sentence. As a reminder, for StereoSet we have all three sentences and for CrowS we have only the sensical two sentences. Stereoset. There are three evaluation metrics proposed in the StereoSet dataset: the Stereotype Score (SS), the Language Modeling Score (LMS), and the Idealized Context Association Test score (ICAT). The SS of a model M is the proportion of the sentence pairs in which the model tends to prefer the stereotypical sentence over the antistereotypical sentence. For an evaluation set E, $$s_{\mathcal{S}}(M)=\mathbb{E}_{(s_{t},s_{a},s_{n})\in\mathcal{E}}\mathbb{1}[M(s_{\mathbf{t}})>M(s_{\mathbf{a}})]\tag{4}$$ An ideal model without bias is claimed to have an SS score of 0.5 meaning that it does not prefer either a stereotype or an antistereotype in general. The LMS score measures the basic language modeling capability of a model and is intended to mimic a regression test. It is calculated as how often the model M prefers an acceptable sentence over a meaningless one. $$lms(M)=\frac{1}{2}\mathbb{E}_{(s_{t},s_{a},s_{n})\in\mathcal{E}}\mathbb{1}\left[M(s_{\mathbf{t}})>M(s_{\mathbf{n}})\right]$$ $$+\frac{1}{2}\mathbb{E}_{(s_{t},s_{a},s_{n})\in\mathcal{E}}\mathbb{1}\left[M(s_{\mathbf{a}})>M(s_{\mathbf{n}})\right],\tag{5}$$ where we consider both stereotypical and antistereotypical sentences to be informative. 
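In code, both scores reduce to simple indicator averages over the evaluation triples; a minimal sketch, assuming the probability M(s) has already been computed for each sentence as defined above:

```python
# `scores` is a list of (p_stereo, p_anti, p_nonsense) triples, where each
# probability is M(s) for the corresponding sentence of one StereoSet instance.
def stereotype_score(scores):
    return sum(p_t > p_a for p_t, p_a, _ in scores) / len(scores)            # Eq. 4

def language_modeling_score(scores):
    return (sum(p_t > p_n for p_t, _, p_n in scores)
            + sum(p_a > p_n for _, p_a, p_n in scores)) / (2 * len(scores))  # Eq. 5

# ICAT (defined next) then combines the two:
# icat = language_modeling_score(scores) * min(ss, 1 - ss) / 0.5
```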
A perfect language model should have a score of 1 and a debiased language model should have a score similar to the original language model. ICAT combines SS and LMS as $$i c a t(M)=l m s(M)*{\frac{\operatorname*{min}\{s s(M),1-s s(M)\}}{0.5}}$$ - $\ddot{\textrm{i}}$). (6) A perfect model achieves an ICAT of 1, a fully biased model achieves an ICAT of 0, and a random model achieves an ICAT of 0.5. CrowS Pairs. The CrowS score is also based on the masked language modeling probabilities but computed to condition on the prior probabilities of words. Given a pair of stereotypical and antistereotypical sentences (st, sa), we first split the tokens of each of them into constrastive tokens Ct, Ca (soft vs **determined** in the example from Section 4.1) and overlapping tokens O. We then compute the probability of each sentence via a summation of masked language modeling log probabilities of all overlapping tokens conditioned on the non-overlapping tokens: $$Q(M,{\mathcal{C}})=\sum_{j\in{\mathcal{O}}}\log P(j|{\mathcal{C}},{\mathcal{O}}\backslash\{j\})\qquad(7)$$ Finally, the CrowS metric measures the proportion of CrowS pairs where the model assigned a higher probability to the stereotypical sentence compared to the antistereotypical one: $$c r o w s(M)=\mathbb{E}_{(s_{t},s_{a})\in\mathcal{E}}\mathbf{1}\Big[Q(M,\mathcal{C}_{t})>Q(M,\mathcal{C}_{a})\Big]\tag{8}$$ i ## E Non-Binary Bias Domains To handle the multi-class setting (e.g., religion bias), we can adjust the weight block importance calculation to be based on variance rather than only direction (i.e., run PCA, then choose the weight vectors where the first few principal components explain the most variance in the gradients) and adjust the gradient optimization step to be based on a weighted average of the projection of the gradients. A weighted average of the gradients encodes the same philosophy as the proposed binary form of PCGU from the main paper; consider that the current gradient update of decreasing the advantaged sentence would be identical (other than some scaling) to a weighted average in the case where the gradients point in completely opposite directions (when they are slightly off opposite, it becomes approximate). Also, with a weighted vector average, we can still utilize the philosophy of decreasing the advantaged forms (as suggested in Section 4.6). ## F Facts Vs Bias The boundary between fact and bias can often be blurry. Although we know some sentences may contain unalienable truths, an LM without world knowledge may not. However, it should at least recognize that these sentences *represent* facts. In this sense, both the sentence "Men run faster at the Olympics" and the sentence "Women run faster at the Olympics" could be reasonable (even if one is false). By using WinoGender, we guarantee that all examples for PCGU contain bias, because they necessarily assume a gender. When probing our MLMs with 1. The runner tied [MASK] shoes. 2. The fast runner tied [MASK] shoes. 3. Men run [MASK] than women do. ## 4. Women Run [Mask] Than Men Do. we find that PCGU debiases the distribution of {his, her} for the first two sentences (both of which start out with "his" having the highest probability of all predicted words) but does not touch the distribution of the top words for the last two sentences which are shaped like facts (the distributions for both sentences before PCGU have "faster" with around 90% of the probability mass, followed by "better," "more," and "longer." 
After PCGU, the order of the words remains the same, and the probabilities remain constant as well, other than slight variations on the order of <1%). So, it seems that even without explicitly differentiating "facts" from "bias," the choice of training data allows PCGU to unlearn ideas that are clearly biased and leave those closer to fact untouched. This may also suggest that such facts and biases are encoded in separate parts of the PLM. One nice feature of PCGU is that the decision of which sentence is advantaged/disadvantaged is decoupled from the rest of the method. If one wanted to use training data which may or may not contain fact, it seems reasonable that they could incorporate some fact-checking/NLI model in the scoring function when determining which sentence is advantaged/disadvantaged. Of course, this runs into the problem that a biased scorer may incorrectly perceive an opinion to be factual, so that model itself should be debiased, possibly via a self-training loop with PCGU. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 6 ✓ A2. Did you discuss any potential risks of your work? 7 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3,4 ✓ B1. Did you cite the creators of artifacts you used? 3,4 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 3, Limitations ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 3, Appendix ## C ✓ **Did You Run Computational Experiments?** 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4, Appendix ✓ C3. 
Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
mueller-etal-2023-meta
Meta-training with Demonstration Retrieval for Efficient Few-shot Learning
https://aclanthology.org/2023.findings-acl.376
Large language models show impressive results on few-shot NLP tasks. However, these models are memory and computation-intensive. Meta-training allows one to leverage smaller models for few-shot generalization in a domain-general and task-agnostic manner; however, these methods alone results in models that may not have sufficient parameterization or knowledge to adapt quickly to a large variety of tasks. To overcome this issue, we propose meta-training with demonstration retrieval, where we use a dense passage retriever to retrieve semantically similar labeled demonstrations to each example for more varied supervision. By separating external knowledge from model parameters, we can use meta-training to train parameter-efficient models that generalize well on a larger variety of tasks. We construct a meta-training set from UnifiedQA and CrossFit, and propose a demonstration bank based on UnifiedQA tasks. To our knowledge, our work is the first to combine retrieval with meta-training, to use DPR models to retrieve demonstrations, and to leverage demonstrations from many tasks simultaneously, rather than randomly sampling demonstrations from the training set of the target task. Our approach outperforms a variety of targeted parameter-efficient and retrieval-augmented few-shot methods on QA, NLI, and text classification tasks (including SQuAD, QNLI, and TREC). Our approach can be meta-trained and fine-tuned quickly on a single GPU.
# Meta-Training With Demonstration Retrieval For Efficient Few-Shot Learning Aaron Mueller1∗ Kanika Narang2 **Lambert Mathias**2 Qifan Wang2 **Hamed Firooz**2 1Johns Hopkins University, Baltimore, MD 2 Meta AI, Menlo Park, CA [email protected], {kanika13,mathiasl,wqfcr,mhfirooz}@meta.com ## Abstract ![0_Image_0.Png](0_Image_0.Png) Large language models show impressive results on few-shot NLP tasks. However, these models are memory and computation-intensive. Metatraining allows one to leverage smaller models for few-shot generalization in a domaingeneral and task-agnostic manner (Min et al., 2022a; Wei et al., 2022; Chen et al., 2022); however, these methods alone results in models that may not have sufficient parameterization or knowledge to adapt quickly to a large variety of tasks. To overcome this issue, we propose meta-training *with demonstration retrieval*, where we use a dense passage retriever to retrieve semantically similar labeled demonstrations to each example for more varied supervision. By separating external knowledge from model parameters, we can use meta-training to train parameter-efficient models that generalize well on a larger variety of tasks. We construct a meta-training set from UNIFIEDQA and CROSSFIT, and propose a demonstration bank based on UNIFIEDQA tasks. To our knowledge, our work is the first to combine retrieval with meta-training, to use DPR models to retrieve demonstrations, and to leverage demonstrations from many tasks simultaneously, rather than randomly sampling demonstrations from the training set of the target task. Our approach outperforms a variety of targeted parameter-efficient and retrievalaugmented few-shot methods on QA, NLI, and text classification tasks (including SQuAD, QNLI, and TREC). Our approach can be metatrained and fine-tuned quickly on a single GPU. ## 1 Introduction Large language models (LLMs) have become increasingly popular due to their impressive fewshot performance on many NLP tasks and domains (Brown et al., 2020; Chowdhery et al., 2022). This has resulted in many few-shot learning methods based on LLMs that require ever-larger GPUs and ∗Work done as an intern at Meta. increasing computation. Methods requiring no parameter updates such as in-context learning (Brown et al., 2020) and parameter-efficient methods like Adapters (Houlsby et al., 2019) partially mitigate these downsides, but ultimately, larger computation budgets are increasingly necessary to achieve stateof-the-art few-shot performance—even to simply load models and perform inference. Meta-learning (Vilalta and Drissi, 2002; Finn et al., 2017) and meta-training (Min et al., 2022a) are methods that make smaller language models capable of quicker and more robust few-shot performance across multiple tasks and domains. However, smaller models may not be able to store enough knowledge for effective generalization in many domains and tasks simultaneously. Retrieval is one way to overcome this: by separating parametric knowledge in the language model from external knowledge (stored as retrievable text), one can leverage much more information than could be stored in the parameters of a language model. For example, retrieval-augmented generation (RAG; Lewis et al., 2020) and retrieval-enhanced transformers (RETRO; Borgeaud et al., 2022) retrieve natural language passages to improve performance on knowledge-intensive NLP tasks, although they do not perform meta-learning or meta-training and only evaluate on high-resource knowledgeintensive tasks. 
We thus propose **meta-training with demonstration retrieval** as a more parameter-efficient way to leverage demonstrations for few-shot learning. We retrieve semantically similar labeled demonstrations for each training and test example during meta-training and fine-tuning. On a relatively small sequence-to-sequence model (BARTlarge, 440M parameters), we show our proposed approach is capable of generalizing quickly and well on a variety of downstream tasks (Table 1). Inspired by retrieval-augmented generation (RAG) models (Lewis et al., 2020), we use a dense passage retriever (DPR; Karpukhin et al., 2020) to retrieve demonstrations instead of Wikipedia passages. We retrieve semantically similar demonstrations from a large and diverse bank (§3.3) that is compiled from many existing question answering tasks (App. A), rather than randomly sampling demonstrations from the training set of the target task like most contemporary work (Min et al., 2022a; Brown et al., 2020; Gao et al., 2021). Our experiments show that our method (§3) outperforms tailored efficient few-shot baselines and other retrieval-augmented models on various tasks, including natural language inference (NLI), paraphrase detection, and extractive question answering (§5). To our knowledge, our work is the first to combine retrieval with meta-training (or multitask training more broadly), to use DPR models to retrieve demonstrations, and to leverage demonstrations from many tasks simultaneously, rather than retrieving random or k-nearest demonstrations from the training set of the target task. Our code is available on GitHub.1 ## 2 Related Work Meta-learning (Vilalta and Drissi, 2002; Finn et al., 2017) is a class of methods that supervise a model on *how to learn*; the goal is to leverage a collection of meta-training tasks to learn a better learning algorithm that generalizes to held-out tasks. Inspired by meta-learning, some recent stud-1https://github.com/facebookresearch/ metatrained-demRAG ies have attempted to induce specific abilities in language models in a task- and domain-agnostic manner via **meta-training**; this entails directly supervising a model on labeled examples from various tasks (sometimes using some controlled format or template (Chen et al., 2022; Wei et al., 2022)) to directly induce specific abilities or better inductive biases that improve generalization. Metatraining is typically accomplished via some form of controlled multi-task learning, as in Min et al. (2022a). Many studies have explored multi-task and multi-domain learning (Khashabi et al., 2020; Zhong et al., 2021; Aghajanyan et al., 2021; Ye et al., 2021; Wei et al., 2022), but these studies often leverage tasks that improve a model's abilities for some specific (set of) downstream tasks. In meta-training, we aim to directly improve the learning algorithm via controlled supervision, which should improve out-of-distribution generalization by teaching a model some helpful ability—such as in-context learning—that can result in gains on various downstream tasks (Min et al., 2022a). We focus on meta-training with examples from QA datasets. Few-shot learning is a common setting in which a model is supervised on only a few labeled examples. Many methods for improving fewshot performance are based on scaling model and data size (Brown et al., 2020; Chowdhery et al., 2022). Our goal is to improve few-shot performance across tasks in a computation- and memoryefficient manner, so we focus on smaller models that can be trained efficiently on a single GPU. 
Some parameter-efficient few-shot methods have been proposed, including cloze-style prompting (Schick and Schütze, 2021b), fine-tuning with manually tuned (Schick and Schütze, 2021a) and automatically tuned prompts and demonstrations (Gao et al., 2021), and meta-learning (Yu et al., 2018; Bansal et al., 2020; Bao et al., 2020). One advantage of our approach is that it does not require significant prompt tuning: rather, we standardize all of our tasks into a single format, similar to Chada and Natarajan (2021). This saves human time and computational resources. Crucially, these approaches compare probabilities of single tokens or small pre-selected label sets; thus, they cannot be used for open-domain tasks like question answering. Some work has proposed *generative* few-shot methods for opendomain tasks: this includes reformatting the input data to match a model's pre-training format (Chada and Natarajan, 2021), pre-training models to select relevant spans from context passages (Ram et al., 2021), and running a secondary pre-training step on labeled classification data (Mueller et al., 2022). Our model should be effective on many tasks, even when the label space is large and differs across examples; thus, our method is based on a *generative* sequence-to-sequence model. In-context learning (ICL; Brown et al., 2020) is increasingly used in few-shot methods; here, labeled *demonstrations* are concatenated to the same context as a test example to teach a model how to perform a task without additional gradient updates. Studies have analyzed what kinds of demonstrations are most effective (Liu et al., 2022), as well as what makes demonstrations effective (Min et al., 2022b; Xie et al., 2022). Our demonstration retrieval approach is most similar to Liu et al. (2022), who encode demonstrations and test examples into a sentence embedding space and retrieve the knearest demonstrations. Our method differs in multiple ways: we use dense passage retrievers instead of sentence embeddings; we use demonstrations from many training sets instead of the training set of the target task; and we perform gradient updates with demonstrations, which is more feasible on our relatively small BARTlarge-based model. Wei et al. (2022) find that very large LMs (>68B parameters) are required for ICL to be effective, but Min et al. (2022a) find that meta-training can be used to make a much smaller model (GPT2large, 774M parameters) capable of leveraging demonstrations. Here, we make BARTlarge (440M parameters) better at leveraging demonstrations through meta-training with demonstrations, like Min et al. (2022a); however, their method is designed for zero-shot generalization, and it selects from a constrained set of pre-defined labels. Our method is designed for *few-shot* settings and can be applied to open-domain tasks. Retrieval-augmented generation models consist of two components: *generators* and *retrievers*. The generator is typically a decoder-only LM (Guu et al., 2020) or sequence-to-sequence (seq2seq) model (Lewis et al., 2020; Izacard and Grave, 2021); we use seq2seq models. The retriever is most often a dense passage retrieval (DPR; Karpukhin et al., 2020) model based on BERTbase. RAG models are typically evaluated on knowledge-intensive tasks like abstractive QA and fact verification. Thus, the memory bank typically consists of Wikipedia passages, which augments the model with additional factual knowledge separate from the generator's parameters. Izacard et al. 
(2022) adapts this architecture for few-shot knowledge-intensive tasks using a very large generator (T5X(X)L) and a Contriever-based (Izacard et al., 2021) retriever. However, we are interested in more general-purpose methods, as well as more parameter- and memory-efficient methods that train or fine-tune quickly on a single GPU. Thus, we propose a task-agnostic and domain-general method to improve smaller generative models for few-shot settings: specifically, a retrieval-augmented meta-training step and a memory bank of labeled QA demonstrations instead of Wikipedia passages.

## 3 Method

## 3.1 Retrieval-Augmented Generation

As we wish to retrieve similar labeled examples for every input, our architecture takes inspiration from retrieval-augmented generation (RAG) models (Lewis et al., 2020), which consist of a pre-trained sequence-to-sequence component (we use BARTlarge) and a pre-trained dense passage retriever (DPR) component. Given an input x, the DPR component retrieves the K most semantically similar memory entries {zk}1,...,K from the memory bank z. Retrieval is performed using a BERT-based input encoder EI on x and a BERT-based demonstration encoder ED on z to encode both into a vector space, and then running maximum inner product search:2

$$\{z_k\}_{1,\ldots,K} = \operatorname{top-}K\left\{E_I(x)^{\top}E_D(z)\right\} \tag{1}$$

The DPR component also returns the inner products themselves as document scores pη(zk|x). The input and retrieved entries are then passed to a pre-trained sequence-to-sequence model, BARTlarge, for autoregressive generation. At each timestep, we marginalize over the retrieved demonstrations by creating K separate input contexts, consisting of the input x and one retrieved entry zk. We then sum over BART's token probabilities pθ given each context, weighted by zk's document score:3

$$p(y|x) \approx \prod_{i}^{N}\sum_{k}^{K} p_{\eta}(z_k|x)\, p_{\theta}(y_i \mid x, z_k, y_{1:i-1}) \tag{2}$$

Table 1: Evaluation tasks and dataset statistics.

| Category | Dataset | Type | #Train | #Test | L | \|Y\| |
|---|---|---|---|---|---|---|
| Extractive QA | SQuAD (Rajpurkar et al., 2016) | Open QA | 86,588 | 10,507 | 10 / 120 | - |
| Extractive QA | BioASQ (Tsatsaronis et al., 2015) | Open QA | 24,559 | 1,504 | 10 / 200 | - |
| Extractive QA | QASC (Khot et al., 2020) | Multi-choice QA | 8,134 | 926 | 8 / 18 | - |
| Knowledge-intensive QA | TriviaQA (Joshi et al., 2017) | Open QA | 61,688 | 7,785 | 13 / 677 | - |
| Knowledge-intensive QA | TextbookQA (Kembhavi et al., 2017) | Open QA | 15,154 | 1,503 | 10 / 581 | - |
| Classification | TREC (Voorhees and Tice, 2000) | Question class. | 5,452 | 500 | 10 | 6 |
| Classification | MRPC (Dolan and Brockett, 2005) | Paraphrase class. | 3,668 | 408 | 22 / 21 | 2 |
| Classification | MNLI (Williams et al., 2018) | NLI | 392,702 | 9,815 | 22 / 11 | 3 |
| Classification | MNLI-mm (ibid.) | NLI | 392,702 | 9,832 | 22 / 11 | 3 |
| Classification | QNLI (Wang et al., 2018) | NLI | 104,743 | 5,463 | 11 / 30 | 3 |

## 3.2 Meta-Training

To adapt a sequence-to-sequence model for general-purpose demonstration retrieval and answer generation, we perform a meta-training step by supervising the model with demonstrations on a collection4 of 18 QA tasks (Table 7). We update the parameters of the BART component of our model during meta-training by supervising BART (using its normal cross-entropy loss) to generate the question and its answer given the question and a set of retrieved demonstrations.
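Before turning to the meta-training data, the sketch below shows how Eq. 1 and Eq. 2 combine in a single supervised step. The function signatures and tensor names are illustrative assumptions (pre-encoded demonstration vectors, a tokenizer, and an optimizer over BART's parameters are taken as given), not the exact training code; decoder start-token handling is simplified.

```python
import torch

def retrieve(input_vec, demo_vecs, K=5):
    """Eq. 1: maximum inner product search over the encoded demonstration memory."""
    scores = demo_vecs @ input_vec          # inner products, used as p_eta(z_k | x)
    return torch.topk(scores, K)            # (values, indices)

def meta_training_step(bart, tokenizer, optimizer, source, target, demos, demo_scores):
    """One supervised step: marginalize BART's token probabilities over the K
    retrieved demonstrations (Eq. 2) and minimize the NLL of the target
    (question + answer) sequence. Only BART's parameters are updated."""
    p_eta = torch.softmax(demo_scores, dim=0)
    labels = tokenizer(target, return_tensors="pt").input_ids          # (1, T)
    mixed = torch.zeros(labels.shape[1] - 1)
    for k, demo in enumerate(demos):
        enc = tokenizer(f"{demo} {source}", return_tensors="pt", truncation=True)
        out = bart(input_ids=enc.input_ids,
                   attention_mask=enc.attention_mask,
                   decoder_input_ids=labels[:, :-1])                   # simplified teacher forcing
        probs = torch.softmax(out.logits, dim=-1)                      # (1, T-1, vocab)
        tok_p = probs[0].gather(-1, labels[0, 1:, None]).squeeze(-1)   # p_theta per target token
        mixed = mixed + p_eta[k] * tok_p                               # weight by retrieval score
    loss = -(mixed.log().sum())                                        # -log p(y | x), Eq. 2
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

Because each demonstration occupies its own context, the number of demonstrations used is bounded by K rather than by the encoder's maximum input length.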
We use QA tasks due to the semantic diversity of inputs and labels; compare this to text classification tasks, where the label space is much smaller and labels are often less informative. We modify and use the QA meta-training task collections from Min et al. (2022a). This consists of various extractive, multiple-choice, and/or abstractive QA tasks from CROSSFIT and a subsample of UNIFIEDQA (Khashabi et al., 2020, 2022), including NaturalQuestions, MCTest, BIOMRC, inter alia. We modify the meta-training collections by (1) removing our evaluation sets if they are present,5 and (2) standardizing the format of each task. Our final meta-training collection contains 32 tasks, which we subsample to 18 tasks based on semantic similarity to our evaluation tasks; see Appendix A for a full list of tasks and details on our semantic subsampling procedure, and §5.2 for a description of the downstream effect of semantic subsampling.

3 This is similar to the RAG-Token approach in Lewis et al. (2020). The number of demonstrations we can use is not limited by the context length since we marginalize over each demonstration in its own separate context.

4 Throughout this study, we use "task" to refer to a single dataset like SQuAD or NaturalQuestions, and "collection" to refer to the dataset obtained by concatenating a set of tasks.

5 We also remove any examples where the question has a

Following Chada and Natarajan (2021), we standardize each input in the meta-training data to a "question: ... \n answer: [MASK] \n context: ..." format. Then, the output sequence consists of both the question and answer sequences,6 which aligns with BART's pre-training objective of reconstructing the entire input sequence (not just masked spans). Like Chada and Natarajan (2021), we find that aligning the input/output format with BART's pre-training objective makes a positive difference for downstream performance. For QASC, which is a multiple-choice QA task, we put all of the answer options in the context field before the two context sentences and generate the full answer string. This outperformed all other formats we tried by a significant margin.7

For classification tasks, we use the same question/answer/context format. For our single-sentence classification task (TREC), we place the input in the question field, and present all of the possible labels in the context field using a similar format as for QASC. For sentence-pair classification tasks (MRPC, MNLI(-mm), QNLI), we place the first sentence or hypothesis in the question field and place the second sentence or premise in the context field. As with QA tasks, we generate both the question and answer fields in the target sequence, but only evaluate F1 on answer sequences.

## 3.3 Demonstration Memory

For the demonstration memory bank, we use training sets from UNIFIEDQA, excluding our evaluation tasks; the memory contains examples from 16 tasks. UNIFIEDQA has approximately 40% overlap with the QA meta-training collection, and no overlap with the non-QA collection. See Table 8 in Appendix A for a full list of tasks in our demonstration memory bank. We format each demonstration in the memory bank in the same question/answer/context format as described above, except that demonstrations have the ground-truth label after the "answer:" header instead of a [MASK] token. Note that memory entries consist of a text passage (the demonstration) and a title; for the title, we simply use the answer to the question.
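The hypothetical helper below collects these formatting conventions in one place; the option lettering and exact whitespace are simplifications of the format described above, not a prescribed template.

```python
MASK = "[MASK]"

def format_input(question, context, answer=None, options=None):
    """Render an example in the question/answer/context format.
    Demonstrations carry their gold answer; test inputs keep the mask token."""
    if options:
        # Multiple-choice or label-set tasks: list the options in the context
        # field, before the passage (lettering is an illustrative choice).
        context = " ".join(f"({chr(65 + i)}) {o}" for i, o in enumerate(options)) + " " + context
    return f"question: {question} \n answer: {answer or MASK} \n context: {context}"

# Extractive QA test input: the answer slot holds the mask token.
passage = "BRCA1 is a human tumor suppressor gene ..."
test_input = format_input("What kind of gene is BRCA1?", context=passage)

# Demonstration from the memory bank: the gold answer fills the answer slot.
demo = format_input("Where is the Eiffel Tower located?",
                    context="The Eiffel Tower is a landmark in Paris.",
                    answer="Paris")

# TREC-style single-sentence classification: the sentence goes in the question
# field and the candidate label set is listed in the context field.
trec_input = format_input("Who wrote Hamlet?", context="",
                          options=["human", "location", "number"])
```

At training time, the target sequence is the same string with the gold answer substituted for the mask, so the model learns to reconstruct the question together with its answer.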
## 4 Experimental Setup

We evaluate on a variety of QA and classification tasks (Table 1). We select open-domain QA tasks from the MRQA shared task (Fisch et al., 2019) to reflect a variety of extractive QA formats, including a standard QA benchmark (SQuAD), a domain-specific challenging benchmark (BioASQ), and two knowledge-intensive QA benchmarks (TriviaQA and TextbookQA).8 Our few-shot QA splits of size {16, 32, 64, 128} for these tasks are from Ram et al. (2021), which are themselves derived from MRQA (Fisch et al., 2019). We also generate few-shot splits for QASC, which is a multiple-choice QA task; we evaluate on QASC to determine whether our model is also effective in dealing with much shorter contexts, and to ensure that it is not overfitting to more typical MRQA-style extractive tasks.

Our few-shot classification task splits are from Gao et al. (2021). We evaluate on sentence-pair classification tasks which are not contained in our meta-training or demonstration tasks; sentence-pair classification tasks like natural language inference (NLI) and paraphrase classification can be easily reformatted to our question/answer/context format. We also evaluate on TREC, which is a single-sentence text classification task where the model must guess the *category* of the answer to a question (e.g., human, location, number), rather than the answer itself. For each task and few-shot split size, we average scores across 5 random few-shot samples.

8While "knowledge-intensive" does not have a standard definition or straightforward measurement, the length of the contexts may act as a proxy for how knowledge-intensive a question answering task is. Contexts for our knowledge-intensive tasks are much longer, and thus require a model to synthesize much more information and/or retrieve information that is more relevant to the inputs to semantically prime the model for question-specific information.

## 4.1 Baselines

We compare against strong efficient few-shot methods, as well as similar models that will tell us why our method performs better. Note that our approach is *generative*, unlike iPET and LM-BFF; thus, it is usable on a wider variety of tasks.

FewshotQA (Chada and Natarajan, 2021). A few-shot question answering method. We compare to the FewshotBARTL model, which is based on BARTlarge like our model and is the best-performing variant. We use the same few-shot splits such that we can directly compare to the numbers reported in that paper. We also try meta-training this non-retrieval-augmented model, which is essentially our method without retrieval; we call this baseline FewshotQA-m.

Splinter (Ram et al., 2021). A few-shot question answering model pre-trained to select salient spans from context passages.

RAG (Lewis et al., 2020). The original RAG-Token model with a memory of Wikipedia passages. We use the released model fine-tuned on NaturalQuestions (NQ), as this was the best-performing RAG model on our tasks. To see whether our demonstration memory is more effective than Wikipedia passages when meta-training, we also try meta-training the RAG model with its Wikipedia memory; we call this baseline RAG-m.

iPET (Schick and Schütze, 2021b). A manual prompt-tuning approach that induces better few-shot performance than GPT3 with much smaller LMs. We tune the best-performing ALBERTxxl (Lan et al., 2020) model on our tasks.

LM-BFF (Gao et al., 2021). An automatic prompt-tuning approach based on RoBERTalarge (Liu et al., 2019). It requires no unlabeled text data to work well, unlike iPET.
This model and iPET compare token probabilities to perform classification, so we cannot use them for open-domain tasks like question answering. Thus, we only compare to these models on classification.

![5_image_0.png](5_image_0.png)

## 4.2 Hyperparameters

For meta-training, we use hyperparameters from Min et al. (2022a) where possible: init. LR 1×10−5, effective batch size 8,9 training for a maximum of 30,000 steps. We checkpoint every 2,000 steps and select the checkpoint with the lowest mean loss on our 16-shot QA training sets. Meta-training finishes in ≈14 hours on 1 A100 GPU (40GB).10

For fine-tuning, we use hyperparameters from Chada and Natarajan (2021) where possible: init. LR 2 × 10−5, batch size 4, fine-tuning for a maximum of 1,000 steps or 35 epochs (whichever is larger). We checkpoint every 2 epochs and select the checkpoint with the highest exact match on the training set. Fine-tuning finishes in 30–60 minutes on 1 A100 GPU (40GB). For each meta-training and fine-tuning input, we retrieve 5 demonstrations from the memory.11

## 5 Results

Our model's F1 scores for extractive question answering (Figure 2) are higher than models of similar parameterizations, including similar models that have been meta-trained using the same training data. Our model also outperforms strong classification approaches on TREC, MNLI, and QNLI (Table 2). Thus, **meta-training with semantically similar demonstrations induces a more general-purpose system that can perform well across a variety of low-resource downstream tasks.**

| Model | TREC | MNLI | MNLI-mm | QNLI | MRPC | Avg. |
|-------------|-----------|-----------|-----------|-----------|-----------|------|
| Majority | 18.8 | 32.7 | 33.3 | 49.5 | 81.2 | 43.1 |
| RoBERTa | *88.8±2.1 | *45.8±6.4 | *47.8±6.8 | *60.2±6.5 | 76.6±2.5 | 63.8 |
| iPET | *85.0±4.1 | 71.2±1.7 | 71.8±2.6 | *70.3±6.2 | 70.4±4.7 | 73.7 |
| LM-BFF | *89.4±1.7 | 70.7±1.3 | *72.0±1.2 | *69.2±1.9 | *78.1±3.4 | 75.9 |
| FewshotQA | 91.0±2.0 | *47.9±6.3 | *46.1±5.9 | *61.0±6.4 | *67.6±4.8 | 62.7 |
| FewshotQA-m | 92.4±1.4 | *50.1±1.0 | *50.6±2.5 | *71.8±2.1 | 74.0±3.7 | 67.8 |
| RAG | *81.1±2.0 | *62.4±0.9 | *61.8±1.2 | *74.9±1.5 | 70.2±3.3 | 70.1 |
| RAG-m | *87.8±1.7 | *70.0±1.4 | 69.1±1.4 | *83.2±1.5 | 74.9±2.8 | 77.0 |
| Ours | 91.7±1.3 | 72.9±1.7 | 69.6±1.4 | 84.4±1.8 | 73.4±2.5 | 78.4 |

Contrast this with RAG, which often performs worst out of each model we test across tasks. Thus, the architecture itself is not inherently strong in few-shot settings, suggesting that meta-training makes a significant contribution to increased performance. This is also supported by the increased performance we observe with FewshotQA and RAG after meta-training, though note that meta-training does not help FewshotQA to the same extent it helps retrieval-augmented models. Also note that FewshotQA does not perform well on classification tasks, whereas our method achieves performance exceeding or close to the strongest baselines. This means that the combination of meta-training and retrieval enables a more general-purpose model than either of these components separately.

![6_image_0.png](6_image_0.png)

With meta-training, RAG-m obtains performance much closer to our model. This tells us that meta-training is responsible for much of the performance gains we observe, though the demonstration memory bank also improves performance to a lesser extent. On MRPC, RAG-m outperforms our model, indicating that there exist some non-knowledge-intensive tasks where Wikipedia passages are more helpful than QA demonstrations.
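For reference, the §4.2 settings can be collected into a small configuration sketch. The dictionary keys and the helper that computes the fine-tuning step budget are illustrative; they are not taken from the released code, only from the numbers reported above.

```python
# Settings reported in Section 4.2, gathered into plain dicts (illustrative keys).
META_TRAIN = {
    "init_lr": 1e-5,
    "effective_batch_size": 8,
    "max_steps": 30_000,
    "checkpoint_every_steps": 2_000,
    "selection": "lowest mean loss on the 16-shot QA training sets",
}

FINE_TUNE = {
    "init_lr": 2e-5,
    "batch_size": 4,
    "checkpoint_every_epochs": 2,
    "selection": "highest exact match on the training set",
    "retrieved_demonstrations": 5,
}

def max_finetune_steps(num_examples: int, batch_size: int = 4) -> int:
    """Fine-tune for a maximum of 1,000 steps or 35 epochs, whichever is larger."""
    steps_per_epoch = max(1, num_examples // batch_size)
    return max(1_000, 35 * steps_per_epoch)

print(max_finetune_steps(16))    # 16-shot split: the 1,000-step floor dominates
print(max_finetune_steps(128))   # 128-shot split: the 35-epoch budget (1,120 steps) dominates
```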
## 5.1 Knowledge-Intensive QA

We also evaluate on few-shot knowledge-intensive QA tasks (Figure 3): here, TriviaQA and TextbookQA, using the few-shot splits from the MRQA shared task. While these are also technically extractive QA tasks, their contexts have an average length of 677 and 581 words, respectively, meaning that BART will likely struggle more to synthesize all of the information in these tasks (even with retrieval). We find that FewshotQA outperforms our method on both of these tasks, and that even Splinter outperforms our method at larger split sizes for TextbookQA. This means that demonstration retrieval may be actively harmful for these tasks. Thus, our meta-training method is optimizing RAG architectures for non-knowledge-intensive tasks, but not for knowledge-intensive tasks.

| Model | SQuAD | BioASQ | QASC | TriviaQA | TbQA |
|---------------|-------|--------|------|----------|------|
| FewshotQA | 68.9 | 63.0 | 82.6 | 65.2 | 37.7 |
| FewshotQA-m | 76.6 | 63.4 | 85.9 | 65.9 | 38.2 |
| RAG-m | 80.0 | 62.9 | 88.9 | 66.6 | 27.7 |
| Ours | 83.9 | 64.7 | 89.2 | 62.9 | 37.2 |
| Ours (oracle) | 93.5 | 94.2 | 99.1 | 80.7 | 83.2 |

Wikipedia passages are more effective than demonstrations in the memory bank for TriviaQA as well, as indicated by RAG-m outperforming our approach. However, meta-training with or without the memory bank still induces far better performance than the base RAG model, which performs worse than all baselines except Splinter. Thus, our method is still improving over RAG, making this model more versatile and better able to handle such tasks even if it is not the optimal approach.

## 5.2 Ablations

Here, we perform further analyses to understand the contribution of individual model components and (meta-)training decisions.

Memory bank. We find that performance is generally higher for question answering and classification when retrieving demonstrations instead of Wikipedia passages, as in Figure 2 and Table 2. This raises two questions: how much could the memory bank impact downstream performance in the best-case scenario? Relatedly, what is the upper bound on performance for our model given the best possible demonstration memory bank? To obtain an estimate, we create an *oracle* memory consisting of labeled test examples from our evaluation data. We find that scores significantly improve over our method and others in this setting, indicating that **this architecture has significant potential to achieve further gains if the memory bank is improved.**

Number of retrieved demonstrations. Is retrieving more demonstrations always better? We compare performance when retrieving K = {0, 1, 5, 10, 25, 50} demonstrations during fine-tuning and evaluation on non-knowledge-intensive QA (SQuAD) and sentence-pair classification (MNLI).

![7_image_0.png](7_image_0.png)

![7_image_1.png](7_image_1.png)

Our results (Figure 4) show that F1 scores begin to saturate at 5–10 demonstrations for both tasks. However, using more demonstrations generally does not harm performance; the model is able to handle less helpful demonstrations without performance decreasing significantly.

Why is retrieval helpful? Is the model abstracting semantic content from the retrieved demonstrations for improved performance, or is it simply learning to copy token sequences from the retrieved demonstrations? As an initial test, we can correlate the frequency of the ground-truth answer sequence in the retrieved documents with F1 scores on our QA tasks.
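Operationally, the check just described can be run with a few lines of code: measure how often the gold answer string appears in the retrieved demonstrations for each example, then correlate that with per-example F1. This is a sketch; the retrieval logs and per-example F1 values are assumed inputs, and the use of Pearson correlation is our choice rather than the paper's specification.

```python
from statistics import correlation  # Pearson r; requires Python 3.10+

def answer_hit_rate(retrieved_texts, gold_answer):
    """Fraction of retrieved demonstrations whose text contains the gold answer string."""
    gold = gold_answer.lower()
    return sum(gold in t.lower() for t in retrieved_texts) / max(1, len(retrieved_texts))

def correlate_hits_with_f1(examples):
    """examples: list of dicts with 'retrieved' (list[str]), 'answer' (str), 'f1' (float)."""
    hits = [answer_hit_rate(ex["retrieved"], ex["answer"]) for ex in examples]
    f1s = [ex["f1"] for ex in examples]
    return correlation(hits, f1s)

# Toy data standing in for real retrieval logs
examples = [
    {"retrieved": ["answer: paris ...", "answer: lyon ..."], "answer": "Paris", "f1": 1.0},
    {"retrieved": ["answer: 1901 ...", "answer: 1905 ..."], "answer": "1903", "f1": 0.2},
    {"retrieved": ["answer: oxygen ...", "answer: oxygen gas"], "answer": "oxygen", "f1": 0.9},
]
print(correlate_hits_with_f1(examples))
```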
Our results (Figure 5) suggest that the model is indeed learning to retrieve certain text strings from the demonstrations. This provides one possible path forward for improving the memory bank: higher semantic overlap with one's evaluation task increases the likelihood of these overlaps, so future work could focus on collecting (or perhaps generating) more semantically similar demonstrations that feature more lexical overlaps.

| Retriever | SQuAD | BioASQ | QASC | TriviaQA | TbQA |
|-------------|-------|--------|------|----------|------|
| Random | 1.8 | 1.5 | 1.2 | 1.8 | 2.3 |
| DPR (Wiki) | 11.5 | 1.8 | 15.7 | 4.9 | 24.3 |
| DPR (PAQ) | 16.9 | 1.5 | 26.1 | 29.3 | 24.0 |
| Contriever | 14.1 | 7.3 | 28.0 | 27.9 | 24.3 |

| Retriever | SQuAD | BioASQ | QASC | TriviaQA | TbQA |
|-------------|-------|--------|------|----------|------|
| Random | 74.3 | 61.8 | 88.7 | 56.5 | 29.6 |
| DPR (Wiki) | 83.9 | 64.7 | 89.2 | 62.9 | 37.2 |
| DPR (PAQ) | 78.8 | 63.5 | 86.8 | 57.6 | 33.5 |
| Contriever | 81.1 | 62.5 | 88.7 | 58.9 | 32.4 |

However, this does not explain how retrieval improves performance on classification tasks, where the label space is small and labels are less informative. For NLI, the label space includes "entailment"/"neutral"/"contradiction", which we would not expect to see often in our demonstrations and which do not carry significant semantic content. Yet retrieval-augmented models outperform FewshotQA by a large margin on MNLI(-mm), so what is helping our model? There could exist some QA demonstrations which semantically prime our model toward correct completions, though sentence embedding similarity may not capture this helpfulness. Future work could ablate over specific features in the demonstrations.

What type of retriever is best? For our experiments thus far, we have used the DPR component of the RAG-Token (NQ) model, which is pre-trained on Wikipedia and fine-tuned on NaturalQuestions. Is this an optimal starting point, or would some other retriever be better? We compare against a DPR model pre-trained on the Probably-Asked Questions (PAQ; Lewis et al., 2021) dataset, as well as the Contriever model (Izacard et al., 2021). Contrievers are unsupervised, whereas DPR models receive explicit supervision during pre-training.

| Memory | SQuAD | BioASQ | QASC | TriviaQA | TbQA |
|----------------------------|-------|--------|------|----------|------|
| All tasks | 83.5 | 63.2 | 89.2 | 61.4 | 36.8 |
| Semantically similar tasks | 83.9 | 64.7 | 89.2 | 62.9 | 37.2 |

DPR tends to perform better when the downstream task is similar to the pre-training or fine-tuning data; however, in our case, demonstration retrieval is dissimilar from Wikipedia passage retrieval, and Contriever may handle larger train-test shifts better (Izacard et al., 2021). We evaluate both the relevance of the retrieved demonstrations (Table 4) and downstream F1 (Table 5) on our QA tasks. We find that DPR (PAQ) and Contriever are both better at retrieving similar demonstrations, as measured by the frequency with which they retrieve examples that contain the answer. For BioASQ, only Contriever retrieves more relevant demonstrations than a random retriever. However, retrieving more relevant demonstrations does not translate into increased downstream performance: DPR (Wiki) consistently outperforms the others. Why?
Through qualitative analysis, we find that DPR (Wiki) retrieves more semantically diverse demonstrations, whereas DPR (PAQ) and Contriever retrieve demonstrations that are technically more similar to the test example, but also less diverse *across* test examples.Thus, there should be a balance between diversity and relevance: completely random retrieval is not effective (as indicated by our random retrieval baseline scoring worst), but neither is the more constrained demonstration set we retrieve using an arguably more optimal retriever. Meta-training data. Is meta-training helpful because of the variety of tasks included in our setup (the *more is better* hypothesis), or would it be better to select meta-training data in a more principled way (the *similar datasets are better* hypothesis)? We compare downstream performance when meta-training on all QA tasks from MetaICL versus the top tasks by mean instancelevel semantic similarity to our evaluation tasks (Table 6). To compute semantic similarity, we use the stsb-roberta-base-v2 model from SentenceTransformers (Reimers and Gurevych, 2019) and compute the mean pairwise cosine similarity between the 16-shot training examples in our evaluation tasks and all examples in a meta-training task. ![8_image_0.png](8_image_0.png) We then select the top tasks by similarity until we have over 240,000 examples (enough for 30,000 training steps using batch size 8). See Appendix A for a list of meta-training tasks before and after subsampling. We find that **selecting meta-training data** based on semantic similarity to our evaluation tasks is helpful for both our QA and **non-QA** tasks: F1 increases across tasks when only metatraining on the most similar data. This contrasts with the findings of Min et al. (2022a), who find that more meta-training tasks is generally better. ## 6 Conclusions We have proposed a meta-training method (§3.2) that retrieves (§3.1) semantically similar demonstrations from a diverse demonstration bank (§3.3). Our method achieves higher performance on average across many tasks than other strong parameterefficient few-shot baselines (§5). In future work, one could explore a mixture of demonstration retrieval and passage retrieval for improved performance on a wider variety of tasks—including knowledge-intensive tasks. ## Limitations Our method requires access to a large set of labeled examples for the memory bank—ideally with some relevance to the evaluation tasks. This limits the languages and tasks that are optimal for this method: there does not exist a large variety of training examples for low-resource language varieties, nor for certain much more specific tasks—as in, for example, industry applications with domainspecific customer data. And while multilingual models could leverage cross-lingual transfer, it is unclear how well this model would generalize into low-resource languages when (for example) using multilingual BART. When using the full demonstration memory, meta-training does not run on a 16GB GPU using our current implementation. While this does exclude more common GPUs, our approach could still run quickly on a 32GB GPU in a few hours, thus costing far less than pre-training a language model of comparable few-shot performance from scratch. ## References Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta. 2021. Muppet: Massive multi-task representations with pre-finetuning. 
In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language* Processing, pages 5799–5811, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Trapit Bansal, Rishikesh Jha, Tsendsuren Munkhdalai, and Andrew McCallum. 2020. Self-supervised metalearning for few-shot natural language classification tasks. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP), pages 522–534, Online. Association for Computational Linguistics. Yujia Bao, Menghua Wu, Shiyu Chang, and Regina Barzilay. 2020. Few-shot text classification with distributional signatures. In *8th International Conference on Learning Representations, ICLR 2020, Addis* Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and Laurent Sifre. 2022. Improving language models by retrieving from trillions of tokens. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Rakesh Chada and Pradeep Natarajan. 2021. FewshotQA: A simple framework for few-shot learning of question answering tasks using pre-trained text-totext models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6081–6090, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, and He He. 2022. Meta-learning via language model in-context tuning. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 719–730, Dublin, Ireland. Association for Computational Linguistics. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. 
Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. *CoRR*, abs/2204.02311. William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of *Proceedings of Machine Learning Research*, pages 1126–1135. PMLR. Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In *Proceedings of the 2nd Workshop on Machine Reading for Question Answering,* MRQA@EMNLP 2019, Hong Kong, China, November 4, 2019, pages 1–13. Association for Computational Linguistics. Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816–3830, Online. Association for Computational Linguistics. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Retrieval augmented language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of *Proceedings of Machine Learning Research*, pages 3929–3938. PMLR. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of *Proceedings* of Machine Learning Research, pages 2790–2799. PMLR. Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Towards unsupervised dense information retrieval with contrastive learning. CoRR, abs/2112.09118. Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 874– 880. Association for Computational Linguistics. Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane DwivediYu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot learning with retrieval augmented language models. *CoRR*, abs/2208.03299. Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2021. Billion-scale similarity search with gpus. IEEE Trans. Big Data, 7(3):535–547. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. 
In *Proceedings of the 55th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics. Aniruddha Kembhavi, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Are you smarter than a sixth grader? textbook question answering for multimodal machine comprehension. In *2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pages 5376–5384. Daniel Khashabi, Yeganeh Kordi, and Hannaneh Hajishirzi. 2022. Unifiedqa-v2: Stronger generalization via broader cross-format training. *CoRR*, abs/2202.12359. Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. Unifiedqa: Crossing format boundaries with a single QA system. In *Findings of the* Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of *Findings of ACL*, pages 1896–1907. Association for Computational Linguistics. Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2020. Qasc: A dataset for question answering via sentence composition. *Proceedings of the AAAI Conference on* Artificial Intelligence, 34(05):8082–8090. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In *8th International Conference on Learning Representations,* ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledgeintensive nlp tasks. In *Advances in Neural Information Processing Systems*, volume 33, pages 9459– 9474. Curran Associates, Inc. Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Küttler, Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. 2021. PAQ: 65 million probably-asked questions and what you can do with them. *Transactions of the Association for Computational Linguistics*, 9:1098–1115. Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022. What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 100–114, Dublin, Ireland and Online. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2022a. MetaICL: Learning to learn in context. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2791–2809, Seattle, United States. Association for Computational Linguistics. 
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022b. Rethinking the role of demonstrations: What makes in-context learning work? *CoRR*, abs/2202.12837. Aaron Mueller, Jason Krone, Salvatore Romeo, Saab Mansour, Elman Mansimov, Yi Zhang, and Dan Roth. 2022. Label semantic aware pre-training for fewshot text classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8318– 8334, Dublin, Ireland. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, and Omer Levy. 2021. Few-shot question answering by pretraining span selection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3066–3079, Online. Association for Computational Linguistics. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Timo Schick and Hinrich Schütze. 2021a. Exploiting cloze-questions for few-shot text classification and natural language inference. In *Proceedings of the* 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255–269, Online. Association for Computational Linguistics. Timo Schick and Hinrich Schütze. 2021b. It's not just size that matters: Small language models are also fewshot learners. In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339–2352, Online. Association for Computational Linguistics. George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R. Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, Yannis Almirantis, John Pavlopoulos, Nicolas Baskiotis, Patrick Gallinari, Thierry Artières, Axel-Cyrille Ngonga Ngomo, Norman Heino, Éric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos, and Georgios Paliouras. 2015. An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition. *BMC Bioinform.*, 16:138:1–138:28. Ricardo Vilalta and Youssef Drissi. 2002. A perspective view and survey of meta-learning. *Artif. Intell. Rev.*, 18(2):77–95. Ellen M. Voorhees and Dawn M. Tice. 2000. Building a question answering test collection. In *Proceedings* of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '00, page 200–207, New York, NY, USA. Association for Computing Machinery. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. 
In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Jason Wei, Maarten Paul Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew Mingbo Dai, and Quoc V. Le. 2022. Finetuned language models are zero-shot learners. In *International Conference on Learning Representations*. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. 2022. An explanation of in-context learning as implicit bayesian inference. In *International Conference on Learning Representations*. Qinyuan Ye, Bill Yuchen Lin, and Xiang Ren. 2021. CrossFit: A few-shot learning challenge for crosstask generalization in NLP. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 7163–7189, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Mo Yu, Xiaoxiao Guo, Jinfeng Yi, Shiyu Chang, Saloni Potdar, Yu Cheng, Gerald Tesauro, Haoyu Wang, and Bowen Zhou. 2018. Diverse few-shot text classification with multiple metrics. In *Proceedings of* the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1206–1215, New Orleans, Louisiana. Association for Computational Linguistics. Ruiqi Zhong, Kristy Lee, Zheng Zhang, and Dan Klein. 2021. Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections. In *Findings of the Association for Computational* Linguistics: EMNLP 2021, pages 2856–2878, Punta Cana, Dominican Republic. Association for Computational Linguistics. ## A Tasks Meta-training. Our meta-training data is from MetaICL's (Min et al., 2022a) meta-training sets. Specifically, we use the QA task collection from the paper, which is a mixture of CROSSFIT and UNIFIEDQA tasks as shown in Table 7. We exclude any task on which we evaluate. As in MetaICL, we subsample 16,384 examples per task such that no individual task is overrepresented during metatraining. Some tasks are sampled twice due to the inclusion of both CROSSFIT and UNIFIEDQA versions of some tasks, as in Min et al. (2022a). All meta-training tasks: biomrc, boolq, freebase_qa, hotpot_qa, kilt_hotpotqa, kilt_nq, kilt_trex, kilt_zsre, lama-conceptnet, lama-google_re, lama-trex, mc_taco, numer_sense, quoref, ropes, search_qa, supergluemultirc, superglue-record, tweet_qa, web_questions, unifiedqa:boolq, unifiedqa:commonsenseqa, unifiedqa:drop, unifiedqa:narrativeqa, unifiedqa:natural_questions_with_dpr_para, unifiedqa:newsqa, unifiedqa:physical_iqa, unifiedqa:quoref, unifiedqa:race_string, unifiedqa:ropes, unifiedqa:social_iqa, unifiedqa:winogrande_xl Subsampled by similarity: biomrc, boolq, freebase_qa, hotpot_qa, lama-google_re, quoref, ropes, superglue-multirc, superglue-record, unifiedqa:boolq, unifiedqa:commonsenseqa, unifiedqa:drop, unifiedqa:narrativeqa, unifiedqa:natural_questions_with_dpr_para, unifiedqa:newsqa, unifiedqa:quoref, unifiedqa:race_string, unifiedqa:ropes Table 7: Tasks used in our meta-training data. 
We subsample 16,384 examples per task to ensure balanced supervision during meta-training. All tasks are from CROSSFIT unless prefixed with "unifiedqa:". We also perform a targeted subsampling procedure, where we select tasks by semantic similarity to our evaluation tasks. For this, we compute the mean pairwise semantic similarity between a meta-training task's examples and one 16-shot split of each of our evaluation tasks, then select metatraining tasks in decreasing order of similarity. Semantic similarity is computed by calculating the cosine similarity of the sentence embeddings from the stsb-roberta-base-v2 model in SentenceTransformers (Reimers and Gurevych, 2019). Demonstrations. Our demonstrations are from the UNIFIEDQA collection, which includes extractive, abstractive, and multiple-choice QA tasks as shown in Table 8. We exclude any task on which we evaluate. Note that there is some overlap between the demonstration set and the meta-training set, though the demonstrations contain the correct answer whereas the meta-training examples do not. Demonstration task bank: unifiedqa:ai2_science_middle, unifiedqa:boolq, unifiedqa:commonsenseqa, unifiedqa:drop, unifiedqa:mctest, unifiedqa:narrativeqa, unifiedqa:natural_questions_with_dpr_para, unifiedqa:newsqa, unifiedqa:openbookqa, unifiedqa:openbookqa_with_ir, unifiedqa:physical_iqa, unifiedqa:quoref, unifiedqa:race_string, unifiedqa:ropes, unifiedqa:social_iqa, unifiedqa:winogrande_xl Table 8: Tasks used in our demonstration memory bank. Note that there is no subsampling within each task, since the retriever can simply ignore irrelevant demonstrations. All tasks are from UNIFIEDQA. ## B **Format Tuning For Multiple-Choice Qa** Chada and Natarajan (2021) observe significant performance gains by simply changing the format of the QA inputs and outputs. We use a format similar to theirs for most QA tasks, but it is not immediately clear how to extend the question/answer/context format to multiple-choice QA, or if including the answer options in the context would be helpful at all. Thus, we try three different formats for QASC and compare performance. Every example consists of a question q, two context sentences c1 and c2, a set of 8 answer options with letter labels {aA, aB, *. . .*, aH}, and a correct answer a ∈ {aA*, . . . , a*H}. We can generate either the full answer string, or the letter label of the answer i, where i ∈ {*A, B, . . . , H*}. We try putting the answer options in the question or the context, excluding the answer options altogether, generating the answer string a, and generating the answer letter i. Our results using BARTlarge (Table 9) indicate that generating the answer is better than just generating the letter label, that including the options in the context is helpful, and that excluding the options from the context or putting the options in the question is harmful to performance. The performance gap between different formats is *very* large, which aligns with the findings of Chada and Natarajan (2021): using an example format aligned with the model's pre-training format is one of the most important factors contributing to few-shot performance. 
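The three QASC input formats compared in Table 9 can be expressed as small templating functions. This sketch assumes options are rendered as "(A) ... (H) ..." exactly as in the example column; the function names are ours.

```python
def render_options(options):
    """options: list of answer strings in A..H order."""
    letters = "ABCDEFGH"
    return " ".join(f"({letters[i]}) {opt}" for i, opt in enumerate(options))

def options_in_question(question, options, c1, c2):
    return f"question: {question} {render_options(options)} \n answer: [MASK] \n context: {c1} {c2}"

def options_in_context(question, options, c1, c2):
    return f"question: {question} \n answer: [MASK] \n context: {render_options(options)}. {c1} {c2}"

def no_options(question, c1, c2):
    return f"question: {question} \n answer: [MASK] \n context: {c1} {c2}"

q = "What does sunlight do for a plant?"
opts = ["during the day", "Kills it", "it can be seen", "Helps it survive",
        "Helps it drink water", "It gets heated up", "adding heat", "Makes the color darker"]
c1 = "A plant requires food for survival."
c2 = "All plants require sunlight to make their food."
print(options_in_context(q, opts, c1, c2))  # best-performing format in Table 9 (82.6 F1)
```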
| Format name | Format | Example | F1 | |---------------------------------|--------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------| | Options in question, | question: | q? | {aA, . . . , aH} \n | | generate letter | answer: [MASK] \n context: c1. c2. ⇒ question: q? \n answer: i | question: What does sunlight do for a plant? (A) during the day (B) Kills it (C) it can be seen (D) Helps it survive (E) Helps it drink water (F) It gets heated up (G) adding heat (H) Makes the color darker \n answer: [MASK] \n context: A plant requires food for survival. All plants require sunlight to make their food. ⇒ question: . . . \n answer: D | 15.6 | | Options in question, | question: | q? | {aA, . . . , aH} \n | | answer: [MASK] \n context: c1. | | | | | generate answer | c2. ⇒ question: q? \n answer: a | question: What does sunlight do for a plant? (A) during the day (B) Kills it (C) it can be seen (D) Helps it survive (E) Helps it drink water (F) It gets heated up (G) adding heat (H) Makes the color darker \n answer: [MASK] \n context: A plant requires food for survival. All plants require sunlight to make their food. ⇒ question: . . . \n answer: Helps it survive | 39.4 | | Options in context, | question: q? \n answer: [MASK] | | | | generate answer | \n context: | {aA, . . . , aH}. | c1. | | c2. ⇒ question: q? \n answer: a | question: What does sunlight do for a plant? \n answer: [MASK] \n context: (A) during the day (B) Kills it (C) it can be seen (D) Helps it survive (E) Helps it drink water (F) It gets heated up (G) adding heat (H) Makes the color darker. A plant requires food for survival. All plants require sunlight to make their food. ⇒ question: . . . \n answer: Helps it survive | 82.6 | | | No options, generate answer | question: q? \n answer: [MASK] \n context: c1. c2. ⇒ question: q? \n answer: a | question: What does sunlight do for a plant? \n answer: [MASK] \n context: A plant requires food for survival. All plants require sunlight to make their food. ⇒ question: . . . \n answer: Helps it survive | 49.8 | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? No section number. Final section after conclusions. ✓ A2. Did you discuss any potential risks of your work? Yes, in limitations. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Yes (Sections 3 and 4). UnifiedQA, CrossFit, MetaICL data collection scripts. ✓ B1. Did you cite the creators of artifacts you used? Yes (Sections 3 and 4). ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No. Will be releasing data and models upon approval on GitHub under a permissive license. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? 
For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No. We use existing QA and classification datasets for a similar research purposes as was originally intended. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No. Our data are pre-existing common public datasets and do not contain PII. ✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No. These datasets and their domains are diverse and better documented and described in their original papers (which we cite). ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Sections 3 and 4. ## C ✓ **Did You Run Computational Experiments?** Sections 3, 4, 5. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Sections 4 and 5. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Sections 3, 4, 5. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
wu-etal-2023-vcsum
VCSUM: A Versatile Chinese Meeting Summarization Dataset
https://aclanthology.org/2023.findings-acl.377
Compared to news and chat summarization, the development of meeting summarization is hugely decelerated by the limited data. To this end, we introduce a versatile Chinese meeting summarization dataset, dubbed VCSum, consisting of 239 real-life meetings, with a total duration of over 230 hours. We claim our dataset is versatile because we provide the annotations of topic segmentation, headlines, segmentation summaries, overall meeting summaries, and salient sentences for each meeting transcript. As such, the dataset can adapt to various summarization tasks or methods, including segmentation-based summarization, multi-granularity summarization and retrieval-then-generate summarization. Our analysis confirms the effectiveness and robustness of VCSum. We also provide a set of benchmark models regarding different downstream summarization tasks on VCSum to facilitate further research.
# Vcsum**: A Versatile Chinese Meeting Summarization Dataset** Han Wu1,2, Mingjie Zhan3,†, Haochen Tan1,2,†, Zhaohui Hou3,†, Ding Liang3**, Linqi Song**1,2 1 Department of Computer Science, City University of Hong Kong 2 City University of Hong Kong Shenzhen Research Institute 3 SenseTime Research {hanwu32-c,haochetan-2}@my.cityu.edu.hk {zhanmingjie,houzhaohui,liangding}@sensetime.com [email protected] ## Abstract Compared to news and chat summarization, the development of meeting summarization is hugely decelerated by the limited data. To this end, we introduce a versatile Chinese meeting summarization dataset, dubbed VCSUM, consisting of 239 real-life meetings, with a total duration of over 230 hours. We claim our dataset is *versatile* because we provide the annotations of topic segmentation, headlines, segmentation summaries, overall meeting summaries, and salient sentences for each meeting transcript. As such, the dataset can adapt to various summarization tasks or methods, including segmentation-based summarization, multi-granularity summarization and retrievalthen-generate summarization. Our analysis confirms the effectiveness and robustness of VCSUM. We also provide a set of benchmark models regarding different downstream summarization tasks on VCSUM to facilitate further research. The dataset and code will be released at https://github.com/ hahahawu/VCSum. ## 1 Introduction Meeting summarization (Janin et al., 2003; McCowan et al., 2005; Zhong et al., 2021) is the task of distilling the meeting transcript into a concise and readable summary that contains the most salient parts of a meeting. The summary can help the participants or absentees to quickly grape the highlight points. Therefore, a set of models have been proposed to comprehensively and succinctly summarize the content of a meeting (Zhu et al., 2020; Feng et al., 2021; Zhong et al., 2022b). Compared to standard text summarization (Nallapati et al., 2016; Narayan et al., 2018), meeting summarization is a much more challenging task because it has more informal and oral expressions, topic shifts, multiple participants and longer context. For this reason, existing datasets for meeting †Equal contribution. Meeting Transcript about Fundamental Education Speaker1: 我想请问一下,我们平时对于基础教育,包括现在因为就 是只有哈工大有本科,所以哈工大是直接有对接这个本科高中生的 这个需求的,在这方面有没有一些围绕着这个方面的分享。 Speaker2: 深圳这个教育和医疗是两个短板,尤其教育的话这个尤其 是基础教育,现在高端教育慢慢这个步伐已经开始在加快了。但是 在基础教育这块 ... Speaker3: 另一方面跟家长的这种这个观念也比较有关系。因为 ... [EOS] Headline: 基础教育的短板 Segmentation summary: 目前基础教育是短板,处于未解决温饱问 题阶段,学校重视不够。家长的观念也有影响,一直都在走应试教 育的老路,最终目标都是在高考中去考高分 ... Speaker1: 老师们讲得特别全面系统,而且把我们整个工作的衔接都 联系起来了。接下来请 ... Speaker4: 我说一下我对人才培养的这个看法 ... [EOS] Headline: 人才培养的方式 Segmentation summary: 要提供一个平台,为青少年的特长提供机 会,同时还能培养特长,要从小发掘小朋友的特长, ... ,同时新技 术新思想也要通过大城市蔓延到小城市。 ... Overall summary: 基础教育要多提供平台,重视基础教育的同时, 也要多关注孩子的兴趣,学校要提供层次的选人标准,不要太局限 在成绩上,也要有差异化 ... Table 1: An example from our dataset. The green texts are the highlighted sentences. The token [EOS] is used to distinguish different segmentations. We provide multi-granularity summaries to a meeting transcript, including headline, segmentation summary and overall summary. See the English example in Appendix A.1. 
summarization, i.e., AMI (McCowan et al., 2005) and ICSI (Janin et al., 2003), can hardly be used to train a robust summarization model owing to 1) their small size - AMI and ICSI only contain 137 and 59 pairs of meeting transcripts and summaries, respectively; 2) the specific domain - AMI just focuses on the product design and development while ICSI concentrates on the academic discussions; 3) coarse-grained summaries - the summaries in these two datasets are directly written for the whole meeting that might involve various topics. Although some larger dialogue summarization datasets have been created, e.g., SAMSum (Gliwa et al., 2019), DialogSUM (Chen et al., 2021) and MediaSum (Zhu et al., 2021), their context and summaries are still much shorter than the meeting summarization data. On the other hand, some variants of the summarization task are recently studied in news summarization, such as segmentationbased summarization (Liu et al., 2022) aiming at jointly segmenting and summarizing the lengthy context, multi-granularity summarization (Zhong et al., 2022a) aiming at generating coarse-, middleand fined-grained summaries at the same time, and retrieval-then-generate summarization (Mao et al., 2022) aiming at generating the summaries on the extracted salient sentences. While these variant tasks and methods have demonstrated their capacities on improving the news summarization performance, there is a huge demand for a large-scale versatile meeting summarization dataset to adapt these variants into the field. To this end, we collect a Versatile Chinese meeting Summarization dataset based on the real-life meeting recordings, called VCSUM. The dataset contains 239 meetings, with a total duration of over 230 hours, and each meeting transcript has over 14K tokens on average. To make the dataset versatile, we provide various annotations, including segmentation-based annotations, multi-granularity annotations and extractive annotations. Table 1 illustrates an example from our dataset. The meeting transcript is segmented into several sections according to the topics discussed. Then multigranularity summaries are provided for each section, i.e., a coarse-grained headline summary with 5-20 words and a find-grained segmentation summary with 100-150 words. An overall summary with 200-250 words for the whole meeting is also annotated to formulate the challenging summarization tasks, such as lengthy meeting transcript summarization. Furthermore, we instruct annotators to highlight salient sentences in the meeting whose content is further verified to be highly consistent with the summaries. As such, our dataset is also a good testbed for the *extract-then-generate* summarization. In the experiment part, we evaluate several benchmark models on segmentation-based summarization, multi-granularity summarization, extract-then-generate summarization and highlight sentence extraction. We also provide a conversation solution to the dataset. We summarize our contributions as follows: (1) we are the first one to collect a large and highquality meeting summarization dataset from reallife videos in the last around 20 years; (2) we propose the first *versatile* summarization dataset that contains the annotations of extractive highlight sentences, topic segmentation, and multi-granularity summaries; (3) we conduct extensive experiments on the proposed dataset, constructing the benchmark to facilitate further research. 
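The annotation layers described above can be pictured as one record per meeting. The field names in this sketch are illustrative (the released data may use different keys); only the granularities and length constraints follow the text.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Segment:
    utterances: List[str]       # contiguous utterances of one topic (ends with an implicit [EOS])
    headline: str               # coarse-grained, 5-20 words
    summary: str                # fine-grained, 100-150 words

@dataclass
class Meeting:
    segments: List[Segment]
    overall_summary: str        # 200-250 words for the whole meeting
    highlights: List[str] = field(default_factory=list)   # salient sentences, at most 10% of the transcript

meeting = Meeting(
    segments=[Segment(utterances=["Speaker1: ...", "Speaker2: ..."],
                      headline="Shortcomings of fundamental education",
                      summary="Fundamental education is currently a weak point ...")],
    overall_summary="Fundamental education needs more platforms ...",
    highlights=["Speaker2: Education and healthcare are two weak points ..."],
)
print(len(meeting.segments), len(meeting.highlights))
```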
## 2 Related Work 2.1 Dialogue Summarization Dialogue summarization aims to extract or summarize the most important information in the dialogue. Dialogue can take many forms, including chit-chat, meetings, and emails (Li et al., 2017; Zhang et al., 2021; Chen et al., 2021). A bunch of chat summarization datasets (Gliwa et al., 2019; Zhu et al., 2021; Chen et al., 2021) have been created yet. For example, the most widely used chat summarization dataset, i.e., SAMSum (Gliwa et al., 2019), is collected by linguists writing messenger-like conversations. This high-quality dataset greatly facilitates this research direction. A set of follow-up algorithms are proposed to solve the task by enhancing the dialogue context modeling(Chen and Yang, 2020; Zhong et al., 2022b) or dialogue participant modeling (Narayan et al., 2021). However, due to the much higher cost of collecting and annotating meetings, existing meeting summarization datasets are really limited, only AMI (McCowan et al., 2005) and ICSI (Janin et al., 2003), which were constructed around 20 years ago. The small size and inferior annotation quality of these two datasets are scarce to support the training of a robust meeting summarization model, while meeting summarization suffers more challenges than chat summarization, e.g., longer context and more topic shifts. Although some data augmentation techniques(Zhu et al., 2020; Zhong et al., 2022b) have been attempted, insufficient meeting summarization data remains a great barrier to this field moving forward. In this work, we create a highquality and larger meeting summarization dataset to fill the gap of no new meeting summarization datasets proposed in the last 20 years. ## 2.2 Advanced Summarization Tasks Thanks to the abundant data and simpler annotations, many variants of the standard summarization task have been attempted on news summarization, including segmentation-based summarization (Liu et al., 2022), multi-granularity summarization ![2_image_0.png](2_image_0.png) (Zhong et al., 2022a). The task of segmentationbased summarization is proposed to address the problem of multiple topics discussed in the lengthy document. The multi-granularity summarization aims to provide summaries with different degrees of semantic coverage. Although these tasks could improve the summarization performance, they cannot be tested on meeting summarization due to a lack of data. In this work, we provide the meeting summarization data with segmentation-based and multi-granularity annotations and build the benchmarks of these tasks on meeting summarization. Besides, superior to SEGNEWS dataset (Liu et al., 2022), VCSUM focuses on real-life meetings and provides human-annotated summaries. ## 3 The Vcsum **Corpus** 3.1 Data Selection We collect the roundtable meetings from some Chinese video-sharing websites. We first obtain 1,419 videos from the websites by searching the keyword "圆桌会议(roundtable)". To select high-quality videos and alleviate potential ethical issues, we crowdsource the basic meeting information by asking the following questions: 1) are the audio and video of the meeting clear? 2) how many *valid* participants in the meeting? Participants are valid only when they have lots of expressions and clearly articulate their opinions. 3) is there any reference in the meeting? The reference can be the supporting materials of the meeting, such as slides, technical documentation, requirement document, reports, etc. 
4) does the meeting involve any offensive or ethical content, including politics, religious issues, gender concerns, or violence? Each meeting will be marked by two annotators, and the disagreements will be tackled by the third annotator. Then the candidate videos are selected by the conditions of 1) having clear audio and video; 2) the number of valid participants ranging from 1 to 10; 3) no references; 4) not involving any offensive or ethical concerns; and 5) the meeting duration ranging from 10 minutes to 200 minutes. We finally obtained 541 valid videos. The creation year of these videos ranges from 2017 to 2022. Owing to the limited budget for the annotation, we further selected 239 meetings, covering as many different domains as possible, including technology, finance, daily life and so on. We provide the topic distribution in Figure 1. ## 3.2 Data Annotation The annotation is conducted on Feishu Minutes 1, an integrated platform for video parsing and automatic speech recognition (ASR). We upload candidate videos to the platform and parse them into meeting transcripts. Then annotators are asked to read the transcripts and provide five kinds of annotations. Highlight sentences. Annotators should highlight the salient and informative sentences in the original meeting transcripts. The marked sentences must be fluent and complete. To avoid excessive labeling, the highlighted sentences should not exceed 10% of the entire transcript. Topic segmentation. As a meeting generally involves multiple topics, we instruct annotators to identify different topic segments by inserting a special token [EOS] at the end of each segment. Note that the topic segmentation is annotated on the utterance level. Segmentation headline. After obtaining topic segments, annotators should provide a headline to identify the topic of each segmentation. The word count for each headline should fall within the range of 5-20 words. Segmentation summary. Annotators should write a summary for each segment. Different from headlines, segmentation summaries focus more on details of the content. Each segmentation summary should contain 100-150 words. Overall summary. In the end, annotators should provide an overall summary that covers the most salient and informative content of the meeting. The word count for each overall summary should range from 200 to 250 words. 1https://meetings.feishu.cn/minutes | Dataset | Lan. style | Scenario | Domain | #Transcripts | #Tokens/trans. | #Turns/trans. | #Speakers/trans. | #Tokens/sum. | |-----------|-------------------|------------|----------|----------------|------------------|-----------------|--------------------|----------------| | CNN | written | news | multiple | 92465 | 654.0 | - | - | 42.1 | | DailyMail | written | news | multiple | 219506 | 702.9 | - | - | 51.5 | | SAMSum | written | online | multiple | 16369 | 93.8 | 11.2 | 2.4 | 20.3 | | DialogSum | spoken | - | multiple | 13460 | 131.0 | - | - | 23.6 | | MediaSum | spoken | interview | multiple | 463596 | 1553.7 | 30.0 | 6.5 | 14.4 | | CSDS | written (Chinese) | online | customer | 10701 | 401.1 | 26.0 | 2.0 | 83.3 | | AMI | spoken | meeting | product | 137 | 6007.7 | 535.6 | 4.0 | 296.6 | | ICSI | spoken | meeting | academia | 59 | 13317.3 | 819.0 | 6.3 | 488.5 | | VCSUM | spoken (Chinese) | meeting | multiple | 239 | 14106.9 | 73.1 | 5.6 | 231.9 | | VCSUMseg* | 1359 | 2480.9 | 12.9 | 3.0 | 139.1 | | | | | VCSUM | #hl./trans. | #Token/hl. 
| R-1/2/L | |--------------|---------------|--------------|-------------------| | Whole | 71.7 | 32.5 | 88.65/50.46/62.56 | | Segmentation | 12.7 | 32.5 | 81.17/49.22/67.54 | ## 3.3 Quality Control To obtain high-quality data, we release a set of data samples to select the most suitable and professional annotators. Finally, we recruit eight annotators majoring in law, finance or Chinese culture from the top universities in China (4 females and 4 males). Before the formal annotation, all annotators were asked to study the annotation protocols and practice on the training samples for a period of time. The annotation process began after all annotators passed our examination. We also have two annotation inspectors from our research group to monitor the whole process. During the annotation process, each sample is annotated by an annotator and checked by another annotator and an inspector. The annotation would be accepted only if both two checkers approved it. After the annotation, all results are further validated by ourselves. If any errors are found in an annotation batch, the corresponding annotator and checkers would be instructed to self-check and re-annotate the batch until the result meets our requirements. Additionally, to alleviate the error propagation from ASR, we manually compared several ASR technologies and finally selected Feishu Minutes. Our analysis of 100 randomly sampled meeting segmentations reveals a word error rate of 8%, which is significantly lower than the error rate of 30% reported in the AMI dataset. To further ![3_image_0.png](3_image_0.png) ## 3.4 The Characteristics Of Vcsum Table 2 illustrates the comparison between VCSUM and other news, dialogue or meeting summarization datasets. We see that VCSUM has much longer context and summaries compared to existing news and dialogue summarization datasets. The transcripts in VCSUM contain 72.8 turns and 5.7 speakers on average, suggesting that the dataset also has the characteristics of multi-turn and multiparty dialogues. While the traditional meeting datasets, like AMI and ICSI, only focus on a single domain, VCSUM involves multiple domains, ranging from daily life to academic discussions. Moreover, we also provide the extractive highlight sentences in VCSUM. Each meeting transcript averagely has 71.7 highlighted sentences, with each sentence containing 32.5 tokens. We calculate the ROUGE scores (Lin, 2004) between highlight sentences and the corresponding summary. The larger value of ROUGE-1 indicates that the written summaries are semantically consistent with the highlight annotations, while the smaller ROUGE-2 scores suggest the abstractiveness of the summary. Previous study (Kedzie et al., 2018) reveals a critical problem of summarization task, called *positional bias*, which is that in existing datasets (McCowan et al., 2005; Chen et al., 2021; Zhu et al., 2021), most important information is often shown at the beginning of the context. This problem might bias the model to focus on the early words rather than the entire context. To this end, we also study the positional bias on our dataset. We evenly partition the transcript into 100 bins and count the frequency of the non-stop summary words appearing in each bin. As shown in Figure 2, the meeting transcripts contain more summary words near the beginning and end while segmentation transcripts hold more summary words in the middle part. However, the summary words of VCSUM are smoothly distributed in the transcripts. 
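A minimal sketch of this binning analysis (assuming pre-tokenized transcripts and summaries and a user-supplied stop-word list; the exact Chinese tokenizer and stop-word list behind Figure 2 are not specified here):

```python
from collections import Counter

def positional_bias(transcript_tokens, summary_tokens, stopwords, n_bins=100):
    """Frequency of non-stop summary words in each of n_bins equal slices
    of the transcript (a simplified reconstruction of the Figure 2 analysis)."""
    summary_vocab = {t for t in summary_tokens if t not in stopwords}
    bin_size = max(1, len(transcript_tokens) // n_bins)
    counts = Counter()
    for i, tok in enumerate(transcript_tokens):
        if tok in summary_vocab:
            counts[min(i // bin_size, n_bins - 1)] += 1
    total = sum(counts.values()) or 1
    return [counts[b] / total for b in range(n_bins)]
```

A flat output curve indicates that summary content is spread across the whole transcript rather than concentrated near its beginning.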
This observation indicates that our dataset does not suffer the positional bias, thus being a more challenging dataset. ## 4 Task Overview Based on the annotation of our dataset, we propose three main challenging tasks. Given a meeting transcript C = {U1, U2*, ..., U*N } consisting of N utterances, wherein each utterance Ui = {w i1 , wi2 , ..., wi |Ui|} is a sequence of words, we formulate the tasks as follows. Highlight Sentence Extraction The task of highlight sentence extraction (HSE) aims to find a set of spans H = {(w i j , wik )|0 < i ≤ N, 0 < j < k ≤ |Ui|} that contain the most important information of the meeting. All highlight sentences are not overlapped and not across different utterances. Segmentation-based Multi-granularity Summarization This task is essentially comprised of two sub-tasks, i.e., topic segmentation and multigranularity summarization. Formally, we aim to generate three summaries at different granularities, i.e., a headline Yh, a segmentation summary Ys and a joint summary Yh:Ys, based on the transcript segments S, where S ∈ {(Uj , ..., Uk)|0 < j ≤ k ≤ N}. The segments are partitioned in the level of utterance, and there are no overlapping among different segments. Abstractive Meeting Summarization The goal of this task is to generate the overall summary Y*gold* | Model | Chunk | F1 | Gold R1 | |------------|---------|-------|-----------| | 512 | 39.47 | 84.72 | | | BERT | 1024 | 35.41 | 82.28 | | 512 | 39.24 | 85.17 | | | Longformer | 1024 | 38.31 | 84.35 | | 2048 | 37.31 | 84.16 | | ## 5 Experiments To jointly accommodate the segmentation-based summarization and overall meeting summarization, we divide the train/dev/test datasets that contain around 80%/10%/10% segmentation summaries as well as overall summaries. There are totally 193/25/21 overall summaries and 1076/135/136 segmentation summaries in train/dev/test sets. ## 5.1 Highlight Sentence Extraction Experiment Setup We solve the task as a sequence labeling problem. Due to the lengthy context of meeting transcripts, we try to split the transcript into small chunks. Here we evaluate the BERT (Devlin et al., 2019) and Longformer (Beltagy et al., 2020) models with chunk sizes of 512, 1024 and 2048. We report the F1 score and gold ROUGE-1 recall score as our evaluation metrics, wherein the gold ROUGE recall score is calculated between the generated highlight sentences and the gold overall meeting summary. Find the implementation details in Appendix B. Results Table 4 illustrates the results of highlight sentence extraction on the test set. We can see that both BERT and Longformer perform better with the shorter input length. However, a huge performance drop is observed when BERT works on the longer inputs while Longformer could perform stably. This finding is consistent with the characteristics of these two models. ## 5.2 Segmentation-Based Multi-Granularity Summarization Experiment Setup This task is comprised of two sub-tasks, i.e., topic segmentation and multi- | Model | Chunk Turns | Pk↓ | WinDiff ↓ | |------------|---------------|-------|-------------| | RANDOM | - | 0.421 | 0.555 | | EVEN | - | 0.441 | 0.505 | | 5 | 0.293 | 0.310 | | | 10 | 0.216 | 0.214 | | | 15 | 0.230 | 0.234 | | | BERTSUMEXT | 5 | 0.334 | 0.380 | | BARTenc | 10 | 0.222 | 0.216 | | 15 | 0.244 | 0.242 | | granularity summarization, wherein the summarization method is applied on the segmented sections. For the topic segmentation, we formulate the task as a sequence labeling problem. 
We solve it by BERTSUMEXT model Liu and Lapata (2019) and BART encoder model (Lewis et al., 2020). Specifically, we insert a special token [CLS] at the beginning of each utterance, which would be used to classify whether the utterance is the end of a segment. Then we use an interval segmentation indicator to distinguish different utterances. However, due to the context length exceeding 10K in most samples, we split the meeting transcript into chunks where each chunk contains 5/10/15 turns of utterance. In the stage of inference, we make the predictions on segmented chunks but calculate the scores on the entire meeting transcript. Following previous work (Zhong et al., 2022b), we use the standard metrics Pk (Beeferman et al., 1999) and WinDiff (Pevzner and Hearst, 2002) to evaluate the segmentation models. For multi-granularity summarization, we evaluate two settings, i.e., *with predicted segments* and *with gold segments*. For the former setting, the model is trained to first segment the transcript and then generate summaries for the predicted segments. For the latter, we provide the gold segments and evaluate the capacities of generating multi-granularity summaries with correct segments. As the number of predicted segments is uncertain, we calculate the ROUGE-1/2/L scores for each meeting by joining all predicted segmentation summaries together with a special token [Y_SEP]. We employ two widely-used summarization models, i.e., BART and Pegasus (Zhang et al., 2020), as our backbone. We also report the results of RANDOM and ORACLE baselines. RANDOM is to randomly select sentences from the context as the summary, while ORACLE is to select the sentences with the highest ROUGE-1 scores against the ground truth. In the setting of *with predicted segments*, we adopt the best-performing topic segmentation model, i.e., BERTSUMEXT trained with 10-turn chunks, to segment the transcripts. Then the summaries are generated by BART and Pegasus models on the segmented sections. Furthermore, we also try to initialize the encoder part of the generative BART with the weights from BARTenc. In this way, we expect the generative model can be aware of the segmentation features. Note that we do not follow the previous work (Liu et al., 2022) which jointly optimizes the segmentation and generation tasks with the pre-trained language models because our transcripts and summaries are much longer than theirs and the model cannot work well with the such lengthy context. Results Table 5 illustrates the evaluation results of topic segmentation. As we can see, both BERTSUMEXT and BART models are much better than the baselines, i.e., RANDOM and EVEN. RANDOM is the baseline that randomly selects the utterances as the boundary of a segment, while EVEN is to partition the whole transcript evenly. BERTSUMEXT outperforms the BARTenc across the board. This is reasonable since BERTSUMEXT captures more inter-segment features by the interval indicator and high-level interact layers. The results of these two models also show the same trend that segmenting the transcript into 10 turns is the best choice. This is consistent with the findings in Sankar et al. (2019) that around 8 turns of dialogue context are enough to capture the contextual features. Table 6 shows the evaluation results on segmentation-based multi-granularity summarization. When given gold segments, Pegasus (l = 2048) achieves the best performance on most metrics. 
Both BART and Pegasus models outperform the extractive oracle methods across the board, suggesting that the written summaries are abstractive enough. For the summary generation at different granularities, we find that the headline generation is harder than the summary generation owing to its highly condensed information. Comparing the generation of segmentation summary and joint summary, slight improvements are spotted when prompting the segmentation summary generation with the headline. | Model | Headline | Segmentation Summary | Joint Summary | | | | | | | | |-------------------------|-------------------|------------------------|-----------------|-------|-------|-------|-------|-------|-------|-------| | Segmentor | Generator | R1 | R2 | RL | R1 | R2 | RL | R1 | R2 | RL | | With gold segments | | | | | | | | | | | | - | RANDOM | 21.21 | 2.53 | 1.03 | 43.22 | 7.63 | 13.41 | 43.18 | 7.66 | 13.49 | | - | ORACLE | 39.90 | 20.01 | 34.72 | 55.10 | 25.19 | 32.66 | 55.21 | 24.82 | 31.77 | | - | BART(l = 1024) | 42.85 | 25.53 | 35.96 | 58.18 | 23.56 | 29.65 | 59.49 | 24.38 | 29.80 | | - | BART(l = 2048) | 41.92 | 23.73 | 34.53 | 59.06 | 24.35 | 29.25 | 59.14 | 24.35 | 29.25 | | - | Pegasus(l = 1024) | 46.49 | 27.69 | 38.92 | 59.31 | 25.10 | 33.19 | 59.78 | 25.66 | 34.11 | | - | Pegasus(l = 2048) | 45.68 | 27.22 | 39.04 | 59.59 | 25.31 | 33.57 | 59.80 | 26.03 | 34.55 | | With predicted segments | | | | | | | | | | | | BERTSUMEXT | BARTed(l = 1024) | 40.10 | 21.92 | 32.43 | 56.63 | 22.17 | 25.69 | 57.60 | 22.73 | 27.16 | | BERTSUMEXT | Pegasus(l = 1024) | 41.41 | 22.43 | 34.93 | 57.90 | 23.45 | 30.98 | 58.14 | 23.39 | 30.80 | | BARTenc | Pegasus(l = 1024) | 40.14 | 20.13 | 32.76 | 54.16 | 21.15 | 26.89 | 54.01 | 20.97 | 27.51 | | BARTenc | BARTed | 37.44 | 19.74 | 30.40 | 53.30 | 20.12 | 23.46 | 54.20 | 20.33 | 23.90 | Table 6: Evaluation results of ROUGE F1 on segmentation-based multi-granularity summarization. The scores here is calculated without sentence splitting. l stands for the truncation length. BARTed means the standard encoder-decoder-based BART model. | Method | R1 | R2 | RL | |----------------------------|-------|-------|-------| | Vanilla BART(l = 1024) | 37.61 | 11.38 | 18.16 | | Vanilla BART(l = 2048) | 38.83 | 12.05 | 18.02 | | Vanilla Pegasus(l = 1024) | 29.43 | 17.19 | 19.42 | | Vanilla Pegasus(l = 2048) | 27.62 | 8.56 | 18.49 | | Pred. Joint Summary + BART | 42.29 | 13.77 | 19.17 | | Pred. Highlights + BART | 45.76 | 16.33 | 22.26 | | Gold Joint Summary + BART | 55.85 | 31.33 | 34.19 | | Gold Highlights + BART | 47.75 | 18.47 | 24.14 | When evaluating with the predicted segments, we find that comparable performance could be reached with a strong topic segmentation model. After conducting more detailed analyses regarding the results of segmentation and generation, we surprisingly find that the most errors of our segmentation model are within three utterances, e.g., the label is the 10th utterance but the prediction is 8th, while the generation model can easily tolerate such deviations. An exception is tying the weights of the segmentation model with the generation model, i.e., BARTenc + BARTed, which performs much worse than others. We think this is because the generation process is not largely dependent on the segmentation features. 
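For reference, the Pk and WinDiff scores reported in Table 5 can be computed with the off-the-shelf NLTK implementations; a minimal sketch, assuming segment boundaries are encoded as utterance-level '0'/'1' strings with '1' marking the end of a topic segment:

```python
from nltk.metrics.segmentation import pk, windowdiff

def evaluate_segmentation(gold, pred):
    """gold/pred: boundary strings such as '000100010', one character per
    utterance, '1' marking the last utterance of a topic segment."""
    # window size: half of the average gold segment length (usual convention)
    n_segments = gold.count("1") + 1
    k = max(2, round(len(gold) / n_segments / 2))
    return {"Pk": pk(gold, pred, k=k), "WinDiff": windowdiff(gold, pred, k)}

print(evaluate_segmentation("000100010", "001000010"))
```

Lower values are better for both metrics, consistent with the downward arrows in Table 5.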
## 5.3 Abstractive Meeting Summarization Experiment Setup It is a kind of *long text summarization* task (Liu et al., 2020; Gidiotis and Tsoumakas, 2020), which encourages the model to generate a concise summary (< 512 words) containing the main content of a long text (> 5,000 words). We evaluate the vanilla sequence-to-sequence models and *retrieval-then-generate* methods. Specifically, we fine-tune the BART and Pegasus models with the truncation length of 1024 and 2048 to generate the overall summary. For *retrievalthen-generate* methods, we first *retrieve* the joint summaries or highlight sentences of a meeting transcript, and then produce the overall summaries on the retrieval results. This method hugely decreases the input length to the summarization model. We use the best-performing highlight model and joint summary generation model to finish the retrieval process. We report the ROUGE-1/2/L F1 scores for performance comparison. Results As shown in Table 7, the vanilla models perform weakly on meeting summarization owing to the lengthy input. As said in Section 3.4, the summary words are evenly distributed in the entire transcript. Therefore, directly truncating the input would cause information loss. The paradigm of retrieval-then-generate is a good solution to this problem since it retrieves the key information first and then generates the summary on the retrieval results. When the summarization is based on gold content, the joint summary is a better ground than the highlight information. This is reasonable since the joint summary contains more concentrated information with fewer words. However, when the summarization is based on the retrieval results, predicted highlights even achieve comparable summarization performance to the gold highlights. This is attributed to the effectiveness of our highlight ex- | Meeting ID: 71962448; Segment ID: 2 | | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 中国当代艺术蓬勃发展的原因:中国有庞大的艺术教育的底盘,学校的空间很大,可以让新的年轻艺术家 有很好的活动的条件,中国的材料很丰富,加工便宜,人力资源很便宜,在市场上还有快速发展的过程, | | | Pegasus | 至少有潜在的购买力吸引这个艺术家,在世界议坛上很流行的中国政治口号,在文化信息的交往中,中 国艺术家比西方艺术家有更宽拓的空间。 艺术教育问题的分析:从艺术教育的角度来看,中国当代艺术蓬勃发展有六大原因,一是中国有庞大的艺 术教育地盘,二是学校空间大,可以让新的年轻艺术家有很好的活动条件,三是中国的材料丰富,并且 | | Gold | 人力资源便宜,四是市场上有快速发展的过程,存在潜在的购买力,五是中国的元素,文化信息交流中, 中国艺术家更具宽拓空间,六是中国正处于巨大变革中,批判性的建构是当代艺术的重要动力。 Meeting ID: 634267454 区块链是一个学科化的研究而不仅仅是热点,它是一种从特殊走向另一种特殊的时代,它一直尝试对社 会进行范式的迁移,从历史交汇到文化主导,在未来艺术形式的一个很关键的线索,可以发现不断不断 地将权力分散到更多人的手中,能够参与到社会的运动中,这也是未来的艺术选择艺术家的标准。艺术 作品的表达方式可以用画面表达和用音像的方式表达,或者是用任何一种行为方式,关键是能否用擅长 的工具、或者愿意拓展你的想象力去表达你对加密世界的思考。 | | Pred. Sum. + BART | 加密的爆发搅活了整个艺术品市场,带动了艺术领域改革开放,打破了原有的概念,带来了新的创作媒 介,是对艺术行业的一个挑战,是一个转向,让创作者与世界更好的融合,但是艺术家还是要有标准的 | | Gold | ,要有人品、有创新的能力、要让作品更够更清晰的表达,加密是从底层起来的,有着蓬勃的生命力, 每一个参与社会推进的人就已经是在参与艺术的创作了。 | | Method | Headline | Seg. 
Sum | Joint Sum | |-------------------|------------|------------|-------------| | BART(l = 2048) | 41.92 | 59.06 | 59.14 | | Pegasus(l = 2048) | 45.68 | 59.59 | 59.80 | | CONVLM(l = 2048) | 45.25 | 59.98 | 60.70 | Table 9: ROUGE-1 F1 scores of different models on multi-granularity summarization with gold segments. traction model and the redundancy information of highlight sentences. The worse performance from the predicted joint summary is because the generation of joint summaries is essentially a challenging task, which might cause more error propagation. ## 6 Further Discussions 6.1 Conversational Solutions DIALOGUELM (Zhong et al., 2022b) is a strong baseline for meeting summarization in English. Based on the fact that meeting is a kind of long conversation, DIALOGUELM adopts sparse architecture and dialogue-specific pre-training objectives, such as speaker masking, turn splitting, turn merging and turn permutation, to capture the conversational features. It finally demonstrates that the pre-trained dialogue model is also a good solution to the meeting summarization task. Inspired by this observation, we make a preliminary attempt at the conversational solution to VCSUM. Specifically, we pre-train an encoder-decoder-based dialogue model, dubbed CONVLM, using our in-house Chinese dialogue data. We provide details of CON-VLM in Appendix C. We train the model with the objectives of speaker identification and response generation. After the pre-training, we fine-tune the model on VCSUM. To simplify the comparison, we evaluate on the task of multi-granularity summarization with gold segments. As shown in Table 9, CONVLM achieves comparable or better performance against the summarization models. Especially for the generation of segmentation and joint summary, CONVLM consistently outperforms the baseline models, suggesting that modeling conversational features could benefit the summarization task on VCSUM. This finding demonstrates the conversational characteristics of our dataset and sheds light on using dialoguespecific pre-trained language models to solve the tasks of VCSUM. ## 6.2 Error Analysis To study the difficulties of VCSUM, we take a detailed analysis of the error cases. We find that around 50% of errors is information missing which is essentially caused by lengthy input truncation. For example, in the first row of Table 8, the segmentation summary is much semantically close to the ground truth with only one key point (the red words) missing. This phenomenon is more severe when generating the overall summaries owing to the long and informative context. There are also some errors from irrelevant information or redundant information, accounting for around 20%. These kinds of errors are mostly found in the retrieval-then-generate methods that retrieve some insignificant content in the first stage, thus finally misleading the generation process, like the blue words in Table 8. The remaining 30% errors include factual errors and syntactic errors. ## 7 Conclusion And Future Work In this work, we collect a large and high-quality Chinese meeting summarization dataset from reallife videos, namely VCSUM. The dataset is versatile to support the tasks of highlight sentence extraction, segmentation-based summarization, multigranularity summarization and meeting summarization. Depth analyses demonstrate the superiority of our dataset. We then provide a strong benchmark for different downstream tasks on VCSUM. 
For future work, we believe the development of an endto-end framework that could jointly solve all tasks of VCSUM is a promising direction. Furthermore, we will also consider to release the video and audio files of the annotated meetings to facilitate the research of multi-modality meeting summarization. ## Acknowledgement We sincerely appreciate the valuable and constructive comments from the reviewers. This work was supported in part by the Hong Kong ITF grant PRP/079/22FX, the Technological Breakthrough Project of Science, Technology and Innovation Commission of Shenzhen Municipality under Grants JSGG20201102162000001, InnoHK initiative, the Government of the HKSAR, Laboratory for AI-Powered Financial Technologies. ## Limitations A potential limitation of this work is that we just try some straightforward methods on the summarization tasks, such as vanilla generative pre-trained language models or pipeline retrieval-then-generate methods. We do not try the end-to-end two-stage approaches in this work since we are more focused on the contributions of the dataset construction and building the benchmark. We leave the development of advanced models as future work. ## Ethics Statement The construction of dataset. All videos in our newly-introduced dataset are available on the Chinese video sharing websites and are public to the download. To avoid the potential ethical issues, we carefully checked all videos in multiple aspects, as said in Section 3.1. We try to guarantee that all videos do not involve any offensive, genderbiased, political content and any other ethical issues. During the annotation, we instruct annotators to anonymize or remove the sensitive or private information. We recruit eight annotators that passed our examinations from the crowdsourcing platform, and two quality inspectors from our research team. To fairly paid the annotations, we first take an in-house annotation to evaluate the speed and difficulty of the large-scale annotations. Finally, we pay each annotator $25-$30 per hour. Typically, it would take around 2 hours to annotate a one-hour meeting. So, the workers are compensated $50-$60 per sample. Applications. We will release the code and dataset along with friendly instructions to support its correct use. However, we still need to emphasize that abstractive summarization is a kind of generation task, which is not as controllable as we think. It still would generate some novel or unexpected words occasionally. Therefore, further research on the summarization faithfulness is warmly needed. ## References Doug Beeferman, Adam Berger, and John Lafferty. 1999. Statistical models for text segmentation. *Machine learning*, 34(1):177–210. Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150. Jiaao Chen and Diyi Yang. 2020. Multi-view sequenceto-sequence models with conversational structure for abstractive dialogue summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4106– 4118. Yulong Chen, Yang Liu, Liang Chen, and Yue Zhang. 2021. Dialogsum: A real-life scenario dialogue summarization dataset. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 5062–5074. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171– 4186. Xiachong Feng, Xiaocheng Feng, Libo Qin, Bing Qin, and Ting Liu. 2021. Language model as an annotator: Exploring dialogpt for dialogue summarization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1479–1491. Alexios Gidiotis and Grigorios Tsoumakas. 2020. A divide-and-conquer approach to the summarization of long documents. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28:3029–3040. Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. Samsum corpus: A humanannotated dialogue dataset for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 70–79. Adam Janin, Don Baron, Jane Edwards, Dan Ellis, David Gelbart, Nelson Morgan, Barbara Peskin, Thilo Pfau, Elizabeth Shriberg, Andreas Stolcke, et al. 2003. The icsi meeting corpus. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings.(ICASSP'03), volume 1, pages I–I. IEEE. Chris Kedzie, Kathleen Mckeown, and Hal Daumé III. 2018. Content selection in deep learning models of summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1818–1828. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *ICLR (Poster)*. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 7871–7880. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. Dailydialog: A manually labelled multi-turn dialogue dataset. In *Proceedings* of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 986–995. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Xiaojun Liu, Chuang Zhang, Xiaojun Chen, Yanan Cao, and Jinpeng Li. 2020. Clts: A new chinese long text summarization dataset. In *Natural Language* Processing and Chinese Computing: 9th CCF International Conference, NLPCC 2020, Zhengzhou, China, October 14–18, 2020, Proceedings, Part I, pages 531–542. Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3730–3740. Yang Liu, Chenguang Zhu, and Michael Zeng. 2022. End-to-end segmentation-based news summarization. In Findings of the Association for Computational Linguistics: ACL 2022, pages 544–554. Ziming Mao, Chen Henry Wu, Ansong Ni, Yusen Zhang, Rui Zhang, Tao Yu, Budhaditya Deb, Chenguang Zhu, Ahmed Awadallah, and Dragomir Radev. 2022. Dyle: Dynamic latent extraction for abstractive longinput summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1687– 1698. 
I McCowan, J Carletta, W Kraaij, S Ashby, S Bourban, M Flynn, M Guillemot, T Hain, J Kadlec, V Karaiskos, et al. 2005. The ami meeting corpus. In Proceedings of the 5th International Conference on Methods and Techniques in Behavioral Research., pages 88–100. Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. In *Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning*, pages 280–290. Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807. Shashi Narayan, Yao Zhao, Joshua Maynez, Gonçalo Simões, Vitaly Nikolaev, and Ryan McDonald. 2021. Planning with learned entity prompts for abstractive summarization. *Transactions of the Association for* Computational Linguistics, 9:1475–1492. Lev Pevzner and Marti A Hearst. 2002. A critique and improvement of an evaluation metric for text segmentation. *Computational Linguistics*, 28(1):19– 36. Chinnadhurai Sankar, Sandeep Subramanian, Christopher Pal, Sarath Chandar, and Yoshua Bengio. 2019. Do neural dialog systems use the conversation history effectively? an empirical study. In *Proceedings* of the 57th Annual Meeting of the Association for Computational Linguistics, pages 32–37. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In *International Conference on Machine Learning*, pages 11328–11339. PMLR. Shiyue Zhang, Asli Celikyilmaz, Jianfeng Gao, and Mohit Bansal. 2021. Emailsum: Abstractive email thread summarization. In *Proceedings of the 59th* Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6895–6909. Ming Zhong, Yang Liu, Suyu Ge, Yuning Mao, Yizhu Jiao, Xingxing Zhang, Yichong Xu, Chenguang Zhu, Michael Zeng, and Jiawei Han. 2022a. Unsupervised summarization with customized granularities. *arXiv* preprint arXiv:2201.12502. Ming Zhong, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2022b. Dialoglm: Pre-trained model for long dialogue understanding and summarization. In *Proceedings of the AAAI Conference* on Artificial Intelligence, volume 36, pages 11765– 11773. Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, et al. 2021. Qmsum: A new benchmark for query-based multi-domain meeting summarization. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 5905–5921. Chenguang Zhu, Yang Liu, Jie Mei, and Michael Zeng. 2021. Mediasum: A large-scale media interview dataset for dialogue summarization. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5927–5934. Chenguang Zhu, Ruochen Xu, Michael Zeng, and Xuedong Huang. 2020. A hierarchical network for abstractive meeting summarization with cross-domain pretraining. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 194– 203. 2https://huggingface.co/ ## A Translations A.1 Translated Examples. 
B Implementation Meeting Transcript about Fundamental Education Speaker1: I would like to ask about the basic education. Now only Harbin Institute of Technology has undergraduates in Shenzhen. So Harbin Institute of Technology needs to educate the fresh undergraduates who just finish the high school courses. Do you have any sharing regarding this aspect. Speaker2: Education and medical care are two shortcomings of Shenzen. Especially in terms of basic education, although the development of high-end education has been greatly improved, the basic education ... Speaker3: On the other hand, it is also related to the thoughts of parents. Because ... [EOS] Headline: Shortcomings of Fundamental Education Segmentation summary: At present, basic education is weak, and it is still in the stage of unresolved food and clothing problems, and schools do not pay enough attention to it. The thought of parents also has an influence. They have been following the old path of exam-oriented education, and the ultimate goal is to get high scores in the college entrance examination..... Speaker1: It is a very comprehensive sharing, covering the all aspects of our work. Next, we invite ... Speaker4: OK, I would like to share my view of talent training ... [EOS] Headline: The methods of talent training Segmentation summary: It is necessary to provide a platform to provide opportunities for the young people, and at the same time to cultivate their strengths. It is necessary to discover the strengths of children from an early age. ... At the same time, new technologies and new ideas must be spread to small cities through big cities. ... Overall summary: Basic education should provide more platforms. While paying attention to basic education, we should also pay more attention to children's interests. Schools should provide a level of selection criteria, not too limited in grades, but also differentiated ... Table 10: An example from our dataset in English. The green texts are the highlighted sentences. The token [EOS] is used to distinguish different segmentations. We provide multi-granularity summaries to a meeting transcript, including headline, segmentation summary and overall summary. of each sentence to represent the sentence. The binary classification is conducted on the special tokens to decide whether a sentence should be highlighted or not. Owing to the unbalanced ratio between positive and negative samples, we only active 40% negative samples in the training stage. The batch size is set to 16 or 32, depending on the chunk size. The models are optimized by Adam (Kingma and Ba, 2015) optimizer with an initial learning rate of 5e-5. The pre-trained BERT (hfl/chinese-roberta-wwm-ext-large) and Longformer (IDEA-CCNL/ErlangshenLongformer-330M) models are loaded from Huggingface2. We train the models for 10 epochs and select the best model on the validation set to evaluate on the test set. All experiments are conducted on 8 NVIDIA A100 GPUs. See Table 10. We provide the details of implementations here. For all experiments, we take five runs with different random seeds and report the average score. Segmentation-based Multi-granularity Summarization For topic segmentation, the BERTSUMEXT is based on the large BERT model while Highlight Sentence Extraction We solve the task at sentence-level. Specifically, we split the utterances into small sentences by *comma*. Then, we insert a special token [CLS] at the beginning A.2 Translated Case Study See Table 11. BARTenc is based on large BART. 
We truncate each utterance with maximum utterance length of 128 words. The batch size is set to 16 or 32, depending on the chunk turn size. The other settings are mostly same to highlight sentence extraction. For multi-granularity summarization, we employ the BART (fnlp/bart-large-chinese) and Pegasus (IDEA-CCNL/Randeng-Pegasus-523MSummary-Chinese) models from Huggingface as our backbones. The input sequence is truncated to the maximum length of 1024 or 2048. To incorporate speaker information, we add a speaker indicator at the beginning of each utterance, e.g., [Speaker_1: utterance_1; ...;Speaker_n: utterance_n;]. The batch size is set to 64. All models are optimized by Adam, and the learning rate is initialized to 5e-5 and linearly updated. During the training process, the best model is selected on the validation loss. In the stage of inference, we generate the summary using beam search with beam size of 5 and length penalty of 1.0. Abstractive Meeting Summarization For the vanilla generative models, we truncate the input sequence to the maximum length of 1024 or 2048. The batch size is set to 64 or 32, depending on the truncation length. For the retrieval-then-generate methods, all predictions are obtained from the corresponding best-performing models, i.e., BERTSUMEXT+Pegasus(l = 1024) and BERT trained with chunk size of 512. During the decoding, we generate the summary using beam search with beam size of 5 and length penalty of 1.2. ## C Details Of Convlm CONVLM employs the architecture of large BART model. The training data is collected from Chinese social media, including Zhihu3, Weibo4and Douban5. We totally crawl 10 billion pieces of dialogue data, and select 10 million pieces of highquality data to train CONVLM. The pre-training objectives are speaker identification and response generation, wherein speaker identification is to predict the masked speaker indicators and response generation is to generate the response based on the context. We truncate the input sequence into 512. The batch size is set to 128. The learning rate is set to 5e-5 and linear updated along with the training process. We train the CONVLM for 300K steps, 3https://www.zhihu.com/ 4https://www.weibo.com/ 5https://www.douban.com/ which takes around 50 hours on 16 NVIDIA A100 GPUs. We validate the model each 1K steps. We save the model with the best validation loss and the last 10 checkpoints. ## D Screenshot Of Annotation Platform See Figure 3. | Meeting ID: 71962448; Segment ID: 2 | | |---------------------------------------|----| | Reasons for the vigorous development of Chinese contemporary art: China has a huge infrastructure for art education, and the school has a lot of space, which allows new young artists to have good conditions for activities. China's materials are abundant, processing is cheap, human resources are cheap, and there is still rapid development in the market. At least there is the potential purchasing power to attract the artist. Chinese political slogans are very popular in the world forum. In the exchange of cultural information, Chinese artists have a wider space than Western artists. | | | Pegasus | Analysis of Art Education Problems:From the perspective of art education, there are six reasons for the vigorous development of Chinese contemporary art. One is that China has a huge territory for art education. Second, the school has a large space, which allows new young artists to have good conditions for activities. Third, China is rich in materials and cheap in human resources. 
Fourth, there is a process of rapid development in the market and potential purchasing power. The fifth is Chinese elements. In the exchange of cultural information, Chinese artists have more room to expand. Sixth, China is undergoing tremendous changes, and critical construction is an important driving force for contemporary art. Meeting ID: 634267454 | | Gold | Blockchain is a disciplined research rather than just a hotspot. It is an era from a special to another special, and it has been trying to transfer the paradigm of society, from the intersection of history to the dominance of culture. A key clue in the future of art forms can be found in the continued decentralization of power into more hands. Being able to participate in social movements is also the standard for future artists to choose artists.The expression of works of art can be expressed in pictures, audio and video, or in any kind of behavior. The key is whether you can use the tools you are good at, or be willing to expand your imagination to express your thinking about the encrypted world. | | Pred. Sum. + BART | The outbreak of encryption has stirred up the entire art market, driven the reform and opening up of the art field, broken the original concept, and brought new creative media. This is a challenge to the art industry and a turning point, allowing creators to better integrate with the world. But artists still need standards, they must have character, they must have the ability to innovate, and they must make their works clearer and more expressive. Encryption starts from the bottom and has vigorous vitality. Everyone who participates in social advancement is already participating in the creation of art. | | Gold | | Table 11: Case studies on the joint summary (the first part) of the segment and the overall summary (the second part). | Figure 3: The screenshot of the annotation platform. | |--------------------------------------------------------| ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? See the Limitation part. ✓ A2. Did you discuss any potential risks of your work? See the Ethics Statement part. ✓ A3. Do the abstract and introduction summarize the paper's main claims? See the abstract and Section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** See Section 3. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Upon acceptance, we will release our dataset and provide the guidelines for its use and distribution. ✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Upon acceptance, we will release our dataset and provide the guidelines for its use and distribution. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? See Section 3. ✓ B5. 
Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? See Section 3. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. See Section 3. ## C ✓ **Did You Run Computational Experiments?** See Section 5 And Appendix B. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? See Appendix B. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? See Appendix B. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? See Appendix B. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? See Section 5 and Appendix B. D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** See Section 3 and Ethics Statement. ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? See Section 3 and Appendix D. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? See Section 3 and Ethics Statement. ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? See Section 3 and Ethics Statement. ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? See Section 3. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? See Section 3 and Ethics Statement.
karan-etal-2023-leda
LEDA: a Large-Organization Email-Based Decision-Dialogue-Act Analysis Dataset
https://aclanthology.org/2023.findings-acl.378
Collaboration increasingly happens online. This is especially true for large groups working on global tasks, with collaborators all around the globe. The size and distributed nature of such groups makes decision-making challenging. This paper proposes a set of dialog acts for the study of decision-making mechanisms in such groups, and provides a new annotated dataset based on real-world data from the public mail-archives of one such organisation {--} the Internet Engineering Task Force (IETF). We provide an initial data analysis showing that this dataset can be used to better understand decision-making in such organisations. Finally, we experiment with a preliminary transformer-based dialog act tagging model.
## Leda: A Large-Organization Email-Based Decision-Dialogue-Act Analysis Dataset Mladen Karan∗, Prashant Khare∗, Ravi Shekhar†**, Stephen McQuistin**‡, Colin Perkins‡, Ignacio Castro∗, Gareth Tyson∗§, Patrick G.T. Healey∗**, Matthew Purver**∗¶ ∗Queen Mary University of London, †University of Essex, ‡University of Glasgow §Hong Kong University of Science & Technology, ¶Jožef Stefan Institute {p.khare, m.karan, i.castro, g.tyson, p.healey, m.purver}@qmul.ac.uk, [email protected], [email protected], [email protected] ## Abstract Collaboration increasingly happens online. This is especially true for large groups working on global tasks, with collaborators all around the world. The size and distributed nature of such groups make decision-making challenging. This paper proposes a set of dialog acts for the study of decision-making mechanisms in such groups, and provides a new annotated dataset based on real-world data from the public mail-archives of one such organization - the Internet Engineering Task Force (IETF). We provide an initial data analysis showing that this dataset can be used to better understand decision-making in such organizations. Finally, we experiment with a preliminary transformerbased dialog act tagging model. ## 1 Introduction And Related Work Motivation Online collaboration has been used for many years by large distributed organizations. The increasing availability of high-speed Internet connections and collaboration tools, along with the Covid-19 pandemic, are making it ever more prevalent. Large distributed organizations of this type often undertake important tasks. For example, the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C) are responsible for developing the technical standards that underpin the Internet. Consequently, understanding the decision-making processes in this type of organization is essential to increase transparency and accountability, to facilitate tracking of decisions and the reasoning behind them, and to understand alternatives that were considered (or not) and the voices that were (or were not) heard. Goals Most studies of decision making in text (e.g. Hsueh and Moore, 2007; Fernández et al., 2008; Bui and Peters, 2010) rely on annotation and analysis of *Dialogue Acts* (DAs). We adopt this approach and label emails from public IETF mailing lists with DAs. Our aim is to answer the following research questions: **RQ1:** What is an appropriate set of DAs to use for this annotation task?; **RQ2:** How do communication patterns change through the life-cycle of a decision discussion?; and **RQ3:** How do different types of participants differ in how they contribute to the process? The overall goal of these questions is to better understand the mechanisms underlying the decision-making process in a large, distributed, collaborative organization. Related Datasets The most notable email-based related dataset is the Enron Corpus (Klimt and Yang, 2004), covering over 200K messages of Enron employees in various positions within the organization. However, in-house emails of a single closed company are not representative of communication in larger, more diverse collaborations. Datasets specifically relevant for studying decision making include AMI (McCowan et al., 2005) and ICSI/MRDA (Janin et al., 2003; Shriberg et al., 2004). However, the AMI dataset is not "real": it uses actors acting out small-group meetings on predefined topics. 
In contrast, the ICSI dataset is based on naturally occurring meetings at the International Computer Science Institute (ICSI). While both are annotated with general dialogue act labels, AMI also includes specific decision-oriented dialogue acts provided by Fernández et al. (2008). Despite this, they are not representative of interaction in large groups, or online collaborative settings. Consequently, we annotate a new dataset tailored to address our research questions. We denote it as Largeorganization Email-based Decision-dialogue-act Analysis dataset - **LEDA**. There are important differences between LEDA and AMI/ICSI. First, while AMI/ICSI are transcribed face-to-face, real-time, in-person, and small-group meetings. LEDA contains emails from mailing-lists, asynchronous, and from a large decentralized, globally spread group. Second, AMI/ICSI discuss mostly self-contained, focused 6080 topics (design, research-group progress); LEDA discusses the more long-term, complex task of designing Internet-standards. We further provide a more detailed comparison of LEDA with AMI in Appendix A. Contributions First, we propose a taxonomy of DA labels for large-group email decision-making. Second, we provide a novel dataset labeled with DAs. Third, we provide data analyses exploring decision-making communication patterns within the IETF. Fourth, we provide a preliminary DA prediction model on this dataset, which can serve as a reference baseline model for future work. ## 2 Dataset Our data consists of emails from the IETF mailing list archive.1 The IETF is a typical example of decision making in a large, distributed, online collaborative community; it has rich metadata available via the IETF DataTracker;2and the data is publicly available with appropriate consent.3 IETF background The IETF is a large, open, voluntary organization tasked with developing Internet standards (Flanagan, 2019; McQuistin et al., 2021; Khare et al., 2022). It is comprised of *working groups* (WGs), each focusing on a relatively narrow field: e.g., RMCAT4 WG focuses on specific Real-time Media Congestion Avoidance Techniques. Each WG has one or more participants as chairs. During its development, an Internet standard is called a *draft*. Drafts are discussed in the mailing lists (the archive has >2M emails, predominantly in English, between 56k participants over 20 years) and in several live meetings yearly. After sufficient revision and review, a draft becomes an Internet standard. Data preparation The email archive consists of threads (sets of emails connected with reply-to relations, forming a tree-like structure). Given a particular draft, we extract all threads with at least one message that mentions the draft in either the subject or body. We do this for four drafts, chosen by an IETF expert to span a range of technical areas. We opted for entire threads over a smaller number of drafts (rather than more drafts but with partial threads) to ensure a full view of the draft discussion and agreement process over its life-cycle. We then preprocess all messages, splitting them into Quote, *Signature*, or *Normal* segments using custom heuristics developed for this data. A *Normal* segment contains text written by the author of the message. A *Quote* segment contains text written by someone else, which is being quoted. A Signature segment contains signatures (name, company name, website). *Normal* segments are useful for analysis, while the rest introduce noise. We also keep track of quoting relations between segments. 
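The splitting heuristics are custom to this corpus and not reproduced here; the following is only an illustrative sketch of such a segmenter, assuming conventional email formatting (quoted lines prefixed with '>' and a '-- ' signature delimiter):

```python
import re

def split_email(body: str):
    """Split an email body into (label, text) segments, where label is
    'Quote', 'Signature' or 'Normal' (a simplified stand-in for the
    custom heuristics used on the IETF archive)."""
    segments, current, label = [], [], None
    in_signature = False
    for line in body.splitlines():
        if in_signature or line.rstrip() == "--":
            line_label = "Signature"       # everything after '-- ' is signature
            in_signature = True
        elif line.lstrip().startswith(">") or re.match(r"^On .+ wrote:$", line.strip()):
            line_label = "Quote"           # quoted text or attribution line
        else:
            line_label = "Normal"          # text written by the message author
        if line_label != label and current:
            segments.append((label, "\n".join(current)))
            current = []
        label = line_label
        current.append(line)
    if current:
        segments.append((label, "\n".join(current)))
    return segments
```

Quoting relations can then, for example, be recovered by matching Quote segments against the Normal segments of the parent message in the thread.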
Label set calibration As our starting point, we take the DA labels defined in the ISO 24617-2 standard (Bunt et al., 2012). Cross-referencing with labels in datasets from related work and manual inspection of the IETF data suggested that much of the complexity in the standard is not needed for our goals. This was confirmed in several initial rounds of annotations where we observed considerable confusion between the very fine grained ISO 24617-2 DAs on our data. After each iteration, we simplified the label set by removing irrelevant labels for email communication (e.g., rhetorical devices such as pauses) and aggregating hard to distinguish labels (e.g., accepting a request and agreeing to an opinion). Table 1 presents our twolevel taxonomy with three coarse grained labels divided into eleven fine-grained ones, which was obtained after four rounds of calibration. Annotation Annotation of each segment with DA labels was carried out by seven student annotators, all with a background in linguistics. A segment can be assigned several DAs simultaneously (a multi-label setting). During the calibration rounds, annotators provided feedback which helped modify the taxonomy and instructions. For the final annotation, they were provided a detailed set of instructions and an annotation tool specifically developed in-house. Table 1 reports data statistics and inter-annotator agreement (IAA). Each thread is annotated by at least two annotators. To measure IAA, we considered both Fleiss' Kappa and Krippendorff's Alpha, but neither supports multi-label annotation. Instead, we consider one annotator's labels as "gold labels," and another's as "classifier predictions." We calculate the F1 score for all annotator pairs and average them. This calculation is performed on a subset of 15 threads labeled by all annotators. For some labels, the annotation is inherently difficult, as reflected in the IAA. Manual inspection reveals that many of these disagreements may be impossible to completely resolve as the task is subjective (Uma et al., 2021). For example, *ClarificationElicitation* is more often implicit (*"I don't see why ...*") than explicit (*"Can you explain why ...*"), introducing disagreement. However, recent work (Pavlick and Kwiatkowski, 2019; Basile et al., 2021; Leonardelli et al., 2021) shows it is viable to design models and evaluation measures that account for this inherent ambiguity instead of trying to resolve it. Accordingly, we release all individual annotators' labels with the text data and code. 5 While covering only four drafts, LEDA is of substantial size (8230 segments, 2196 messages, 363 authors), with the drafts hand-picked by an IETF expert to ensure they are representative. We focus on trends that are very prominent and supported by statistical significance tests. Finally, an inspection of plots for individual drafts revealed that the main trends outlined in the remaining sections were consistent across all four drafts. ## 3 Analysis Of Gold-Standard Labels 3.1 Draft Life-Cycle To address RQ1, we divide the period between the submission and publication of a draft into five equal time intervals (T1 - T5), each representing 20% of the period. We visualize the distribution of DAs falling into each of the periods. in Figure 1. 6 Answer and *Question* are more common in the early phases, likely due to more new issues being raised and unresolved issues discussed. 
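A short sketch of how such per-period distributions can be derived, assuming a dataframe with hypothetical `date` and `label` columns for the annotated segments (these column names are illustrative, not from the released data):

```python
import pandas as pd

def da_distribution_by_period(df, submitted, published, n_periods=5):
    """Normalised dialogue-act distribution per life-cycle period (T1..Tn).
    df must contain a 'date' (datetime) and a 'label' (dialogue act) column."""
    span = (published - submitted) / n_periods
    period = ((df["date"] - submitted) // span).clip(0, n_periods - 1).astype(int)
    counts = df.groupby([period, "label"]).size().unstack(fill_value=0)
    return counts.div(counts.sum(axis=1), axis=0)  # rows: T1..Tn, columns: DAs
```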
ContextSetting and *Extension* are very frequent, increasingly so towards the end phases; we conjecture this is because those phases cover more complex issues requiring more background description. The frequency of *ProposeAction* is stable throughout the cycle and noticeably higher than *StateDecision*. This may imply that participants prefer to discuss actionable options rather than explicitly deciding on a single one.

## 3.2 Different Groups

We compare the DA distributions of several participant groups: (1) authors of the draft, or not; (2) influential - following (Khare et al., 2022), having top-10% centrality in the email interaction graph - or not; (3) chairs of any IETF WG, or not; (4) everyone (all participants). Figure 2 gives a visualization of DA distributions for each group.

Authors vs. non-Authors Authors are more social, give more answers, and ask fewer questions (including clarification questions). Also, they use fewer *NeutralResponse*, *Extension*, and *ContextSetting*, indicating shorter, more focused messages. These trends imply they take a more reactive role in the discussion. Finally, they make the most decisions in the discussion, as would be expected, since they are in charge of the writing process.

Influential vs. non-Influential Influential people use *Answer*, *Agreement*, and *NeutralResponse* more, making them generally more responsive. They use less *Extension*, *ContextSetting*, and *Thanking*, implying a concise, focused communication style. As expected, they make more decisions and propose slightly more actions.

Chairs vs. non-Chairs Similar to influential participants, chairs use *NeutralResponse* more than non-Chairs. However, they use more *ContextSetting* and *Extension*, and do more *Thanking*. We find this is because chairs send a lot of emails initiating and managing discussions and review assignments. Such emails are often composed of many small segments and contain a lot of these labels.

Feedback to questions We further explored how likely the different groups are to have their questions answered. From the labeled data we obtain percentages for authors (22%), chairs (51%), influential (34%), and everyone (37%). Authors have the lowest ratio, possibly because their questions are, on average, more complex. The chairs, while they tend not to ask many questions, are the most likely to get an answer. This is expected, as it is difficult to ignore a question from someone in that position. Surprisingly, the difference between the ratios of influential participants and everyone is not statistically significant.7 Another surprising finding is that, on average, around two thirds of all questions appear to remain unanswered.

7We used a z-test with significance level 0.05.

| Label | Description | Example | Count | IAA |
|---|---|---|---|---|
| **InformationProviding** | Any type of providing information | | 7643 | .86 |
| Agreement | Agreeing with opinion or accepting a task | That's a good idea. | 651 | .74 |
| Answer | Answering a question | It is 42 bytes. | 655 | .73 |
| ContextSetting | Providing context before other DAs | Imagine the case when ... | 212 | .25 |
| Disagreement | Disagreeing with opinion or rejecting a task | I don't think so. | 365 | .68 |
| Extension | Natural continuation of the previous one. | Moreover, it's faster. | 3007 | .65 |
| NeutralResponse | Response without clear (dis)agreement | Your idea seems interesting. | 2066 | .71 |
| ProposeAction | Propose an actionable activity | We should update the text. | 2225 | .65 |
| StateDecision | Explicitly express a decision | We will incorporate this. | 359 | .63 |
| **InformationSeeking** | Any type of seeking information | | 1146 | .84 |
| ClarificationElicitation | Expresses need for further elaboration. | Could you explain again … | 326 | .29 |
| Question | Any type of question. | How big is the header? | 865 | .86 |
| **Social** | Social acts (thanking, apologizing etc.) | | 1040 | .67 |
| Thanking | Conveying thanks. | Thanks for the comment. | 249 | .98 |

Table 1: Labels at the higher (bold) and lower levels of the taxonomy with corresponding counts and inter-annotator agreement.

![3_image_0.png](3_image_0.png)

Figure 1: DA distribution across time. Each column is a DA distribution in a particular time period of the draft life-cycle. Colors convey the probability mass assigned to a DA in emails from that period.

![3_image_1.png](3_image_1.png)

## 3.3 Other Observations

ClarificationElicitation is almost nonexistent, implying either very few misunderstandings or unwillingness to explicitly voice them. Research on misunderstandings in dialog (Aberdeen and Ferro, 2003) implies it is likely the latter. Most participants tend to use *NeutralResponse*, as opposed to *Agreement* or *Disagreement*, and between the latter two they prefer *Agreement*. This tendency is confirmed by related research on agreement (Stromer-Galley and Muhlberger, 2009). *ContextSetting*, *Extension*, and *NeutralResponse* are, expectedly, very frequent. This implies there are a lot of boilerplate explanations around segments with more relevant DAs.

## 4 Automated Dialogue Act Tagging

We provide a preliminary DA tagging model to investigate the predictability of our DA tags, and to serve as a baseline for future work. We use a hierarchical sequence model, inspired by work in DA tagging for spoken dialogue (e.g. Li et al., 2019): the input is a sequence of segments (each one a sequence of words), and the output is a sequence of predictions, one 14-dimensional vector for each input segment, representing DA probabilities. Each input segment is encoded into a vector; we use the [CLS] token of BERT (Devlin et al., 2019). The sequence of segment vectors is then passed to a Bidirectional-LSTM (Hochreiter and Schmidhuber, 1997); each BiLSTM hidden state vector is passed through a linear layer (shared for all time steps) to produce the output prediction vector sequence. The loss function is binary cross-entropy averaged across all labels and all elements of the sequence. The model is implemented using PyTorch (Paszke et al., 2019) and scikit-learn (Pedregosa et al., 2011). We used a learning rate of 2e−5, a batch size of 32, and an LSTM hidden-layer size of 256. All other hyper-parameters are left at default values. We experiment with two variants of BERT: bert-base and bert-base-ietf (fine-tuned using language modeling loss on the entire IETF mail archive). We split the data into train (60%), validation (20%), and test (20%) threads. We report results on test threads for the model that performs best on the validation threads. The input sequences for the model are the possible root-to-leaf paths in the input threads, following (e.g. Zubiaga et al., 2016).8
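A rough PyTorch sketch of this hierarchical tagger is shown below. It is illustrative only: the class name, the encoder checkpoint, and the single-path (batch of one root-to-leaf path) forward pass are placeholders and simplifications, not the released implementation.

```python
import torch.nn as nn
from transformers import AutoModel

class HierarchicalDATagger(nn.Module):
    """Segment encoder (BERT [CLS] vectors) -> BiLSTM over the segment
    sequence -> shared linear layer producing one 14-dim score per segment."""

    def __init__(self, n_labels=14, encoder_name="bert-base-uncased", hidden=256):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)  # placeholder checkpoint
        self.lstm = nn.LSTM(self.encoder.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_labels)  # shared across time steps

    def forward(self, input_ids, attention_mask):
        # input_ids / attention_mask: (n_segments, max_tokens) for one root-to-leaf path
        cls = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        states, _ = self.lstm(cls.unsqueeze(0))   # (1, n_segments, 2 * hidden)
        return self.out(states).squeeze(0)        # logits: (n_segments, n_labels)

# Multi-label objective: binary cross-entropy averaged over labels and segments.
loss_fn = nn.BCEWithLogitsLoss()
```

Because a segment can carry several DAs at once, the output is treated as independent binary decisions per label rather than a softmax over mutually exclusive classes.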
Results are given in Table 2. Predicting higher-level labels is easier, as expected. For lower-level labels, performance is worst for labels that are conceptually more subjective (as reflected by IAA) or have very few examples.

| Label | bert-base | | | bert-base-ietf | | |
|---|---|---|---|---|---|---|
| | P | R | F1 | P | R | F1 |
| InfProviding | .89 | .96 | .93 | .88 | .97 | .93 |
| Agreement | .67 | .72 | .69 | .47 | .67 | .55 |
| Answer | .44 | .40 | .41 | .35 | .49 | .41 |
| ContextSetting | .38 | .67 | .49 | .36 | .67 | .47 |
| Disagreement | .14 | .24 | .17 | .10 | .29 | .15 |
| Extension | .64 | .72 | .67 | .66 | .62 | .64 |
| NeutralResponse | .45 | .52 | .48 | .43 | .52 | .47 |
| ProposeAction | .47 | .72 | .57 | .44 | .67 | .53 |
| StateDecision | .39 | .28 | .47 | .19 | .30 | .23 |
| InfSeeking | .85 | .87 | .86 | .78 | .84 | .81 |
| ClarificationEl. | .25 | .46 | .33 | .21 | .51 | .30 |
| Question | .78 | .98 | .87 | .84 | .88 | .86 |
| Social | .33 | .67 | .44 | .45 | .52 | .48 |
| Thanking | .75 | .99 | .86 | .33 | .92 | .48 |
| Macro-average | .53 | .66 | .59 | .46 | .63 | .52 |

Curiously, bert-base-ietf performs comparably to or worse than bert-base. We hypothesize the reason for this may be the specific language of the IETF (technical discussions). It may cause the additional language model training step to make the bert-base-ietf model forget information generally useful for DA tagging. On the other hand, this information is retained in bert-base. If this is the case, it would hurt the performance of bert-base-ietf after further fine-tuning on the DA tagging task. However, we leave investigation of this and other hypotheses for this unexpected result to future work.

## 5 Conclusion

We have presented a taxonomy of dialogue acts (DAs) and a labeled dataset of emails. Moreover, we provided a data analysis and a preliminary DA prediction model. We hope this dataset will be useful to facilitate further research on the interaction behavior of participants in online collaboration settings. Future work could include a more detailed investigation into the underlying reasons for the observed trends. Another possibility is looking into the interaction of DAs and the participant interaction graph as described by Khare et al. (2022). Finally, to get further insights, it would be interesting to annotate segments with a particular DA with additional labels, e.g., explicit/implicit for *Agreement* or different sub-types of *Question*.

## 6 Limitations

One of the main limitations is that we focus solely on the IETF. Consequently, we can never be completely sure how well our findings generalize to other similar organizations without further annotation. We are also limited by not conducting a hyperparameter search on our models. We omit this step as the main goal is not maximizing performance, but rather data annotation and analysis. In a similar vein, it is likely possible to increase performance by using a more advanced model that is either trained on dialogue-like data or is specifically designed to exploit phenomena specific to dialogue (e.g., having speaker embeddings). We also acknowledge that many emails are longer than 512 tokens, which is the limit of our BERT model, and thus might have been cut short. However, most of the emails do fit into this limit.

## 7 Ethical Considerations

The IETF conditions participation by agreements and policies that explicitly state mailing list discussions and Datatracker metadata will be made publicly available.9 In our analysis we use only this publicly available data.
We have discussed our work with the IETF leadership and confirmed it is conforming to all their policies. ## Acknowledgements We thank the anonymous reviewers for their helpful comments. This work was supported by the UK EPSRC under grants EP/S033564/1 and EP/S036075/1 (Sodestream: Streamlining Social Decision Making for Enhanced Internet Standards). Purver was also supported by the Slovenian Research Agency via research core funding for the programme Knowledge Technologies (P2-0103). ## References John Aberdeen and Lisa Ferro. 2003. Dialogue patterns and misunderstandings. Technical report, MITRE Corp. McLean VA. Valerio Basile, Michael Fell, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank, Massimo Poesio, and Alexandra Uma. 2021. We need to consider disagreement in evaluation. In *Proceedings of the* 9For details see https://www.ietf.org/about/ note-well/ and the IETF privacy policy available at https://www.ietf.org/privacy-statement/. 1st Workshop on Benchmarking: Past, Present and Future, pages 15–21, Online. Association for Computational Linguistics. Trung Bui and Stanley Peters. 2010. Decision detection using hierarchical graphical models. In *Proceedings* of the ACL 2010 Conference Short Papers, pages 307–312. Harry Bunt, Jan Alexandersson, Jae-Woong Choe, Alex C Fang, Koiti Hasida, Volha Petukhova, Andrei Popescu-Belis, and David Traum. 2012. Iso 24617-2: A semantically-based standard for dialogue annotation. Technical report, University of Southern California Los Angeles. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Raquel Fernández, Matthew Frampton, Patrick Ehlen, Matthew Purver, and Stanley Peters. 2008. Modelling and detecting decisions in multi-party dialogue. In *Proceedings of the 9th SIGdial Workshop on Discourse and Dialogue*, pages 156–163. H Flanagan. 2019. RFC 8700: Fifty Years of RFCs. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*, 9(8):1735– 1780. Pei-Yun Hsueh and Johanna D Moore. 2007. What decisions have you made?: Automatic decision detection in meeting conversations. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 25–32. Adam Janin, Don Baron, Jane Edwards, Dan Ellis, David Gelbart, Nelson Morgan, Barbara Peskin, Thilo Pfau, Elizabeth Shriberg, Andreas Stolcke, et al. 2003. The icsi meeting corpus. In *2003 IEEE International Conference on Acoustics, Speech, and* Signal Processing, 2003. Proceedings.(ICASSP'03)., volume 1, pages I–I. IEEE. Prashant Khare, Mladen Karan, Stephen McQuistin, Colin Perkins, Gareth Tyson, Matthew Purver, Patrick Healey, and Ignacio Castro. 2022. The web we weave: Untangling the social graph of the IETF. In *Proceedings of the 16th International AAAI Conference on Web and Social Media (ICWSM)*, pages 500–511, Palo Alto, CA, USA. AAAI Press. Bryan Klimt and Yiming Yang. 2004. The enron corpus: A new dataset for email classification research. In European conference on machine learning, pages 217–226. Springer. 
Elisa Leonardelli, Stefano Menini, Alessio Palmero Aprosio, Marco Guerini, and Sara Tonelli. 2021. Agreeing to disagree: Annotating offensive language datasets with annotators' disagreement. In *Proceedings of the 2021 Conference on Empirical Methods in* Natural Language Processing, pages 10528–10539. Ruizhe Li, Chenghua Lin, Matthew Collinson, Xiao Li, and Guanyi Chen. 2019. A dual-attention hierarchical recurrent neural network for dialogue act classification. In *Proceedings of the 23rd Conference on Computational Natural Language Learning* (CoNLL), pages 383–392, Hong Kong, China. Association for Computational Linguistics. Iain McCowan, Jean Carletta, Wessel Kraaij, Simone Ashby, S Bourban, M Flynn, M Guillemot, Thomas Hain, J Kadlec, Vasilis Karaiskos, et al. 2005. The AMI meeting corpus. In Proceedings of the 5th International Conference on Methods and Techniques in Behavioral Research, volume 88, page 100. Citeseer. Stephen McQuistin, Mladen Karan, Prashant Khare, Colin Perkins, Gareth Tyson, Matthew Purver, Patrick Healey, Waleed Iqbal, Junaid Qadir, and Ignacio Castro. 2021. Characterising the IETF through the lens of RFC deployment. In Proceedings of the 21st ACM Internet Measurement Conference, pages 137–149. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d' Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc. Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent disagreements in human textual inferences. *Transactions of the Association for Computational Linguistics*, 7:677–694. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12:2825–2830. Elizabeth Shriberg, Raj Dhillon, Sonali Bhagat, Jeremy Ang, and Hannah Carvey. 2004. The icsi meeting recorder dialog act (mrda) corpus. Technical report, International Computer Science Institute, Berkeley CA. Jennifer Stromer-Galley and Peter Muhlberger. 2009. Agreement and disagreement in group deliberation: Effects on deliberation satisfaction, future engagement, and decision legitimacy. *Political communication*, 26(2):173–192. Alexandra N Uma, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank, and Massimo Poesio. 2021. Learning from disagreement: A survey. *Journal of* Artificial Intelligence Research, 72:1385–1470. Arkaitz Zubiaga, Elena Kochkina, Maria Liakata, Rob Procter, and Michal Lukasik. 2016. Stance classification in rumours as a sequential task exploiting the tree structure of social media conversations. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2438–2448, Osaka, Japan. The COLING 2016 Organizing Committee. ## A Appendix A: Comparison With The Ami Dataset In this section, we compare our dataset with the AMI dataset (McCowan et al., 2005). 
The counts are given in Table 3, after removing those AMI DA categories that make sense in AMI's spoken, faceto-face setting but do not exist in the email given the text modality and non-synchronous nature (e.g. Stall, Backchannel). The distributions are roughly similar. The main difference is a lot more *ClarificationElicitation* and *Answer* in AMI. The former may reflect the explicitly decision-oriented setting of AMI (actors were tasked with making design decisions on how to build a remote control, and therefore decisions and clarity were the primary focus), and/or its synchronous speech, which participants must clarify immediately (while email can be studied over more time before replying). The latter may reflect the fact that AMI is built on live face-to-face conversations, thus leaving an articulated question ignored and unanswered would be considered rude, while in email communication, this is less problematic. ## B Appendix B: Computing Resources The prediction model experiments (two of them – bert-base and bert-base-ietf) were run on a single Nvidia QUADRO RTX 6000 GPU for 100 epochs each. For both experiments, one epoch took approximately 4 minutes. In preliminary experiments, we found the models with our hyperparameters need 14GB of video memory. They can, however, run with less memory with reduced batch size. Alternatively, larger batches could be emulated using several smaller batches and gradient accumulation (this is not implemented in our code). | AMI | This work | | | |----------------------------------------------------------------|-------------|--------------------------|-------| | label | count | label | count | | Inform | 33484 | InformationProviding | 7643 | | Assess | 21391 | Answer | 655 | | Suggest / Offer | 10921 | ProposeAction | 2225 | | Elicit-Inform / Elicit-Offer-Or-Suggestion / Elicit-Assessment | 7191 | Question | 865 | | Comment-About-Understanding / Elicit-Comment-Understanding | 2560 | ClarificationElicitation | 326 | | Be-Positive | 2210 | Agreement | 651 | | Be-Negative | 98 | Disagreement | 365 | ## C Appendix C: Annotation Details The annotators come from diverse backgrounds but were primarily chosen as skilled linguists from the population of graduate and Ph.D. level linguistics students. They all lived in the UK and were paid an hourly wage that was slightly above average for similar tasks in the UK. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 6 ✓ A2. Did you discuss any potential risks of your work? 6 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 2,4 ✓ B1. Did you cite the creators of artifacts you used? 2,4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 2 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 2,7 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 7 ✓ B5. 
Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 2 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 2,4 ## C ✓ **Did You Run Computational Experiments?** 3,4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix 2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4,6 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** 2 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? 2, Appendix 3 ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Appendix 3 ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? 7 D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. The data is already public. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Appendix 3
wu-sun-2023-negation
Negation Scope Refinement via Boundary Shift Loss
https://aclanthology.org/2023.findings-acl.379
Negation in natural language may affect many NLP applications, e.g., information extraction and sentiment analysis. The key sub-task of negation detection is negation scope resolution which aims to extract the portion of a sentence that is being negated by a negation cue (e.g., keyword "not" and "never") in the sentence. Due to the long spans, existing methods tend to make wrong predictions around the scope boundaries. In this paper, we propose a simple yet effective model named R-BSL which engages the Boundary Shift Loss to refine the predicted boundary. On multiple benchmark datasets, we show that the extremely simple R-BSL achieves best results.
# Negation Scope Refinement Via Boundary Shift Loss Yin Wu and **Aixin Sun** Nanyang Technological University, Singapore [email protected], [email protected] ## Abstract Negation in language may affect many NLP applications, *e.g.,* information extraction and sentiment analysis. The key sub-task of negation detection is *negation scope resolution* which aims to extract the portion of a sentence that is being negated by a negation cue (*e.g.,* keyword "not" and "never") in the sentence. Due to the long spans, existing methods tend to make wrong predictions around the scope boundaries. In this paper, we propose a simple yet effective model named **R-BSL** which engages the Boundary Shift Loss to refine the predicted boundary.1 On multiple benchmark datasets, we show that the extremely simple R-BSL achieves best results. ## 1 Introduction Negation is a complex linguistic phenomenon. Even though there does not exist a widely agreed task definition for negation detection, two sub-tasks are commonly performed: (i) *negation cue detection*, and (ii) *negation scope resolution*. Negation cue is a keyword (*e.g.,* not, never) in a sentence that acts as an indicator of semantic negation, and its detection is relatively easy. Negation scope refers to the portion(s) in a sentence being semantically affected (*i.e.,* negated) by the cue. There could be multiple cues in one sentence and each corresponds to its own scope. Table 1 lists three cues in the same sentence and their scopes. Different datasets may adopt different annotation guideline of scopes, *e.g.,* whether or not a cue itself is a part of its scope. The example sentence in Table 1 well demonstrates the unique characteristics of this task compared to other span extraction tasks like Named Entity Recognition (NER). They are: (i) a negation scope is defined by (or associated to) a given cue, (ii) the negation spans are usually longer than a named entity, and (iii) a good number 1Our code is available at https://github.com/ LuciusLan/BSL_Negation of negation spans are discontinuous, depending on the adopted annotation guideline. In recent years, pretrained language models (PLMs) like BERT (Devlin et al., 2019) have been explored to improve negation detection (Khandelwal and Sawant, 2020; Khandelwal and Britto, 2020). Specially designed pre-training material that focuses on negation has also been explored and achieves state-of-the-art performance (Truong et al., 2022). Nevertheless, we believe that negation detection shall be considered as a pre-processing step for downstream subtasks and its model shall not be over-complicated. In this paper, we enhance a simple baseline by Khandelwal and Sawant (2020) with an effective Boundary Shift Loss (BSL), to refine the predicted negation scope boundaries. BSL is derived based on the positions of span boundaries. For each token, boundary shift tells the direction of the nearest span boundary: left or right. With the simple BERT + Feed-forward architecture, our R-BSL model outperform baselines on all well-known datasets. ## 2 Related Work Negation detection was firstly studied in biomedical and health texts, represented by NegEx (Chapman et al., 2001) developed for EHRs. NegEx is built on top of regular expressions; its negation scopes are mainly named entities. The definition of negation scope becomes largely different and more generic in later datasets. The BioScope corpus (Vincze et al., 2008) annotates negation scope in biological full papers and scientific abstracts. 
The "Sherlock" corpus (Morante and Blanco, 2012), annotates Conan Doyle's novels *Sherlock Holmes* series. SFU Review Negation corpus (Konstantinova et al., 2012) annotates negations and speculations in the SFU Review corpus (Taboada et al., 2006) for sentiment analysis. Like many other NLP tasks, BERT leads to significant improvement on scope resolution (Khan- Cue Negation scope marked in discontinuous "*span*" s in- Mr. Sherlock Holmes, who was usually very late in the mornings, save upon "*those*" not [cue] in- [/cue] "frequent occasions when he was up all night", was seated at the breakfast table. not Mr. Sherlock Holmes, who was usually very late in the mornings, save upon "*those*" [cue] not [/cue] "infrequent occasions when he was up all night", was seated at the breakfast table. save Mr. Sherlock Holmes, "*who was*" usually "*very late in the mornings*", [cue] save [/cue] "upon those not infrequent occasions when he was up all night", was seated at the breakfast table. Table 1: An example sentence with three different negation cues, and their corresponding scopes. The cues are marked with special tokens [cue] cue [/cue], and scopes "*span*" s are in italic with double quotation marks. delwal and Sawant, 2020). Results are further improved in later research (Khandelwal and Britto, 2020) with more advanced PLMs like RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019), together with multi-task training. Recently, Truong et al. (2022) utilize additional pre-training with negation cue masking, achieving better performance on BioScope and SFU, but poorer results on Sherlock. Nevertheless, the higher performance comes with the price of extra training resources and time. ## 3 Problem Definition As a common practice, we assume that negation cue has been successfully detected. Our key focus is *negation scope resolution* for a given cue. For presentation simplicity, we assume there is only one cue in a given sentence. The cases of multiple cues can be easily achieved by sentence duplication, each time with a different known cue being wrapped with special indicator tokens. The model would be trained to predict negation scope of each cue separately. Table 1 gives a typical example of how sentence with three negation cues and three corresponding scopes is being pre-processed by duplication and the special indicator tokens [cue] [/cue]. Given an input sequence S = ⟨t1, t2*, ..., t*n⟩, with a known cue, the task is to predict the cue's negation score in token spans. We adopt the OSC tagging scheme: Y = ⟨y1, y2*, ..., y*n⟩ where yiis O if tiis non-scope, S if tiis part of the scope, and C if tiis the given cue. We use a dedicated label "C" for cue, to satisfy the annotation guidelines in different datasets, *i.e.,* not all annotations consider cue as a part of the scope. ## 4 The R-Bsl Model The central idea of Boundary Shift Loss is inspired by techniques used for semantic segmentation. ![1_image_0.png](1_image_0.png) (c) local distance → direction map ![1_image_1.png](1_image_1.png) Background. Locating accurate segmentation boundary is particularly important for medical images such as MRI, as the boundary for body organ is crucial. In a 2D image, we can represent the deviation of the predicted boundary with ground truth boundary in the form of a distance map, as shown in Figure 1. Each pixel in the example image is mapped with a normalized distance to its nearest ground truth boundary pixel, forming the boundary distance map. 
For a typical pixel, the distance map could be reduced to a *local distance map* of 3 × 3, containing the distance of the pixel itself and those of its eight neighbours. The cell with the smallest distance (*e.g.,* the top left cell in the example) represents the direction to the nearest boundary. To indicate this direction, the local distance map can be further reduced to a one-hot *local direction map*, where the "1" cell represents the direction of the nearest boundary. Accordingly, the predicted boundary can be further refined toward this direction for more accurate boundary prediction (Wang et al., 2022). Span extraction tasks in NLP share the same aim of finding accurate region boundaries, but in a 1D space, *i.e.,* along the token sequence, so a predicted boundary can only shift left or right.

![2_image_0.png](2_image_0.png)

![2_image_2.png](2_image_2.png)

## 4.1 Boundary Shift Map

To enable boundary shift loss, we convert the scope labels to scope span boundary labels. BS = ⟨bs1, bs2, ..., bsn⟩ and BE = ⟨be1, be2, ..., ben⟩ are the two label sequences that represent the start and end of boundaries, respectively. bsi is Bs if ti is the start of a scope span, and O otherwise; bei is Be if ti is the end of a scope span, and O otherwise. If a span consists of only one token, the token itself is labeled both Bs and Be. Due to discontinuous spans, there could be multiple Bs and Be labels for one given cue, as shown in Figure 2.

Next, we create the "Boundary Shift Map" (BSM) for tokens that are not on the boundaries, by labeling their **shifting directions**: L for left, and R for right. The 5th and 6th rows in Figure 2 provide a visual illustration, for start and end boundaries respectively. A token is labeled with L / R if the nearest boundary resides on the left / right of the token. For the special case that a token has the same distance to the boundaries on its left and right, we label the token with R.

## 4.2 R-BSL Model Detail

Figure 3 illustrates the model architecture. We use BERT to encode the sentence and then use three feed-forward (FF) layers in parallel, to predict the scope label and the BSM labels. The losses for the three label classifiers Lscope, Lstart, Lend are the widely used Cross Entropy loss. L*scope* is formally defined in Eq. 1 and the other two losses are defined similarly. The three losses are then combined to form the final loss in Eq. 2, and we set α = 0.2.

![2_image_1.png](2_image_1.png)

$$L_{scope}=-\sum_{i=1}^{N}y^{(i)}\log(\hat{y}^{(i)})\qquad\qquad(1)$$

$$L=\alpha L_{scope}+\frac{1-\alpha}{2}\left(L_{start}+L_{end}\right)\quad(2)$$

Warm Up. In training, there is a "warm up" phase to train the model solely with the scope loss L*scope* for the first 5 epochs (where the validation loss is reasonably stable). Then the boundary shift losses kick in for scope refinement.
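To make the label conversion of Section 4.1 concrete, here is a small illustrative sketch (not the released code) that derives boundary labels and L/R shift maps from an OSC tag sequence; whether the cue token C also counts as in-scope depends on the dataset's guideline and is treated as an assumption here.

```python
def boundary_labels(osc_tags):
    """Derive start (Bs/O) and end (Be/O) boundary labels from an OSC tag
    sequence. Only S tokens are treated as in-scope; handling of the cue
    token C depends on the dataset's annotation guideline."""
    inside = [t == "S" for t in osc_tags]
    n = len(inside)
    starts = ["Bs" if inside[i] and (i == 0 or not inside[i - 1]) else "O" for i in range(n)]
    ends = ["Be" if inside[i] and (i == n - 1 or not inside[i + 1]) else "O" for i in range(n)]
    return starts, ends

def shift_map(boundary):
    """Label every non-boundary token with the direction (L or R) of its
    nearest boundary token; ties are broken towards R, as in Section 4.1.
    Boundary tokens themselves keep the O placeholder."""
    positions = [i for i, b in enumerate(boundary) if b != "O"]
    shifts = []
    for i, b in enumerate(boundary):
        if b != "O" or not positions:
            shifts.append("O")
            continue
        nearest = min(positions, key=lambda p: (abs(p - i), p < i))
        shifts.append("L" if nearest < i else "R")
    return shifts

# Eq. 2 then mixes the three cross-entropy terms:
#   loss = alpha * L_scope + (1 - alpha) / 2 * (L_start + L_end), with alpha = 0.2
```

Applying `shift_map` to the start-label and end-label sequences yields the two shift maps illustrated in the 5th and 6th rows of Figure 2, respectively.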
## 5 Experiments

## 5.1 Experiment Results

We conduct experiments on all three benchmark datasets: Sherlock, BioScope, and SFU. Among them, the BioScope and SFU datasets do not come with an official train-validation-test split. Following the previous studies, we use a random split with 70-15-15 ratios; however, the randomness in the split may slightly affect model performance. Hence, we also report the result of our re-implemented baseline model of Khandelwal and Sawant (2020), which is a BERT + Feed-forward with OSC scope tags. Table 2 reports the results of F1 over scope tokens, defined by Morante and Blanco (2012). For each scope, token-wise F1 is computed between ground truth and predicted scope tokens. For all our implemented models, the reported results are average scores of 5 out of 7 runs, excluding the highest and lowest scores. All the runs are set with randomly generated seeds. Since Truong et al. (2022) use RoBERTa instead of BERT, we also report R-BSL (RoBERTa-base) for fair comparison.

| Dataset | Sherlock | | | BioScope-Abstract | | | SFU | | |
|---|---|---|---|---|---|---|---|---|---|
| Method | Pr | Re | F1 | Pr | Re | F1 | Pr | Re | F1 |
| Khandelwal and Sawant, 2020 | - | - | 92.36 | - | - | 95.68 | - | - | 90.95 |
| Khandelwal and Britto, 2020 | - | - | - | - | - | 96.68 | - | - | 92.39 |
| Kurtz et al., 2020 * | - | - | 89.71 | - | - | - | - | - | - |
| Truong et al., 2022 ** - Baseline † | - | - | 91.51 | - | - | 94.23 | - | - | 90.44 |
| Truong et al., 2022 ** - CueNB | - | - | 91.24 | - | - | 95.81 | - | - | 91.03 |
| Baseline (Re-Implementation) | 94.79 | 89.50 | 92.06 | 95.90 | 97.30 | 96.59 | 91.22 | 91.20 | 91.21 |
| R-BSL (BERT-base-cased) | 95.12 | 90.57 | 92.77 | 96.33 | 97.37 | 96.85 | 91.55 | 91.27 | 91.43 |
| R-BSL (RoBERTa-base) | 94.54 | 91.24 | 92.85 | 96.29 | 98.54 | 97.40 | 90.80 | 91.51 | 91.14 |

R-BSL achieves best performance on all three datasets, particularly on Sherlock, which comes with an official train/test split. Note that on the Sherlock dataset, our re-implemented baseline does not reach the scores reported in Khandelwal and Sawant (2020).2 Truong et al. (2022) also report lower results (mean of 5 runs) using the code released by Khandelwal and Sawant (2020). Nevertheless, both our R-BSL variants outperform all baselines on Sherlock and on the BioScope dataset. On SFU, our models' improvement is marginal. The main reason is the distributional bias, as the negation scopes largely align with punctuation or special tokens (see Appendix C).

For comprehensive evaluation, Table 3 shows the scope-level F1 scores by exact match. That is, when the predicted scope exactly matches the ground truth, it is considered a True Positive. There exist True Negative and False Positive cases due to "void negation" as discussed in Appendix C. When the ground truth has no negation scope, any scope predicted by the model counts as a False Positive. The scope exact match F1 is similar to the "Scope CM" metric defined in Morante and Blanco (2012). However, as we do not focus on cue detection but use cues as input, the results are not directly comparable with the Scope CM results in earlier studies. Compared to the token-level measure, the improvement of our model over the baseline is now by a much larger margin, particularly for the variant with RoBERTa. In other words, the boundary refinement by BSL enables the model to resolve more accurate negation scopes in terms of exact scope span match, which is a stricter measure.

| Method | Sherlock | BioScope-A | SFU |
|---|---|---|---|
| Baseline (Re-Implemented) | 84.19 | 94.11 | 88.06 |
| R-BSL (BERT-base-cased) | 85.35 | 94.94 | 88.50 |
| R-BSL (RoBERTa-base) | **87.10** | **96.16** | **89.91** |

Table 3: Scope exact match F1 scores on three datasets

## 5.2 Ablation Study

We conduct two ablation studies on the Sherlock dataset, and the results are reported in Table 4.

| Model | Pr | Re | F1 |
|---|---|---|---|
| Baseline (Re-Implemented) | 94.79 | 89.50 | 92.06 |
| R-BSL (BERT-base-cased) | 95.12 | 90.57 | 92.77 |
| Replace BSL with Boundary Labels | 95.34 | 89.27 | 92.10 |
| Boundary classifier only | 94.06 | 75.47 | 83.74 |
| Without "warm up" | 95.45 | 89.22 | 92.22 |

Table 4: Ablation studies on the Sherlock dataset

## Boundary Label vs Boundary Shift Map

We first replace the Boundary Shift Map with the start/end boundary labels (*i.e.,* Bs, Be, O tagging) as the prediction target for the boundary classifiers. A tiny improvement is observed over the baseline, which indicates the usefulness of boundary labels.
However, if the scope classifier (and its OSC tags) is not used and only the boundary classifier with the BSL loss is used, there is a significant drop in F1. This result suggests that scope span detection remains the key focus of the model and the boundary classifier shall focus on boundary refinement.

"Warm Up" of Scope Classifier. We "warm up" the training with the first 5 epochs for the scope classifier only. The boundary classifier with the BSL loss then comes into the picture. To study its impact, we train all three classifiers from the beginning. As shown in Table 4, the removal of warm up leads to a negative impact on results. This ablation study suggests that the BSL can further improve the results when the span boundaries have been detected by the base model, *i.e.,* the scope classifier, at reasonably good accuracy.

## 6 Conclusion

We propose a simple sequence labelling training strategy to enhance boundary prediction for negation scope resolution. Through experiments, we demonstrate the effectiveness of the boundary shift loss on complex span extraction tasks on three benchmark datasets. In particular, our simple model achieves state-of-the-art results on the Sherlock dataset, which is considered more challenging for this task. Our model is simple and can be used as a pre-processing step for downstream tasks where negation is an important consideration.

## Limitations

As shown in the ablation studies, using the Boundary Shift Loss without the base model for scope prediction leads to a huge negative impact on the performance. That is, BSL strongly relies on the assumption that the proposed candidate spans are, to some extent, an accurate estimate of the target spans. The experiment of using BSL alone could be seen as an extreme case in which no candidate spans are proposed at all. For our task, BSL could benefit from the strong base model. For the case of noisy datasets or a more challenging task, where a base model could not generalize to a reasonably good coarse span proposal, the benefit of BSL might be limited.

## Acknowledgments

This research is supported by the Agency for Science, Technology and Research (A*STAR) Singapore, under its AME Programmatic Funding Scheme (Project \#A19E2b0098).

## References

Wendy W. Chapman, Will Bridewell, Paul Hanbury, Gregory F. Cooper, and Bruce G. Buchanan. 2001. A simple algorithm for identifying negated findings and diseases in discharge summaries. *Journal of Biomedical Informatics*, 34(5):301–310.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Association for Computational Linguistics. Aditya Khandelwal and Benita Kathleen Britto. 2020. Multitask learning of negation and speculation using transformers. In *Proceedings of the 11th International Workshop on Health Text Mining and Information Analysis*, pages 79–87, Online. Association for Computational Linguistics. Aditya Khandelwal and Suraj Sawant. 2020. NegBERT: A transfer learning approach for negation detection and scope resolution. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 5739–5748, Marseille, France. European Language Resources Association. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Natalia Konstantinova, Sheila C.M. de Sousa, Noa P. Cruz, Manuel J. Maña, Maite Taboada, and Ruslan Mitkov. 2012. A review corpus annotated for negation, speculation and their scope. In *Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)*, pages 3190–3195, Istanbul, Turkey. European Language Resources Association (ELRA). Robin Kurtz, Stephan Oepen, and Marco Kuhlmann. 2020. End-to-end negation resolution as graph parsing. In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 14–24, Online. Association for Computational Linguistics. Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO: common objects in context. In *Computer Vision -* ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, volume 8693 of Lecture Notes in Computer Science, pages 740–755. Springer. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Roser Morante and Eduardo Blanco. 2012. *SEM 2012 shared task: Resolving the scope and focus of negation. In **SEM 2012: The First Joint Conference on* Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 265–274, Montréal, Canada. Association for Computational Linguistics. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing* Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024–8035. Maite Taboada, Caroline Anthony, and Kimberly Voll. 2006. Methods for creating semantic orientation dictionaries. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06), Genoa, Italy. European Language Resources Association (ELRA). Thinh Truong, Timothy Baldwin, Trevor Cohn, and Karin Verspoor. 2022. 
Improving negation detection with negation-focused pre-training. In *Proceedings* of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4188–4193, Seattle, United States. Association for Computational Linguistics. Veronika Vincze, György Szarvas, Richárd Farkas, György Móra, and János Csirik. 2008. The bioscope corpus: biomedical texts annotated for uncertainty, negation and their scopes. *BMC bioinformatics*, 9(11):1–9. Chi Wang, Yunke Zhang, Miaomiao Cui, Peiran Ren, Yin Yang, Xuansong Xie, Xian-Sheng Hua, Hujun Bao, and Weiwei Xu. 2022. Active boundary loss for semantic segmentation. In *Thirty-Sixth AAAI* Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022, pages 2397–2405. AAAI Press. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 5754–5764. ## A Implementation Details We use BERT-base-cased and RoBERTa-base as the pretrained LMs. The model parameters are optimized with Adam (Kingma and Ba, 2015). The BERT was trained with initial learning rate of 5e − 6, and the classifier layers were trained with initial learning rate of 5e − 5. This is different from Khandelwal and Sawant (2020) where they set both BERT and classifier layers learning rate to 5e − 5. The learning rate was scheduled with "Reduce Learning Rate on Plateau", which cuts the learning rate by half after 3 consecutive epochs without evaluation results being improved, and having cool-down of 2 epochs. We adopt early stopping threshold of 12, which means the training will be stopped when the evaluation results stop to improve for 12 consecutive epochs. The models were implemented with PyTorch (Paszke et al., 2019) and Huggingface Transformers (Wolf et al., 2020). Experiments were performed mainly on a single Nvidia RTX 3080 GPU. One training run took 40 minutes to 1 GPU hour, varied with the dataset size and early-stopping position. Inference time is 7 seconds on Sherlock test-set, 4 seconds on BioscopeAbstract test-set, 16 seconds on SFU test-set, and requires approximately 3 GB of GPU memory. Following Khandelwal and Sawant (2020), we use special augmentation tokens to indicate the appearance of negation cues. However, our indicator tokens are slightly different from Khandelwal and Sawant (2020) where they only add one special token in front. We have special tokens on both ends of a [cue] cue [/cue]. Another small difference is the treating of affixal cues. 
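As an aside before the affixal-cue details below, the optimization setup just described maps onto standard PyTorch APIs roughly as follows. This is a sketch only: `model.encoder`, `model.classifiers`, `train_one_epoch`, `evaluate`, and `max_epochs` are hypothetical names, and monitoring validation F1 with `mode="max"` is an assumption.

```python
import torch

# Two parameter groups: 5e-6 for the pretrained encoder, 5e-5 for the classifier layers.
optimizer = torch.optim.Adam([
    {"params": model.encoder.parameters(), "lr": 5e-6},
    {"params": model.classifiers.parameters(), "lr": 5e-5},
])
# Halve the learning rate after 3 epochs without improvement, with a 2-epoch cool-down.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.5, patience=3, cooldown=2)

best, stale = -1.0, 0
for _ in range(max_epochs):
    train_one_epoch(model, optimizer)
    score = evaluate(model)        # e.g. token-level scope F1 on the validation split
    scheduler.step(score)
    if score > best:
        best, stale = score, 0
    else:
        stale += 1
        if stale >= 12:            # early-stopping threshold of 12 epochs
            break
```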
The Sherlock dataset defines affixal cues, such as "in-" being the cue and "-frequent" being inside the scope for the word "infrequent". In our implementation, we simply treat the whole word as a cue and use post-processing to handle the affixal cue with regular expressions. This is also for the sake of unifying the model behaviour across datasets, as the SFU and BioScope datasets do not have such special annotation for affixal cues. All three datasets contain not only sentences with negations but also sentences without negations, for training models to predict negation cues. As our focus is on negation scope resolution with the assumption that the cue has been detected, we only use the portion of sentences containing negations for training and testing.

![6_image_1.png](6_image_1.png)

## B Impact of α in Training Loss

We perform a hyper-parameter search on the value of α in the loss function (Equation 2). As shown in Figure 4, it appears that Precision improves as α increases; Recall improves initially at the lower range of α, but decreases as α further increases. The "trade-off" between Precision and Recall is best balanced at α = 0.2, where we observe the highest F1. Note that α = 0.3333 denotes the case where no weighting terms are applied, *i.e.,* Loss = Lscope + Lstart + Lend. This is equivalent to α = 1/3, though no α was actually applied in the implementation, for the sake of the stability of floating point calculations. Overall, the impact of α on the token-level F1 score was not significant, since the boundary shift losses serve as auxiliary losses, and the final prediction is still based on the scope classifier.

## C Discussion on Distributional Bias

Comparing the token-level F1 (Table 2) and the span-level exact match F1 (Table 3), both F1's are similar on the BioScope and SFU datasets, but not on the Sherlock dataset. As Fancellu et al. (2017) suggested, the annotation rules of BioScope and SFU had seemingly over-simplified the problem. They also provided statistics on the percentages of negation scopes that can be exactly represented by the closest punctuation tokens to the cue as scope boundaries. The percentages for BioScope Abstracts and SFU are 64% and 80%, respectively, while the value for Sherlock is only 40%.

![6_image_0.png](6_image_0.png)

However, for BERT-based models, researchers often rely on special tokens (*e.g.,* [cue] cue [/cue] for a cue) as indicators of the region of interest. The special tokens themselves can be considered another form of punctuation tokens. Here we provide another set of statistics on scope spans that can be exactly represented by punctuation tokens and cue special tokens, in Table 5. Note that in Sherlock and SFU, the negation cue tokens are not considered part of the scope in their annotation. This would cause a considerable number of additional discontinuous negation scopes; hence, when computing these statistics, we adjust the annotation to also consider cue tokens as part of the scope. Also for SFU, the number of scopes (3528) is much higher than the number of scope spans (3068), as there are a good number of "void" scopes. These are cases where the negation in the sentence is the cue itself, such as "Of course not!". If the annotation rule does not consider negation cues as part of the negation scope, there will be no negation scope in such sentences, and hence we call them "void negation". Such cases are not considered as negation in the annotation guideline of BioScope.
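A rough sketch of how such a statistic could be computed is given below. The exact operationalisation (treatment of discontinuous spans and sentence boundaries) is an assumption on our part, and `is_boundary_tok`, `examples`, and `is_punct_like` are hypothetical names.

```python
def wrapped_by_punct(tokens, scope, cue_idx, is_boundary_tok):
    """Check whether a gold scope coincides exactly with the maximal window
    around the cue that is free of boundary-like tokens (punctuation or the
    [cue]/[/cue] special tokens). Continuous spans only, for simplicity."""
    left = cue_idx
    while left > 0 and not is_boundary_tok(tokens[left - 1]):
        left -= 1
    right = cue_idx
    while right < len(tokens) - 1 and not is_boundary_tok(tokens[right + 1]):
        right += 1
    return set(scope) == set(range(left, right + 1))

# Share of scope spans recoverable from punctuation/cue-token boundaries alone.
share = sum(wrapped_by_punct(t, s, c, is_punct_like) for t, s, c in examples) / len(examples)
```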
Reported in Table 5, the percentage of scope spans that are exactly wrapped by punctuation (considering special tokens also as punctuation) for BioScope dataset is 79.1%, and for SFU 84.0%. Such phenomenon could be due to both the annotation scheme, and the writing style. Such high percentage of "easy cases" could make the model biased to relying more on punctuation information, and yet deliver relatively high scores. In the mean time, the percentage for Sherlock is 47.4%, and the increase of percentage due to cue special token is far less than that of BioScope. The high percentage values also explain that the exact match F1's of BioScope and SFU are quite close to their token-level F1 scores. The Sherlock dataset, hence is considered as a more challenging dataset for this problem. While one would intuitively think of re-sampling the datasets to adjust the portion of easy and hard cases, Fancellu et al. (2017) show that the benefit of under-sampling is marginal on their LSTM-based models. We presume similar behaviour for our BERT-based model. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section Limitations (After 6. Conclusion) ✗ A2. Did you discuss any potential risks of your work? This is a task on information extraction from given text. There is no potential risk. ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1. Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 2. Related Work, 5. Experiments ✓ B1. Did you cite the creators of artifacts you used? 2. Related Work, 5. Experiments ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 2. Related Work, 5. Experiments ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 2. Related Work, 5. Experiments ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Among all three datasets used, two are anonymized by nature (English novel published more than hundred years ago, and scientific papers abstracts), one was anonymized by the creators. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 2. Related Work, 5. Experiments ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix C ## C ✓ **Did You Run Computational Experiments?** 5. Experiments ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 
Appendix A The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 5. Experiments, Appendix A, Appendix B ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 5. Experiments ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 5. Experiments, Appendix A D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
eo-etal-2023-towards
Towards Diverse and Effective Question-Answer Pair Generation from Children Storybooks
https://aclanthology.org/2023.findings-acl.380
Recent advances in QA pair generation (QAG) have raised interest in applying this technique to the educational field. However, the diversity of QA types remains a challenge despite its contributions to comprehensive learning and assessment of children. In this paper, we propose a QAG framework that enhances QA type diversity by producing different interrogative sentences and implicit/explicit answers. Our framework comprises a QFS-based answer generator, an iterative QA generator, and a relevancy-aware ranker. The two generators aim to expand the number of candidates while covering various types. The ranker trained on the in-context negative samples clarifies the top-N outputs based on the ranking score. Extensive evaluations and detailed analyses demonstrate that our approach outperforms previous state-of-the-art results by significant margins, achieving improved diversity and quality. Our task-oriented processes are consistent with real-world demand, which highlights our system{'}s high applicability.
# Towards Diverse And Effective Question-Answer Pair Generation From Children Storybooks Sugyeong Eo1∗, Hyeonseok Moon1∗, Jinsung Kim1∗, Yuna Hur1∗**, Jeongwook Kim**1∗ Songeun Lee2, Changwoo Chun2, Sungsoo Park2**, Heuiseok Lim**1† 1Dept. of Computer Science and Engineering, Korea University 2Hyundai Motor Group {djtnrud,limhseok}@korea.ac.kr [email protected] ## Abstract Recent advances in QA pair generation (QAG) have raised interest in applying this technique to the educational field. However, the diversity of QA types remains a challenge despite its contributions to comprehensive learning and assessment of children. In this paper, we propose a QAG framework that enhances QA type diversity by producing different interrogative sentences and implicit/explicit answers. Our framework comprises a QFS-based answer generator, an iterative QA generator, and a relevancy-aware ranker. The two generators aim to expand the number of candidates while covering various types. The ranker trained on the in-context negative samples clarifies the top-N outputs based on the ranking score. Extensive evaluations and detailed analyses demonstrate that our approach outperforms previous state-of-the-art results by significant margins, achieving improved diversity and quality. Our task-oriented processes are consistent with real-world demand, which highlights our system's high applicability. Our code is available at https://github.com/sugyeonge/ Towards-diverse-QAG.git. ## 1 Introduction Pedagogical studies over the years have demonstrated that asking questions about a given storybook nurtures insight and expands knowledge (Janusheva and Pejchinovska, 2009; Etemadzadeh et al., 2013; Shanmugavelu et al., 2020). Hence, posing questions becomes a fundamental part of education to engage children and promote literacy (Cotton, 1988; Ellis, 1993; Dillon, 2006). Along with the remarkable strides in natural language processing, recent studies have actively explored question-answer pair generation (QAG) systems that target education (Xu et al., 2022; Yao et al., 2022; Zhao et al., 2022). As ∗ Equal Contribution † Corresponding Author QAG is a labor-intensive manual process, it benefits from automated production methods. Furthermore, sustainable system update and utilization emphasize their high applicability (Le et al., 2014; Jerome et al., 2021). A challenge in educational QAG is the diversity of generated QA pairs as well as their quality (Lee et al., 2020; Zhang et al., 2021). Exploiting various QA types facilitates comprehensive learning, as each question inquires information specific to its type and stimulates different brain activities in the answering process (Guszak, 1967; Dillon, 2006). Controlling difficulty by adopting different types of questions or answers also enables a balanced assessment of the reading comprehension skills of children (Xu et al., 2022). Consequently, actively using questions with various interrogative words and answers reflecting both implicitness and explicitness is important. Yet, existing educational QAG studies have rarely considered diversity. Generated questions of existing models are extremely biased to the 'What' and 'Who' type questions. Answer extraction focuses on detecting spans within passages, resulting in an inability to create implicit answers that do not directly appear in the passage. To address the limitation, we propose an effective QAG framework that enhances diversity and quality. 
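Before the QGM training details that follow, the answer-candidate generation loop of Section 3.1 can be sketched as below. This is a minimal sketch under stated assumptions: `qfs_model.summarize`, `agm.generate`, and the separator string are hypothetical stand-ins, since the concrete interfaces are not fixed here.

```python
def generate_answer_candidates(passage_sentences, qfs_model, agm, sep=" </s> "):
    """Inference loop of Section 3.1: each sentence of the passage serves as the
    query for a query-focused summary, and the passage concatenated with that
    summary is fed to the answer generation model to produce one candidate."""
    passage = " ".join(passage_sentences)
    candidates = []
    for sentence in passage_sentences:
        qfs = qfs_model.summarize(passage, query=sentence)  # hypothetical QFS wrapper
        answer = agm.generate(passage + sep + qfs)          # hypothetical seq2seq AGM wrapper
        candidates.append(answer)
    return candidates  # A_init: one initial answer per passage sentence (Eq. 2)
```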
Our framework consists of a QFS-based answer generator, an iterative QA generator, and a relevancy-aware ranker. Specifically, the **QFS-based answer generator** adopts a query-focused summarization (QFS; Vig et al., 2022)-based answer generation model (AGM), with the aim of obtaining diverse and proper answer candidates. The **iterative QA generator** is designed to increase question-type variety by exploiting an interrogative word-indicated question generation model (QGM). We jointly execute this QGM with a question-answering model (QAM) to adjust the final answers. The **relevancy-aware ranker** inspects quality to determine the final top-N outputs among the generated candidates. To better identify pairs with high relevancy, the ranker is trained using in-context negative samples.

The experimental results indicate that our framework outperforms the existing state-of-the-art method by a large margin, with gains of up to 0.435→0.503 on MAP@N with Rouge-L F1 and 0.9077→0.9178 on MAP@N with BERTScore (Yao et al., 2022). Additional statistical and human evaluations with detailed analyses consistently show higher QA type diversity and quality compared to previous studies, which demonstrates the superiority of the proposed approach. The three modules of our framework are process-oriented, each providing its own outputs, which is in line with the real-world demand investigated by Wang et al. (2022). This highlights their high applicability to the education field in terms of human-AI collaboration.

We summarize our contributions as follows: (i) We propose a novel QAG framework that enhances the diversity of question and answer types while increasing quality. (ii) Extensive experiments show that our framework remarkably outperforms previous state-of-the-art results with high diversity and relevance. (iii) The task-oriented process is consistent with real-world demand, emphasizing the applicability of our framework in the education field.

## 2 Related Works

The question-answer pair generation (QAG) task aims to automatically generate QA pairs based on an input text. In the early days, rule-based QAG systems were dominant (Lindberg et al., 2013; Labutov et al., 2015). With the advent of the deep learning-based paradigm, Du et al. (2017) first demonstrated that a fully end-to-end QAG system can generate exceptionally good questions. Since then, diverse studies have been conducted and developed (Shakeri et al., 2020; Li et al., 2022; Zhou et al., 2019). Kang et al. (2019) adopt an interrogative-word-based approach to clarify the semantics of words from a passage, resulting in the generation of questions containing key information of the context. Scialom et al. (2019) attempt to generate questions in an answer-agnostic manner by adapting the self-attention mechanism of Transformers with a copying mechanism, placeholders, and contextual word embeddings. Dong et al. (2022) propose a QAG model for the closed-book setting without access to external knowledge by modeling the semantic relationships between questions and answers at a contextual level and measuring the answerability of the generated questions.

In recent times, several attempts have been made to automatically generate valid QA pairs for educational purposes. FairytaleQA, proposed by Xu et al. (2022), is a representative dataset in educational QAG, for which education experts manually generated QA pairs suitable for learning and assessing children's reading comprehension skills. With the dataset, Yao et al.
(2022) present an educational QAG system through a combination of three-step modules. Zhao et al. (2022) deal with high-cognitive-demand question generation based on three of the seven narrative elements in FairytaleQA. Dugan et al. (2022) generate QA pairs by providing summaries of given book chapters to fine-tuned T5 (Raffel et al., 2020) models. However, these studies rarely consider diversity when performing QAG. Leveraging diverse QA types is an important aspect of QAG, since using various interrogative words stimulates different parts of the brain, which facilitates children's comprehensive learning (Guszak, 1967; Dillon, 2006). Varying answer types is also a factor that contributes to a balanced assessment, as the difficulty can be controlled by adjusting whether the answer is revealed in the passage (Xu et al., 2022). This evidence emphasizes the importance of considering a variety of QA pairs from a broad perspective for effective reading comprehension (Kim, 2017).

## 3 Method

Our QAG framework comprises three task-oriented processes: a QFS-based answer generator, an iterative QA generator, and a relevancy-aware ranker. The main goal of the two generators is to expand the QA pair candidates to cover diverse question and answer types. The ranker aims to determine the final output by scoring the QA pair candidates. The overall QAG architecture of our framework is depicted in Figure 1.

![2_image_0.png](2_image_0.png)

## 3.1 QFS-Based Answer Generator

In the initial answer generation process, we employ query-focused summarization (QFS) to capture salient information related to a given sentence. After the QFS model generates a query-focused summary of a given passage by referring to the relevant key, the summary is fed into the generative answer generation model (AGM) to output implicit or explicit answers.

Let $Psg$ denote a passage consisting of $n$ sentences $p_1, \ldots, p_n$, and let the corresponding ground-truth (GT) QA pairs be $(Q^{gt}, A^{gt}) = \{(q^{gt}_j, a^{gt}_j)\}_{j=1}^{m}$. First, we generate a summary $qfs^{gt}_j = \mathrm{QFS}(Psg, q^{gt}_j)$ of $Psg$ focused on the query $q^{gt}_j$, using the pre-trained QFS model $\mathrm{QFS}$. We then train the AGM, denoted $\theta_{\mathrm{AGM}}$, with the concatenated input of $Psg$ and $qfs^{gt}_j$ in a sequence-to-sequence manner. The loss function for each $Psg$ is estimated as shown in Equation (1).

$$L_{\mathrm{AGM}}=-\sum_{(q_{j}^{gt},a_{j}^{gt})\in(Q^{gt},A^{gt})}\mathbf{E}_{\theta_{\mathrm{AGM}}}(a_{j}^{gt}\mid Psg,qfs_{j}^{gt})\tag{1}$$

In the inference phase, for each sentence $p_i$ in $Psg$, we generate $qfs_i = \mathrm{QFS}(Psg, p_i)$. The AGM then produces a single initial answer $a^{init}_i$ for the corresponding $qfs_i$. The resulting answer set $A^{init}$ contains $n$ answers, since an answer is generated for every sentence in the passage. $A^{init}$ is expressed as follows:

$$A^{init}=\{\theta_{\mathrm{AGM}}(Psg,qfs_{i})\mid p_{i}\in Psg\}\tag{2}$$

## 3.2 Iterative QA Generator

After the initial answer set $A^{init}$ is generated, the next step is to expand the QA pair candidates to reflect question-type diversity. To achieve this, we propose an interrogative word-indicated question generation model (QGM), denoted by $\theta_{\mathrm{QGM}}$, and a generative question-answering model (QAM), denoted by $\theta_{\mathrm{QAM}}$. The QGM and QAM are executed sequentially on each initial answer to generate a set of QA pair candidates. The following paragraphs describe the training and inference processes of each model.
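To make the interface between the two stages concrete, the following minimal sketch shows how the answer-generation stage of Section 3.1 could produce the initial candidate set $A^{init}$ of Equation (2) that the iterative QA generator consumes. The callables `qfs_summarize` and `agm_generate` stand in for the pre-trained QFS model and the fine-tuned BART AGM; their names and signatures, as well as the plain whitespace concatenation of inputs, are illustrative assumptions rather than the released interfaces.

```python
from typing import Callable, List

def generate_initial_answers(
    passage_sentences: List[str],
    qfs_summarize: Callable[[str, str], str],   # (passage, query) -> query-focused summary
    agm_generate: Callable[[str, str], str],    # (passage, summary) -> generated answer
) -> List[str]:
    """Equation (2): one initial answer per sentence of the passage.

    Each sentence p_i is used as the query for the QFS model; the resulting
    summary qfs_i is paired with the passage and fed to the AGM.
    """
    passage = " ".join(passage_sentences)
    initial_answers = []
    for sentence in passage_sentences:
        summary = qfs_summarize(passage, sentence)   # qfs_i = QFS(Psg, p_i)
        answer = agm_generate(passage, summary)      # a_i^init = AGM(Psg, qfs_i)
        initial_answers.append(answer)
    return initial_answers                           # A^init, with |A^init| = n
```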
**Interrogative Word-Indicated QGM** We train the QGM with the GT QA pair set to generate questions by referring to the answers and their passages. Including interrogative words in the training phase allows controllable question generation that follows the desired interrogative type during inference. We denote the interrogative word of each $q^{gt}_j$ in the GT QA pair set as $wh^{gt}_j$. In our setting, $wh$ is an element of the interrogative word set $WH=\{$Who, When, What, Where, Why, How$\}$. $\theta_{\mathrm{QGM}}$ is trained to generate the question $q^{gt}_j$ from the concatenated input of $Psg$, $a^{gt}_j$, and $wh^{gt}_j$. Training is performed in a sequence-to-sequence manner and is optimized using the following loss function:

$$L_{\mathrm{QGM}}=-\sum_{(q_{j}^{gt},a_{j}^{gt})\in(Q^{gt},A^{gt})}\mathbf{E}_{\theta_{\mathrm{QGM}}}(q_{j}^{gt}\mid Psg,a_{j}^{gt},wh_{j}^{gt})\tag{3}$$

In the inference phase, we prioritize diversity and generate questions by considering each interrogative word in $WH$ as an indicator. For each $a^{init}_i \in A^{init}$ generated in the first step and its corresponding passage $Psg$, $\theta_{\mathrm{QGM}}$ constructs the QA pair set $QA^1$, which can be expressed as follows:

$$QA^{1}=\{(\theta_{\mathrm{QGM}}(Psg,a_{i}^{init},wh),a_{i}^{init})\mid wh\in WH,\ a_{i}^{init}\in A^{init}\}\tag{4}$$

In this way, QA pair candidates with high relevance to the passage can be generated. Note that, because this process encourages the expansion of question types, not all generated questions are related to their initial answers.

**Answer Adjustment** To ensure relevancy within each QA pair, we reconstruct the answers through $\theta_{\mathrm{QAM}}$, which is trained with the set of GT QA pairs. This process helps avoid linking inappropriate questions to a given initial answer, such as asking a 'How' question for an answer aimed at a specific person. $\theta_{\mathrm{QAM}}$ is trained by optimizing the following loss function:

$$L_{\mathrm{QAM}}=-\sum_{(q_{j}^{gt},a_{j}^{gt})\in(Q^{gt},A^{gt})}\mathbf{E}_{\theta_{\mathrm{QAM}}}(a_{j}^{gt}\mid Psg,q_{j}^{gt})\tag{5}$$

In the subsequent inference phase, we adjust the answers to all questions in $QA^1$ through $\theta_{\mathrm{QAM}}$. The reconstructed QA pair set, denoted by $QA^2$, is expressed as Equation (6).

$$QA^{2}=\{(q_{i}^{j},\theta_{\mathrm{QAM}}(Psg,q_{i}^{j}))\mid(q_{i}^{j},a_{i}^{init})\in QA^{1}\}\tag{6}$$

$QA^2$ is the final QA pair candidate set, in which the relevance within each pair is supervised through the QAM while the diversity of question types is maintained.

## 3.3 Relevancy-Aware Ranker

With the relevancy-aware ranker, we select the top-N QA pairs that exhibit high relevance between the passage and the QA pair. The ranking model, denoted by $\theta_{Rank}$, produces a relevance score for each QA pair. To train $\theta_{Rank}$, we compose a contrastive training dataset by collecting in-context negative samples from the GT QA pair set. In the training data, the GT QA pairs are considered positive samples, and the other QA pairs within the same passage are considered negative samples. For a given passage $Psg$ and the corresponding GT QA pair set $(Q^{gt}, A^{gt})$, we construct the positive sample set $POS = \{(q^{gt}_i, a^{gt}_j) \mid q^{gt}_i \in Q^{gt}, a^{gt}_j \in A^{gt}, i = j\}$ and the negative sample set $NEG = \{(q^{gt}_i, a^{gt}_j) \mid q^{gt}_i \in Q^{gt}, a^{gt}_j \in A^{gt}, i \neq j\}$.¹ The QA pairs and their corresponding passages are then concatenated to construct the input sequences for training $\theta_{Rank}$. By feeding this input sequence, $\theta_{Rank}$ is trained to classify the binary labels representing positive and negative.

¹We consider QA pairs from different passages as easy negative cases and do not include them as negative samples in ranker training.
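The sketch below illustrates how the in-context positive and negative training examples of Section 3.3 could be assembled, together with one plausible way of turning the classifier's two scores into a single ranking score at inference time. The `[SEP]`-style concatenation format and the softmax-of-the-positive-class scoring are assumptions: the paper only states that the QA pair and its passage are concatenated and that the ranking refers to both scores.

```python
import math
from itertools import product
from typing import List, Tuple

def build_ranker_examples(passage: str, gt_pairs: List[Tuple[str, str]]):
    """In-context contrastive examples for the relevancy-aware ranker.

    A ground-truth (q_i, a_i) pair is a positive; every mismatched (q_i, a_j)
    with i != j from the *same* passage is an in-context negative.  Pairs from
    other passages are treated as easy negatives and are skipped.
    """
    examples = []
    for i, j in product(range(len(gt_pairs)), repeat=2):
        question, answer = gt_pairs[i][0], gt_pairs[j][1]
        label = 1 if i == j else 0
        # The concatenation format is an assumption; the paper only says that the
        # QA pair and its passage are concatenated into one input sequence.
        text = f"{question} [SEP] {answer} [SEP] {passage}"
        examples.append((text, label))
    return examples

def ranking_score(pos_logit: float, neg_logit: float) -> float:
    """Combine the positive/negative classification scores into one ranking score.

    Using the softmax probability of the positive class is one plausible
    instantiation of 'referring to both scores'.
    """
    return math.exp(pos_logit) / (math.exp(pos_logit) + math.exp(neg_logit))
```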
In the inference phase, $\theta_{Rank}$ returns the scores with which the input QA pair is classified as positive and as negative, respectively. We then rank each QA pair by referring to both scores. Through this process, the ranker prioritizes the selection of pairs that exhibit a high correlation between the question and the answer and high relevance to the corresponding passage.

**Overlap Mitigation** While the ranker enhances the relevance of the selected QA pairs, a duplication issue remains in which the top-ranked pairs take similar forms. To alleviate this issue, we compute a re-scaled ranking score that penalizes the lexical overlap of answers among the QA pair candidates. We select QA pairs sequentially in descending order of the scores computed by the ranking model. To account for lexical overlap at each selection step, we measure the Rouge-L score between the pair being selected and the previously selected QA pairs. The score $s$ of each pair produced by the ranking model is re-scaled as $s - \mathrm{Rouge} * \mathrm{abs}(s)$. Through this process, we down-scale the scores of QA pairs that exhibit high lexical overlap with previously selected pairs. This allows various types of QA pairs to be selected while still reflecting the scores calculated by the ranking model. The detailed procedure of the overlap mitigation algorithm is presented in Algorithm 1 in Appendix A.

## 4 Experiments

## 4.1 Experimental Setup

**Dataset** In our experiments, we leverage the FairytaleQA dataset (Xu et al., 2022). FairytaleQA is specifically designed for children's storybook learning and assessment, which corresponds to our educational purpose. In the data construction process, educational experts manually created QA pairs to ensure reliability and validity. The training, validation, and test sets contain 8,548 QA pairs from 232 books, 1,025 pairs from 23 books, and 1,007 pairs from 23 books, respectively. Instead of using the narrative elements (*i.e.*, character, setting, action, etc.) presented in the dataset, we diversify the questions based on interrogative words to induce expanded question types beyond these elements. We use the existing answer types, as they are mutually exclusive.

| Method | Top 10 (R-L) | Top 5 (R-L) | Top 3 (R-L) | Top 1 (R-L) | Top 10 (BS) | Top 5 (BS) | Top 3 (BS) | Top 1 (BS) |
|---|---|---|---|---|---|---|---|---|
| FQAG (Yao et al., 2022) | 0.440 / 0.435 | 0.375 / 0.374 | 0.333 / 0.324 | 0.238 / 0.228 | 0.9077 / 0.9077 | 0.8990 / 0.8997 | 0.8929 / 0.8922 | 0.8768 / 0.8776 |
| SQG (Dugan et al., 2022) | 0.460 / 0.455 | 0.392 / 0.388 | 0.344 / 0.337 | 0.234 / 0.242 | 0.9056 / 0.9062 | 0.8953 / 0.8955 | 0.8876 / 0.8878 | 0.8707 / 0.8723 |
| Ours | 0.500 / 0.503 | 0.426 / 0.429 | 0.369 / 0.372 | 0.247 / 0.254 | 0.9156 / 0.9178 | 0.9046 / 0.9068 | 0.8956 / 0.8977 | 0.8752 / 0.8783 |

Table 1: The main experimental results for our QAG framework. We report the MAP@N score with Rouge-L F1 (R-L) and BERTScore F1 (BS) for each model. In each cell, the value on the left is for the validation split and the value on the right is for the test split.
| Method | Diversity-Q ↓ | Diversity-A ↓ | Quality-E ↓ | Relevancy ↑ | Acceptability ↑ | Usability ↑ | Readability ↑ | Difficulty ↑ |
|---|---|---|---|---|---|---|---|---|
| FQAG (Yao et al., 2022) | 3.03 | 3.06 | 2.66 | 2.65 | 2.14 | 1.74 | 2.64 | 1.11 |
| SQG (Dugan et al., 2022) | 2.96 | 3.03 | 3.30 | 2.44 | 1.87 | 1.34 | 2.55 | 1.36 |
| Ours | 2.35 | 2.18 | 2.35 | 2.69 | 2.22 | 1.9 | 2.35 | 1.98 |
| GT | 1.65 | 1.71 | 1.68 | 2.97 | 2.65 | 2.50 | 2.80 | 1.95 |

Table 2: Human evaluation results for the QA pairs generated by the QAG systems on eight criteria. The first three criteria (*global*) report the human ranking results for the three QAG systems and the GT; the remaining five criteria (*local*) report the human scoring results for each QAG system and the GT on a 0-3 scale. Note that the scores from the two settings are not comparable with each other.

**Models** All models comprising our framework are trained on the FairytaleQA dataset. For the QFS model, we produce summaries using the model checkpoints provided by Vig et al. (2021). To train the AGM, QGM, and QAM, we exploit the pre-trained BART-large (Lewis et al., 2020) model and the framework provided by Fairseq.² For the hyper-parameters, we adopt 2048 max tokens, early stopping of 10, and a polynomial decay scheduler. For the learning rate and dropout, we set 3e-05 and 0.1 for the AGM and QGM, and 2e-05 and 0.2 for the QAM, respectively. All models are trained on 2 RTX8000 GPUs. We use the RoBERTa-base (Liu et al., 2019) model and the Huggingface³ framework for our ranking model; we train it for five epochs with a fixed learning rate of 5e-07, and a single GPU is used for training the ranker.

²https://github.com/facebookresearch/fairseq.git
³https://github.com/huggingface/transformers

## 4.2 Evaluation Metrics

For the evaluation metric, we adopt the MAP@N score as the primary metric, following Yao et al. (2022). MAP@N with Rouge-L is the average, over the GT pairs, of the maximum Rouge-L score obtained by comparing each GT pair with the top-N generated QA pairs; the question and answer of each QA pair are concatenated in this process. However, when MAP@N is measured with the Rouge-L precision score as in Yao et al. (2022), short outputs are advantaged, because precision measures the longest overlap relative to the length of the candidate. Instead of precision, we therefore select the F1 score for a more accurate measurement. Since metrics based on n-gram overlap do not guarantee quality (Zhang and Bansal, 2019), we additionally adopt BERTScore for MAP@N to evaluate semantic equivalence based on similarity scores (Zhang et al., 2019).

## 4.3 Baselines

We adopt two educational QAG systems as baselines.

**FQAG** *FQAG* (Yao et al., 2022) is the state-of-the-art study on FairytaleQA. They perform QAG through a three-step pipeline comprising answer generation, question generation, and ranking modules. For re-implementation, we load the provided checkpoints to generate QA pairs for the validation and test sets of FairytaleQA.

**SQG** *SQG* (Dugan et al., 2022) is a recently published work on educational QAG that utilizes summaries of the given passages. QA pairs are generated by leveraging three models: an answer generation, a question generation, and a question answering model. In this case, to match the number of top-N outputs, we select QA pairs based on the order in which they are generated or increase the number of outputs by adjusting the beam size.
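As a concrete reading of the MAP@N metric described in Section 4.2, the sketch below computes MAP@N with a self-contained Rouge-L F1: for each ground-truth pair, the best score against the top-N generated pairs (question and answer concatenated) is taken, and the scores are averaged. This is our interpretation for illustration only; the official implementation of Yao et al. (2022) may differ in details such as tokenization, normalization, and how scores are aggregated across passages.

```python
def lcs_len(x, y):
    """Length of the longest common subsequence of token lists x and y."""
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, xi in enumerate(x, 1):
        for j, yj in enumerate(y, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if xi == yj else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(x)][len(y)]

def rouge_l_f1(reference: str, candidate: str) -> float:
    """Token-level Rouge-L F1 based on the longest common subsequence."""
    ref, cand = reference.lower().split(), candidate.lower().split()
    if not ref or not cand:
        return 0.0
    lcs = lcs_len(ref, cand)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

def map_at_n(gt_pairs, generated_pairs, n):
    """MAP@N: for each ground-truth QA pair, take the best Rouge-L F1 against the
    top-N generated pairs (question and answer concatenated), then average."""
    top_n = generated_pairs[:n]
    scores = []
    for gt_q, gt_a in gt_pairs:
        reference = f"{gt_q} {gt_a}"
        best = max(rouge_l_f1(reference, f"{q} {a}") for q, a in top_n)
        scores.append(best)
    return sum(scores) / len(scores)
```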
## 5 Results And Analysis

## 5.1 Automated Evaluation

**Result on MAP@N with Rouge-L** Table 1 shows the main results of MAP@N with Rouge-L F1 scores for the QAG systems. Our system significantly outperforms the baseline models in all splits and all top-N settings. Especially on the test set, we outperform *FQAG* by +0.068 in MAP@10, +0.055 in MAP@5, and +0.048 in MAP@3, which is a significant gain. SQG achieves better results than *FQAG* but still does not outperform ours. Compared to SQG, our system shows improvements in all top-N results, most notably from 0.455 to 0.503 (+0.048) in MAP@10. The result implies that generating diverse QA pair candidates and properly establishing plausible pairs is one contributing factor to the performance improvement.

**Result on MAP@N with BERTScore** We also measure MAP@N using BERTScore to evaluate the semantic equivalence between the GT and generated QA pairs; namely, we use the BERTScore F1 value instead of the Rouge-L F1 score when measuring MAP@N. Our system achieves higher performance in all settings except for the MAP@1 validation result. In the best case on the test set (MAP@10), FQAG and SQG score 0.9077 and 0.9062, respectively, while we record 0.9178, outperforming them by +0.0101 and +0.0116. The tendency for our performance to be the highest is consistent with the MAP@N with Rouge-L F1 results. However, we observe that *FQAG* reports higher BERTScore performance than SQG. Although the performance gap is marginal, this outcome suggests that the QA pairs generated by *FQAG* are semantically better than those of SQG.

## 5.2 Statistical Evaluation

To evaluate the question- and answer-type diversity of the generated QA pairs, we perform a statistical evaluation. The distributions over interrogative types and answer types are presented in Figure 2.

![5_image_0.png](5_image_0.png)

The question types produced by our system are more balanced than those of the other systems. Unlike the other models, which mostly create 'what' and 'who' questions, our QAG system is well balanced with 'why' and 'how' questions that require reasoning. This suggests that being asked different types of questions can encourage children to think from various perspectives. For answer types, 32.06% of our system's answers are implicit, indicating that implicit answers are also well generated, which allows our model to support balanced assessments of children. Conversely, the other models use answer span extraction, resulting in 0% implicit answers.

## 5.3 Human Evaluation

We further conduct a human evaluation for a detailed inspection. For each paragraph, three human evaluators, who hold degrees in education or are educational domain experts, rate three QA pairs generated by each of the three QAG systems and the GT. The human evaluation is performed on a total of 20 passages, and we select three QA pairs sequentially for the unscored GT and SQG. Due to space constraints, further human evaluation details are described in Appendix C. The following criteria are used for the human evaluation. In the *global* setting, we instruct the evaluators to rank the systems as a whole, and in the *local* setting, to judge how many of the three QA pairs generated by each system satisfy the given property.

(*global* setting) **Diversity-Q**: This ranks the generation results of the GT and the three QAG systems in terms of question diversity. **Diversity-A**: This ranks the generation results of the GT and the three QAG systems in terms of answer diversity. **Quality-E**: This ranks the overall quality of the entire systems.
(*local* setting) **Relevancy**: This evaluates the relevance between a passage and a QA pair; if either the question or the answer is not relevant, the pair is judged irrelevant. **Acceptability**: This evaluates whether a question and its corresponding answer are correctly generated. Relevance to the passage is not considered, and if either of them is awkward, the pair is considered incorrect. **Usability**: This evaluates whether the generated QA pairs can be used for educational purposes. **Readability**: This evaluates whether the generated QA pairs are grammatically correct. **Difficulty**: This evaluates whether the generated QA pairs are sufficiently challenging rather than excessively easy.

Table 2 presents the results of the human evaluation. Our approach achieves remarkable performance in terms of question and answer diversity, with average rankings of 2.35 and 2.18, respectively. In the global setting, we observe that Quality-E is 2.66 for *FQAG* and 3.30 for SQG, while our system outperforms them with a ranking of 2.35 (lower is better). These results demonstrate that, in a direct ranking-based comparison with the other systems, our QAG is both quantitatively and qualitatively superior while enhancing diversity.

The results of the local setting indicate that we outperform both *FQAG* and SQG on every criterion except readability. For the QA pairs we generate, the relevance between the passages and the generated QA pairs (2.69), the acceptability of the answers to their questions (2.22), and the usability for educational purposes (1.9) are the highest among the compared systems. We even observe a slight gain over the GT in the case of difficulty. However, for readability, our result is 2.35, which is lower than the 2.64 and 2.55 of the existing models. We speculate that the average length of our generated QA pairs is longer, resulting in a small trade-off with difficulty. From these results, we conclude that our generated QA pairs are effective in ensuring not only quality but also diversity.

## 5.4 Ablation Study

We perform ablation studies to further analyze the contribution of each process in our framework to the overall performance. The results are shown in Table 3.

| Method | Top 10 | Top 5 | Top 3 | Top 1 |
|---|---|---|---|---|
| Ours | 0.503 | 0.429 | 0.372 | 0.254 |
| w/o QFS | 0.472 | 0.401 | 0.348 | 0.248 |
| w/o Iteration | 0.463 | 0.427 | 0.378 | 0.253 |
| w/o Contrastive learning | 0.438 | 0.375 | 0.326 | 0.261 |

Table 3: Ablation study results (MAP@N with Rouge-L F1).

**Impact of Query-Focused Summarization** Instead of the AGM, we generate answers using the noun phrase and noun entity extraction method employed in *FQAG*. When the AGM is replaced, performance decreases by 0.031 in MAP@10. This indicates that introducing a summary containing condensed information helps generate more plausible answers. We also find that formulating the AGM in a generative manner yields higher performance because it is capable of generating implicit answers.

**Impact of Iterative QA Pair Generation** To investigate the most effective number of iterations, we reduce the number of iterations in the iterative QA generator; namely, we eliminate the QAM step and execute only the QGM. The experimental results show marginal changes in most cases, except for the top-10 results. We attribute this performance drop in the top 10 to removing the step that adjusts the interrogative word-indicated questions to correct answers through the QAM. An experiment on increasing the number of iterations is presented in Section 6.
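For reference, the span-extraction baseline used in the 'w/o QFS' ablation above can be approximated with off-the-shelf NLP tooling. The sketch below uses spaCy noun chunks and named entities as answer candidates; the specific library, the model name `en_core_web_sm`, and the extraction details are illustrative assumptions and may differ from the exact pipeline used by *FQAG*.

```python
import spacy

# Assumes the small English pipeline is installed: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def extract_answer_candidates(passage: str):
    """Span-based answer candidates in the style of the 'w/o QFS' ablation:
    noun chunks and named entities extracted directly from the passage.
    Such candidates are always explicit spans, so no implicit answers can arise."""
    doc = nlp(passage)
    candidates = {chunk.text for chunk in doc.noun_chunks}
    candidates.update(ent.text for ent in doc.ents)
    return sorted(candidates)
```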
**Impact of Contrastive Learning** To observe the role of our relevancy-aware ranker, we eliminate our ranker and utilize the DistilBERT (Sanh et al., 2019) ranking model of *FQAG*. As a result, the overall performance degrades considerably, e.g., 0.503→0.438 in MAP@10 and 0.429→0.375 in MAP@5. We interpret this as evidence that constructing the training examples for contrastive learning from in-context negative samples boosts the overall performance. However, our performance when changing the ranking model to that of *FQAG* can also be compared with the *FQAG* test results in Table 1 (MAP@10: 0.435, MAP@5: 0.374, MAP@3: 0.324, MAP@1: 0.228). This comparison unifies the ranking model and varies only the QA pair generation part. The results show an insignificant difference, with the *FQAG* test results in Table 1 performing lower than the contrastive-learning ablation results. We analyze that our method generates more QA pair candidates with the goal of increasing diversity, but the DistilBERT ranking model does not rank them well.

## 6 Case Study

**Performance of Multiple AGM** We investigate various methods of adding a clue that can serve as a key element when constructing the AGM inputs. The clue is fed into the BART-large-based AGM together with the given passage, and the answer is predicted. *A* is the baseline, which learns to directly generate answers for given passages. In *DS*, the one to three sentences in the passage closest to the question are retrieved. In *Ext-Ret*, a phrase or sentence closest to the question is retrieved from the external resource NarrativeQA (Kočiský et al., 2018).

| Method | Rouge-L | BLEU | BERTScore |
|---|---|---|---|
| A | 0.216 | 8.31 | 0.875 |
| DS1 | 0.232 | 10.21 | 0.879 |
| DS2 | 0.244 | 9.65 | 0.874 |
| DS3 | 0.256 | 10.55 | 0.878 |
| Ext-Ret (Sent) | 0.283 | 14.86 | 0.89 |
| Ext-Ret (Phrase) | 0.304 | 16.74 | 0.896 |
| QFS | **0.362** | **23.21** | **0.903** |

Table 4: AGM performance for the different clue-construction methods.

Table 4 presents the experimental results: QFS outperforms all the other methodologies. From this result, we judge that the summary, in which the information is compressed and regenerated, contributes more to the final answer generation.

**Performance on Adding Iterations** We observe the performance fluctuation when increasing the number of iterations of the iterative QA generator. We create QA pairs by recursively executing the QGM and QAM on the QA pairs generated by our main framework. The experimental results in Table 5 show that performance degrades as the number of iterations increases. We judge that no additional performance improvement would be obtained even if the iterations were repeated further.

| Method | Top 10 | Top 5 | Top 3 | Top 1 |
|---|---|---|---|---|
| Ours | 0.503 | 0.429 | 0.372 | 0.254 |
| Ours + 1 iteration | 0.506 | 0.423 | 0.361 | 0.246 |
| Ours + 2 iterations | 0.502 | 0.419 | 0.362 | 0.243 |

Table 5: Performance when adding iterations (MAP@N with Rouge-L F1).

**Performance on Overlap Mitigation Methods** In this section, we investigate the effect of the overlap mitigation techniques.
**EM** is the baseline, which retains only the highest-scored QA pair for each unique criterion. The experiment is designed to vary two factors: *Criterion* divides the criterion of overlap measurement into two options, the question or the answer, and *Overlap Metric* divides the overlap measurement metric into BLEU and Rouge. The experimental results are presented in Table 6.

| Criterion | Overlap Metric | Top 10 | Top 5 | Top 3 | Top 1 |
|---|---|---|---|---|---|
| Answer | EM | 0.491 | 0.414 | 0.357 | 0.254 |
| Answer | BLEU | 0.497 | 0.431 | 0.369 | 0.254 |
| Answer | Rouge-L | 0.503 | 0.429 | 0.372 | 0.254 |
| Question | EM | 0.483 | 0.404 | 0.354 | 0.254 |
| Question | BLEU | 0.491 | 0.421 | 0.365 | 0.254 |
| Question | Rouge-L | 0.493 | 0.431 | 0.366 | 0.254 |

Table 6: Results of the overlap mitigation methods (MAP@N with Rouge-L F1).

The results show that re-scaling the scores of the ranking model with the proposed overlap mitigation method yields higher performance than simply removing overlaps based on exact matching. The overall performance is also better when the overlap metric is set to Rouge rather than BLEU. This demonstrates that the output of the ranking model can be utilized more effectively by applying the proposed overlap mitigation method. Notably, the overlap mitigation method that uses the answer as the criterion records higher performance than the one that uses the question.

## 7 Conclusion

In this paper, we proposed a QAG framework for educational purposes featuring diverse and valid question and answer types. Our framework is structured as three task-oriented processes, with a particular emphasis on expanding diverse and valid types of QA pair candidates in the generators and selecting high-quality QA pairs in the ranker. We conducted extensive evaluations of the generated QA pairs, including quantitative, qualitative, and statistical evaluations with detailed analyses, and observed that our system achieves remarkable performance. Our framework has the potential to promote various cognitive activities in children's learning by providing diverse and effective QA pairs for educational purposes. As our modularized, task-oriented framework is tailored to real-world demand, we further expect it to support the collaborative use of humans and AI.

## Limitations

We used only the pre-trained BART-large model when training each model within the QAG framework; comparative experiments using several sequence-to-sequence language models would be good future work. Also, we only used six interrogative words and did not consider 'whose' and 'whom', treating them as variants of 'who'; generating eight interrogative words including 'whose' and 'whom' would be a reasonable extension. Lastly, to create a robust ranker, it is best to have a dataset that contains curated positive and negative samples. Since manual data generation is time-consuming, we utilize in-context negative samples as an alternative; with a dataset built specifically for ranker training, much better performance could be achieved.

## Ethics Statement

**Deployment** Our approach exploits the parametric knowledge of a pre-trained model for language generation, which runs the risk of reflecting biases in the training data. This is a well-known threat in tasks that use pre-trained models, but we must be especially careful about social impact since our model aims to create educational QA pairs. Therefore, we plan to require model users to include a human review process for the generated QA pairs when they are used for educational purposes.

**Human evaluation** We paid the human workers more than the legal minimum wage. We also allowed them to work remotely at any time they wanted and to rest whenever they felt fatigued during work. Their B.A. degree certificates were discarded immediately upon confirmation to prevent personal information leakage. We set up a task force so that workers could contact us directly and receive quick responses to any questions or concerns.

## Acknowledgments

We thank the anonymous reviewers for their valuable feedback and constructive suggestions.
This work was supported by Hyundai Motor Company and Kia. This research was supported by the Ministry of Science and ICT (MSIT), Korea, under the Information Technology Research Center (ITRC) support program (IITP-2023-2018-001405) supervised by the Institute for Information & Communications Technology Planning & Evaluation (IITP) and this work was supported by IITP grant funded by MSIT (No. 2020-0-00368, A Neural-Symbolic Model for Knowledge Acquisition and Inference Techniques) and this research was supported by MSIT, under the ICT Creative Consilience program(IITP-2023-2020-0-01819) supervised by the IITP. ## References Kathleen Cotton. 1988. Classroom questioning. *School* improvement research series, 5:1–22. James T Dillon. 2006. Effect of questions in education and other enterprises. In *Rethinking schooling*, pages 145–174. Routledge. Xiangjue Dong, Jiaying Lu, Jianling Wang, and James Caverlee. 2022. Closed-book question generation via contrastive learning. arXiv preprint arXiv:2210.06781. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In *Proceedings of the 55th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1342–1352. Liam Dugan, Eleni Miltsakaki, Shriyash Upadhyay, Etan Ginsberg, Hannah Gonzalez, DaHyeon Choi, Chuning Yuan, and Chris Callison-Burch. 2022. A feasibility study of answer-agnostic question generation for education. In *Findings of the Association for* Computational Linguistics: ACL 2022, pages 1919– 1926, Dublin, Ireland. Association for Computational Linguistics. Kathleen Ellis. 1993. Teacher questioning behavior and student learning: What research says to teachers. Atika Etemadzadeh, Samira Seifi, and Hamid Roohbakhsh Far. 2013. The role of questioning technique in developing thinking skills: The ongoing effect on writing skill. *Procedia-Social* and Behavioral Sciences, 70:1024–1031. Frank J Guszak. 1967. Teacher questioning and reading. The reading teacher, 21(3):227–234. Violeta Janusheva and Milena Pejchinovska. 2009. Questions posing importance and role in the teaching process. Bill Jerome, Rachel Van Campenhout, and Benny G Johnson. 2021. Automatic question generation and the smartstart application. In *Proceedings of the* Eighth ACM Conference on Learning@ Scale, pages 365–366. Junmo Kang, Haritz Puerto San Roman, and SungHyon Myaeng. 2019. Let me know what to ask: Interrogative-word-aware question generation. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 163–171. Young-Suk Grace Kim. 2017. Why the simple view of reading is not simplistic: Unpacking component skills of reading using a direct and indirect effect model of reading (dier). *Scientific Studies of Reading*, 21(4):310–333. Tomáš Kocisk ˇ y, Jonathan Schwarz, Phil Blunsom, Chris ` Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. *Transactions of the Association for Computational Linguistics*, 6:317–328. Klaus Krippendorff. 2011. Computing krippendorff's alpha-reliability. Igor Labutov, Sumit Basu, and Lucy Vanderwende. 2015. Deep questions without deep understanding. In *Proceedings of the 53rd Annual Meeting of the* Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 889–898. Nguyen-Thinh Le, Tomoko Kojiri, and Niels Pinkwart. 2014. 
Automatic question generation for educational applications–the state of art. *Advanced computational methods for knowledge engineering*, pages 325–338. Dong Bok Lee, Seanie Lee, Woo Tae Jeong, Donghwan Kim, and Sung Ju Hwang. 2020. Generating diverse and consistent QA pairs from contexts with information-maximizing hierarchical conditional VAEs. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 208–224, Online. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880. Yunji Li, Sujian Li, and Xing Shi. 2022. Consecutive question generation via dynamic multitask learning. arXiv preprint arXiv:2211.08850. David Lindberg, Fred Popowich, John Nesbit, and Phil Winne. 2013. Generating natural language questions to support learning on-line. In Proceedings of the 14th European Workshop on Natural Language Generation, pages 105–114. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. Thomas Scialom, Benjamin Piwowarski, and Jacopo Staiano. 2019. Self-attention architectures for answer-agnostic neural question generation. In *Proceedings of the 57th annual meeting of the Association for Computational Linguistics*, pages 6027– 6032. Siamak Shakeri, Cicero Nogueira dos Santos, Henghui Zhu, Patrick Ng, Feng Nan, Zhiguo Wang, Ramesh Nallapati, and Bing Xiang. 2020. End-to-end synthetic data generation for domain adaptation of question answering systems. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5445–5460, Online. Association for Computational Linguistics. Ganesan Shanmugavelu, Khairi Ariffin, Manimaran Vadivelu, Zulkufli Mahayudin, and Malar Arasi RK Sundaram. 2020. Questioning techniques and teachers' role in the classroom. Shanlax International Journal of Education, 8(4):45–49. Jesse Vig, Alexander Fabbri, Wojciech Kryscinski, Chien-Sheng Wu, and Wenhao Liu. 2022. Exploring neural models for query-focused summarization. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1455–1468, Seattle, United States. Association for Computational Linguistics. Jesse Vig, Alexander R Fabbri, Wojciech Krysci ´ nski, ´ Chien-Sheng Wu, and Wenhao Liu. 2021. Exploring neural models for query-focused summarization. arXiv preprint arXiv:2112.07637. Xu Wang, Simin Fan, Jessica Houghton, and Lu Wang. 2022. Towards process-oriented, modular, and versatile question generation that meets educational needs. arXiv preprint arXiv:2205.00355. 
Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang Wu, Zheng Zhang, Toby Li, Nora Bradford, Branda Sun, Tran Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, and Mark Warschauer. 2022. Fantastic questions and where to find them: FairytaleQA - an authentic dataset for narrative comprehension. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 447–460, Dublin, Ireland. Association for Computational Linguistics. Bingsheng Yao, Dakuo Wang, Tongshuang Wu, Zheng Zhang, Toby Li, Mo Yu, and Ying Xu. 2022. It is AI's turn to ask humans a question: Questionanswer pair generation for children's story books. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 731–744, Dublin, Ireland. Association for Computational Linguistics. Ruqing Zhang, Jiafeng Guo, Lu Chen, Yixing Fan, and Xueqi Cheng. 2021. A review on question generation from natural language text. ACM Transactions on Information Systems (TOIS), 40(1):1–43. Shiyue Zhang and Mohit Bansal. 2019. Addressing semantic drift in question generation for semisupervised question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2495–2509. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675. Zhenjie Zhao, Yufang Hou, Dakuo Wang, Mo Yu, Chengzhong Liu, and Xiaojuan Ma. 2022. Educational question generation of children storybooks via question type distribution learning and event-centric summarization. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5073–5085, Dublin, Ireland. Association for Computational Linguistics. Wenjie Zhou, Minghua Zhang, and Yunfang Wu. 2019. Multi-task learning with language modeling for question generation. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP), pages 3394–3399, Hong Kong, China. Association for Computational Linguistics. ## A Overlap Mitigation Details The detailed process for overlap mitigation is as follows. We first define **Criterion** and Metric. **Criterion** means a sentence to be subject to overlap checking among questions or answers, and **Metric** means an evaluation metric to measure overlap. In our paper, we suggest using Rouge-L, or BLEU, as Metric. In our main experiments, we choose criterioni as ai (*i.e.* answer in QA pair), and Metric as ROUGE-L. In this process, Metric returns overlap score between 0 and 1. In estimating overlap, we lemmatize all the sentences and remove all the stop words in every QA pair. 
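As a complement to the description above, the following Python sketch mirrors the reranking procedure formalized in Algorithm 1 below. It is a simplified rendering: the overlap measure is passed in as a callable (Rouge-L or BLEU in the paper), the lemmatization and stop-word removal applied before measuring overlap are omitted, and the variable names are illustrative rather than taken from the released code.

```python
from typing import Callable, List, Tuple

def rerank_with_overlap_mitigation(
    qa_candidates: List[Tuple[str, str]],      # generated (question, answer) pairs
    base_scores: List[float],                  # ranking-model scores, aligned with qa_candidates
    k: int,                                    # number of pairs to select
    overlap_fn: Callable[[str, str], float],   # e.g. Rouge-L or BLEU, returning a value in [0, 1]
    use_answer_as_criterion: bool = True,
) -> List[Tuple[str, str]]:
    """Greedy top-k selection following Algorithm 1: at each step the score of a
    remaining pair is down-scaled by its maximum overlap with already-selected
    criteria, s* = s - overlap * |s|, and the pair with the highest re-scaled
    score is taken."""
    remaining = list(zip(qa_candidates, base_scores))
    selected, selected_criteria = [], []
    while remaining and len(selected) < k:
        rescored = []
        for (q, a), s in remaining:
            criterion = a if use_answer_as_criterion else q
            overlap = max((overlap_fn(criterion, prev) for prev in selected_criteria),
                          default=0.0)
            rescored.append(((q, a), s, s - overlap * abs(s)))
        (q, a), s, _ = max(rescored, key=lambda item: item[2])
        selected.append((q, a))
        selected_criteria.append(a if use_answer_as_criterion else q)
        remaining = [((q2, a2), s2) for (q2, a2), s2 in remaining if (q2, a2) != (q, a)]
    return selected
```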
## Algorithm 1 Overlapping-Based Reranking

```
Given:     passage Psg
Input:     generated QA pairs QA_gen = {(q_i, a_i)}_{i=1}^{N}
Parameter: int k
Define:    score_i <- RankingModule(q_i, a_i, Psg)
Choose:    criterion_i <- q_i or a_i
Choose:    Metric <- ROUGE-L or BLEU

 1: output <- [], comparing <- []
 2: while len(output) <= k do
 3:     for (q_j, a_j) in QA_gen do
 4:         if comparing is not EMPTY then
 5:             overlaps_j = [Metric(criterion_j, item) for item in comparing]
 6:             overlap_j = max(overlaps_j)
 7:             score*_j <- score_j - overlap_j * |score_j|
 8:         else
 9:             score*_j <- score_j
10:         end if
11:     end for
12:     (q_i, a_i) <- pick from QA_gen with the highest score*_j
13:     output <- append (q_i, a_i)
14:     comparing <- append criterion_i
15:     QA_gen <- pop (q_i, a_i)
16: end while
17: return output
```

## B Implementation Performance on BART QGM and QAM

| Model | Rouge-L | BLEU | BERTScore |
|---|---|---|---|
| QGM (ours) | 0.600 | 28.50 | 0.934 |
| QAM (ours) | 0.542 | 43.29 | 0.936 |

Table 7: Performance of the BART-based QGM and QAM models.

In the iterative QA generation process, the QGM and the QAM generate QA pairs, thereby obtaining a variety of QA pair candidates. We leverage the FairytaleQA dataset for model training, and the results are shown in Table 7. Our models utilize a BART-large model identical to that of Yao et al. (2022). Although an apples-to-apples comparison of the QGM is impossible, since ours is trained with an interrogative word indicator, our QAM performs slightly better than the result of *FQAG* (0.536 Rouge-L).

## C Human Evaluation Details

In our human evaluation process, all evaluators are degree holders in education or educational domain experts. We provide an evaluation sheet in the form of an API, and evaluators check the part corresponding to each question or write a rank order. Figure 3 describes the human evaluation script. In the local setting, we instruct evaluators to select how many of the three QA pairs produced by each
He was not only as handsome as his ancestress was beautiful, but he was also very strong and brave, and was famous for being the greatest hunter in the land. Because of his matchless skill as a hunter, he was called "yama - sachi - hiko" or "the happy hunter of the mountains." Ours Q) What was yama-sachi-hiko called? A) The happy hunter of the mountains. Q) Why was he called "the happy hunter of the mountains"? A) He was matchless in his skill as a hunter. Q) What was special about hohodemi? A) He was not only as handsome as his ancestress was beautiful, but he was also very strong and brave. FQAG Q) What was yama-sachi? A) The happy hunter of the mountains. Q) What was hohodemi called? A) Yama-sachi-hiko. Q) Who was the greatest hunter in japan? A) The fourth mikoto. SQG Q) What was the name of the fourth mikoto? A) Hohodemi Q) Hohodemi was a descendant of what goddess? A) Amaterasu Q) Hohodemi was the fourth mikoto from what goddess? A) Sun GT Q) Who governed japan long ago? A) Hohodemi. Q) What was special about hohodemi? A) Handsome. Q) Why was hohodemi called yama-sachi-hiko? A) His matchless skill as a hunter. Passage Then the dragon king interviewed the doctor and blamed him for not curing the queen. The doctor was alarmed at rin jin's evident displeasure, and excused his want of skill by saying that although he knew the right kind of medicine to give the invalid, it was impossible to find it in the sea. "Do you mean to tell me that you can't get the medicine here?" asked the dragon king. "It is just as you say!" said the doctor. "Tell me what it is you want for the queen?" demanded rin jin. "I want the liver of a live monkey!" answered the doctor. "The liver of a live monkey! Of course that will be most difficult to get," said the king. "If we could only get that for the queen, her majesty would soon recover," said the doctor. "Very well, that decides it; we must get it somehow or other. But where are we most likely to find a monkey?" asked the king. Ours Q) Where did the doctor say it was impossible to find the right kind of medicine to give the invalid? A) In the sea. Q) What happened after the dragon king interviewed the doctor and blamed him for not curing the queen? A) The doctor was alarmed at rin jin's evident displeasure, and excused his want of skill by saying that although he knew the right kind Q) Who did the doctor think would recover from the liver of a live monkey? A) Her majesty. FQAG Q) Who blamed the doctor for not curing the queen? A) The dragon king. Q) What did rin jinn want for the queen? A) A live monkey. Q) Who did the dragon king ask where they would find a monkey's liver? A) The king. SQG Q) Who interviewed the doctor and blamed him for not curing the queen? A) Dragon king Q) Who blamed rin jin for not curing the queen? A) The dragon king Q) Who did the dragon king blame for not curing the queen? A) Rin jin GT Q) Who was blamed for not curing the queen? A) The doctor. Q) Why was the liver of a live monkey difficult to get? A) They are under water. Q) How did the doctor feel when he was blamed by the king? A) Alarmed. Passage After a time they all came to a town where a king reigned whose daughter was so serious and solemn that no one could ever manage to make her laugh. So the king had decreed that whoever should succeed in making her laugh should marry her. 
When dullhead heard this he marched before the princess with his goose and its appendages, and as soon as she saw these seven people continually running after each other she burst out laughing, and could not stop herself. Then dullhead claimed her as his bride, but the king, who did not much fancy him as a son-in-law, made all sorts of objections, and told him he must first find a man who could drink up a whole cellarful of wine. Dullhead bethought him of the little grey man, who could, he felt sure, help him; so he went off to the forest, and on the very spot where he had cut down the tree he saw a man sitting with a most dismal expression of face. Ours Q) Where did dullhead see the man sitting with a dismal expression of face? A) The very spot where he cut down the tree. Q) What happened when the princess saw the seven people continually running after each other? A) She burst out laughing and could not stop herself. Q) Who did the king want dullhead to find before he could marry the princess? A) A man who could drink up a whole cellarful of wine. FQAG Q) Who reigned in a town where no one could ever manage to make the daughter laugh? A) A king. Q) Who could help dullhead? A) The little grey man. Q) Where did dullhead go to find the little grey man? A) The forest. SQG Q) What did the king decree that whoever succeeded in making her laugh should do? A) Marry her Q) How many people were running after each other? A) Seven Q) Where did dullhead go to find a man who could help him? A) The forest GT Q) Who did the king decree should marry his daughter? A) Whoever should succeed in making her laugh. Q) How will the little grey man help dullhead? A) Drink up a whole cellarful of wine. Q) How did the king feel about dullhead as a son-in-law? A) Unhappy. Passage Many, many years ago there lived a good old man who had a wen like a tennis-ball growing out of his right cheek. This lump was a great disfigurement to the old man, and so annoyed him that for many years he spent all his time and money in trying to get rid of it. He tried everything he could think of. He consulted many doctors far and near, and took all kinds of medicines both internally and externally. But it was all of no use. The lump only grew bigger and bigger till it was nearly as big as his face, and in despair he gave up all hopes of ever losing it, and resigned himself to the thought of having to carry the lump on his face all his life. Ours Q) What did the good old man have? A) A wen like a tennis-ball growing out of his right cheek. Q) How long did the old man have the wen like a tennis-ball growing out of his right cheek? A) Many, many years. Q) Where did the lump grow out of? A) His right cheek. FQAG Q) Who had a wen like atennis-ball growing out of his right cheek? A) The old man. Q) Where did the lump grow? A) His right cheek. Q) What did the old man do to get rid of his lump? A) He consulted many doctors far and near. SQG Q) What type of ball did the old man have a wen like? A) Tennis Q) What was the wen like a tennis - ball growing out of his right cheek to the old man? A) Great disfigurement Q) What did the old man try to get rid of the lump? A) Everything GT Q) Why was the man not able to get rid of his wen? A) The doctors did not know how to get rid of it. Q) How did the man feel about his wen? A) Annoyed. Q) What did the good old man have growing in his right cheek? A) A wen. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations ✓ A2. 
Did you discuss any potential risks of your work? Limitations ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract ✓ A4. Have you used AI writing assistants when working on this paper? ChatGPT. We use it for checking grammar and finding synonyms for some words. This was done for specific sentences, but it appears throughout the section. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3 Method ✓ B1. Did you cite the creators of artifacts you used? 4.1 Experimental Setup - Models ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? We will discuss terms for use/distribution in the README on github. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 4.3 Baselines ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We use existing data that is already anonymized. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 1 Introduction ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4.1. Experimental Setup - Dataset ## C ✓ **Did You Run Computational Experiments?** 4.1. Experimental Setup - Models ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4.1 Experimental Setup - Models The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4.1 Experimental Setup - Models C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No response. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4.2 Evaluation Metrics ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** We Conduct A Human Evaluation. 5.3 Human Evaluation ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? 5.3 Human Evaluation, Appendix C Human Evaluation Details ✓ D2. 
Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Ethics Statement, Appendix C Human Evaluation Details D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
jwalapuram-2023-pulling
Pulling Out All The Full Stops: Punctuation Sensitivity in Neural Machine Translation and Evaluation
https://aclanthology.org/2023.findings-acl.381
Much of the work testing machine translation systems for robustness and sensitivity has been adversarial or tended towards testing noisy input such as spelling errors, or non-standard input such as dialects. In this work, we take a step back to investigate a sensitivity problem that can seem trivial and is often overlooked: punctuation. We perform basic sentence-final insertion and deletion perturbation tests with full stops, exclamation and questions marks across source languages and demonstrate a concerning finding: commercial, production-level machine translation systems are vulnerable to mere single punctuation insertion or deletion, resulting in unreliable translations. Moreover, we demonstrate that both string-based and model-based evaluation metrics also suffer from this vulnerability, producing significantly different scores when translations only differ in a single punctuation, with model-based metrics penalizing each punctuation differently. Our work calls into question the reliability of machine translation systems and their evaluation metrics, particularly for real-world use cases, where inconsistent punctuation is often the most common and the least disruptive noise.
## Pulling Out All The Full Stops: Punctuation Sensitivity in Neural Machine Translation and Evaluation

Prathyusha Jwalapuram
Rakuten Institute of Technology
Rakuten Group, Inc.
[email protected]

## Abstract

Much of the work testing machine translation systems for robustness and sensitivity has been adversarial or tended towards testing noisy input such as spelling errors, or non-standard input such as dialects. In this work, we take a step back to investigate a sensitivity problem that can seem trivial and is often overlooked: punctuation. We perform basic sentence-final insertion and deletion perturbation tests with full stops, exclamation and question marks across source languages and demonstrate a concerning finding: commercial, production-level machine translation systems are vulnerable to the insertion or deletion of a single punctuation mark, resulting in unreliable translations. Moreover, we demonstrate that both string-based and model-based evaluation metrics also suffer from this vulnerability, producing significantly different scores when translations differ only in a single punctuation mark, with model-based metrics penalizing each punctuation differently. Our work calls into question the reliability of machine translation systems and their evaluation metrics, particularly for real-world use cases, where inconsistent punctuation is often the most common and the least disruptive noise.1

1https://github.com/rakutentech/Punctuation-NMTACL2023

| From | Example (original → perturbed) |
|------|--------------------------------|
| Ours | Iran gibt britischen Tanker frei → Iran gibt britischen Tanker frei! |
| Niu et al. (2020) | Se kyllä tuntuu sangen luultavalta. → Se kyllä tumtuu sangen luultavalta. |
| Michel et al. (2019) | Si seulement je pouvais me muscler aussi rapidement. → Si seulement je pouvais me muscler asusi rapidement. |
| Ebrahimi et al. (2018) | ... er ist Geigenbauer und Psychotherapeut → er ist Geigenbauer und Psy6hothearpeiut. |
| Tan et al. (2020) | When is the suspended team scheduled to return? → When are the suspended team schedule to returned? |
| Wallace et al. (2020) | Did you know that adversarial examples can transfer to production models → Did you know that adversarial examples can transfer to production models Siehe Siehe Siehe Siehe Siehe Siehe Siehe |

Table 1: Common perturbations used in various robustness tests compared to our punctuation insertion; each example shows the original text followed by its perturbed version.

## 1 Introduction and Related Work

Since the advent of the Transformer models (Vaswani et al., 2017), machine translation (MT) has seen tremendous improvement in performance, with several claims of parity with human translations (Wu et al., 2016; Hassan et al., 2018; Popel et al., 2020). However, one issue that is common to most deep learning models but does not hinder humans is sensitivity to small changes in the input, or a lack of robustness.

Robustness in machine translation refers to the ability of the models to produce consistent translations that preserve the meaning of the source sentence regardless of any noise in the input (Heigold et al., 2018). Changes in the input that preserve the semantics should not significantly change the output of the models.
This can be a particularly critical quality for commercial machine translation systems, which are expected to translate real-world data including social media or internet text, which tend to be non-standard and noisy (Li et al., 2019). Models are typically tested for robustness by changing the input to introduce noise, called a perturbation, and checking whether the output is different. Several works have documented the sensitivity of machine translation models to various kinds of noise which commonly occurs in real-world data (Belinkov and Bisk, 2018; Niu et al., 2020; Tan et al., 2020). There has also been work on adversarial attacks, where algorithms with access to model gradients try to find optimal perturbations that result in a significant performance drop, or manipulate the model into producing malicious output (Ebrahimi et al., 2018; Wallace et al., 2020; Zhang et al., 2021). Most of these works have concentrated on robustness to variations in orthography and grammar. Table 1 shows some examples. There has also been some work on MT evaluation metric robustness that has included similar perturbations at the character and word-level, and other linguistic phenomena such as synonyms, named entities, negation, numbers, and others (Sun et al., 2022; Freitag et al., 2022; Karpinska et al., 2022). However, Michel et al. (2019) argue that many of these perturbations do not preserve the meaning on the source side. They propose that "meaningpreserving" perturbations should be limited to nearest neighbours in the embedding space and out-ofvocabulary word-internal character swaps. In this work, we take a further step back from meaning-preserving spelling and grammatical perturbations, and ask: are machine translation models robust to **trivial changes in sentence-final punctuation**? Are the **metrics** used to evaluate machine translation robust to the same changes? To investigate this, we test basic punctuation variation for which robustness may have been taken for granted. We perform simple sentence-final punctuation perturbations, restricting the experiments to two settings: insertion and deletion. Mimicking a very common form of natural noise, we insert or delete full stops, exclamation marks and question marks at the end of the input sentence (§2; see Table 1 for an example). Unlike common perturbation strategies, we make no changes to the content, words, or characters which may cause outof-vocabulary or unseen tokens in the input. Our goal in this work is not to induce as drastic a drop in performance as possible, but to investigate the changes in translation that result from extremely minimal perturbations, and whether we are adequately able to detect these changes. We test commercial MT systems from **Google**, DeepL and **Microsoft** on 3 language pairs from across resource levels and scripts: **German (De)**, Japanese (Ja) and Ukrainian (Uk) to English (En). These systems are intended for real-world use, and can therefore be expected to already be robust to common noise in real-world data. We first investigate whether commonly used evaluation metrics are robust to our perturbations, in order to ensure that our subsequent evaluation of the MT systems is fair (§3). 
We find that both stringbased and model-based evaluation metrics are not robust to trivial sentence-final punctuation perturbations, significantly penalizing text with mismatched full stops, question marks or exclamations, sometimes more than text with more severe perturbations such as insertion or deletion of random characters. Based on these results, we deviate from the standard robustness testing regime of perturbing the inputs and expecting the translations of both the original and the perturbed source text to match exactly. In the MT setting, adding a punctuation to the source text can naturally induce the model to also produce the corresponding punctuation in the translation. We therefore reset the punctuation changes in the translations in order to perform evaluation, and call for a review of standard MT robustness evaluation in such settings. More importantly, we show that even commercial machine translation systems are extremely sensitive to trivial punctuation changes, particularly in languages such as Japanese and Ukrainian (§4). We show that both insertion and deletion of punctuation causes performance drops, which indicates that models may be biased to expect (or not expect) punctuation in certain types of sentences. We conduct a manual analysis and find that in more severe cases, a mere punctuation change can cause complete changes in the meaning of the translation or introduce hallucinations such as negation, with less severe changes including pronouns, named entities, tense, number, and others (§5). Søgaard et al. (2018) provide some common examples of punctuation variation in real-world data and demonstrate how dependency parsers are sensitive to such punctuation differences. Ek et al. (2020) demonstrate the sensitivity of neural models to punctuation in Natural Language Inference tasks. Though there has also been some work on punctuation-based perturbation for machine translation (Bergmanis et al., 2020; Karpinska et al., 2022), the tendency has been to make more extreme perturbations than we adopt. Unlike previous work, we do not combine all punctuation changes into one bucket, and instead analyse each punctuation separately. We find that models are more sensitive to some punctuation than others. We also unify the usually independent work on machine translation robustness and evaluation metric robustness, and adjust our evaluation based on our observations. Our work exposes serious real-world use-case implications and serves to show that while great strides have been made in both machine translation and its evaluation, we are a long way from building systems that are reliable for real-world use. ## 2 Test Set Creation In this section, we describe the original test sets and perturbation operations we perform to build our test sets. Our perturbations reflect natural noise in punctuation occurrence: we only insert or delete punctuation such as full stops, exclamation marks and question marks from the ends of sentences. ## 2.1 Original Test Data In order to build our perturbation test sets, we need a large test set with naturally occurring noise, *e.g.,* sentences which originally do not have full stops at the end (for insertion) or sentences ending with question marks (for deletion). Test sets typically have a majority of sentences ending with full stops, while other punctuation or punctuation-less sentences occur less often. 
In order to maximize these sentences, we combine test sets across FLORES101 (Goyal et al., 2021) and WMT 2020-2022 (Barrault et al., 2020, 2021; Kocmi et al., 2022) in both directions for German (De, high-resource), Japanese (Ja, medium-resource) and Ukrainian (Uk, medium-resource) to English (En).2 We choose these 3 language pairs to optimize for diversity in resource levels and scripts, while ensuring we have adequate test data and commercial MT system support. FLORES101 and WMT2022 are general domain test sets, while WMT2020-2021 are news domain. We then split the final combined test set based on whether the sentences originally end with a (i) full stop, (ii) exclamation mark, (iii) question mark, or (iv) no punctuation. In order to balance the test set sizes, we randomly choose 1000 sentences ending with a full stop. All test set sizes are given in Appendix A.1.

2Approximation based on https://www.statmt.org/wmt22/translation-task.html.

## 2.2 Perturbation Tests

Insertion. For the insertion perturbation, we start with the test set split that originally occurs with no ending punctuation, and then insert at the end of each sentence a (i) full stop, (ii) exclamation mark, (iii) question mark, or (iv) a random character for comparison. The insertion of a single punctuation mark at the end of a sentence is an extremely minimal perturbation that does not change any content. We contrast this with the insertion of a random character at the end, which changes the final word.

Deletion. For the deletion perturbation, we start with the test set splits that originally occur with a punctuation at the end of the sentences (full stop, exclamation or question mark) and delete them. We also contrast this with deleting the final character of the sentence, for which we use the split with no ending punctuation.

## 3 Evaluation Metrics

Before we evaluate the machine translation systems on our punctuation perturbation test sets, we first evaluate the evaluation metrics themselves to see if they are robust to these variations. This **meta-evaluation** is crucial; if the metrics are not reliable, we cannot be sure if changes in the scores are due to changes in translation content. We include the string-based metric **BLEU** (Papineni et al., 2002) for convention, and based on the recommendations from Kocmi et al. (2021), we use **chrF** (Popović, 2015), which is another string-based metric, and **COMET** (Rei et al., 2020), which is a model-based metric shown to have high correlations with human judgements, and also include **BLEURT-20** (Sellam et al., 2020) and **BERTScore** (Zhang* et al., 2020). Metric versions can be found in Appendix A.2.

## 3.1 Meta-Evaluation

Typical robustness tests for machine translation evaluate the translations of both the original and the perturbed source texts against the original reference text (Belinkov and Bisk, 2018; Michel et al., 2019; Bergmanis et al., 2020). The implicit assumption here is that given that the semantics are preserved, the ideal MT system should produce the same or a similar translation for both, and that the automatic metrics used to perform evaluation against the original reference translation will accurately measure the translation quality. However, adding or deleting punctuation from the source input can lead to a predictable corresponding presence or absence of punctuation in the machine translation - which the reference translation lacks, since it may match the punctuation in the original source.
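As a concrete illustration of the test-set splits (§2.1), the perturbation operations (§2.2), and why they matter for evaluation (§3.1), the following sketch applies the insertion and deletion operations and scores a perturbed source against the original source with chrF. It assumes the sacreBLEU package; the function names and the example sentence (taken from Table 1) are illustrative and not from the released code.

```python
import random
import string

from sacrebleu.metrics import CHRF

def split_by_final_punct(sentences):
    """Split a test set by sentence-final punctuation (cf. Section 2.1)."""
    splits = {"full_stop": [], "exclamation": [], "question": [], "none": []}
    for s in (s.strip() for s in sentences):
        if s.endswith("."):
            splits["full_stop"].append(s)
        elif s.endswith("!"):
            splits["exclamation"].append(s)
        elif s.endswith("?"):
            splits["question"].append(s)
        else:
            splits["none"].append(s)
    return splits

def insert_final(sentence, mark):
    """Insertion perturbation: append a single sentence-final punctuation mark."""
    return sentence + mark

def delete_final(sentence):
    """Deletion perturbation: drop the final character (punctuation or not)."""
    return sentence[:-1]

def insert_random_char(sentence):
    """Contrastive perturbation: append a random character, changing the final word."""
    return sentence + random.choice(string.ascii_lowercase)

# Meta-evaluation idea from Section 3.1: treat the perturbed source as a
# "translation" of the original source and check how much a metric penalizes
# a single trailing punctuation mark.
chrf = CHRF()
original = "Iran gibt britischen Tanker frei"
perturbed = insert_final(original, "!")
print(chrf.sentence_score(original, [original]).score)   # perfect match
print(chrf.sentence_score(perturbed, [original]).score)  # single punctuation mismatch
```

When the source is perturbed in this way, the system's translation will often carry the corresponding final punctuation while the original reference does not.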
In such circumstances, it is unclear if this significantly influences the evaluation quality perceived by the metrics. Setup. In order to investigate whether automatic metrics are robust to the "translation of perturbed source but original reference" discrepancy, we conduct experiments comparing the scores produced by the metrics using the original and perturbed source texts as the "reference" and "translation" texts. More concretely, given the original source text X, its perturbed version X′, and a scoring metric f(Y, R) where Y is the translation and R is | Lang. | Insertion Test | BLEU | chrF COMET | BLEURT | BERTScore | | |-----------------|------------------|--------|--------------|----------|-------------|------| | Original Source | 100.0 | 100.0 | 124.7 | 96.6 | 100.0 | | | + Full stop | -9.7 | -0.3 | -7.5 | -4.2 | -5.2 | | | De | + Exclamation | -9.7 | -0.3 | -9.1 | -6.4 | -5.7 | | + Question | -9.7 | -0.3 | -24.9 | -7.1 | -6.0 | | | + Random | -10.6 | -0.3 | -32.2 | -9.1 | -3.0 | | | Original Source | 100.0 | 100.0 | 129.6 | 97.6 | 100.0 | | | + Full stop | -6.7 | -0.7 | -2.8 | -6.3 | -4.3 | | | + Exclamation | -6.7 | -0.7 | -3.7 | -7.8 | -6.0 | | | Ja | + Question | -6.7 | -0.7 | -9.9 | -9.0 | -5.5 | | + Random | -6.9 | -0.7 | -17.4 | -11.7 | -3.1 | | | Original Source | 100.0 | 100.0 | 132.6 | 99.0 | 100.0 | | | + Full stop | -9.3 | -0.4 | -0.8 | -2.6 | -5.0 | | | + Exclamation | -9.3 | -0.4 | -0.7 | -4.5 | -5.0 | | | Uk | + Question | -9.3 | -0.7 | -2.5 | -5.6 | -5.5 | | + Random | -10.2 | -0.4 | -7.6 | -8.5 | -2.7 | | | Original Source | 100.0 | 100.0 | 125.3 | 97.3 | 100.0 | | | + Full stop | -9.1 | -0.4 | -6.8 | -4.1 | -0.9 | | | + Exclamation | -9.1 | -0.4 | -7.6 | -6.3 | -1.2 | | | En | + Question | -9.1 | -0.4 | -18.2 | -6.9 | -1.7 | | + Random | -9.9 | -0.4 | -25.3 | -7.8 | -2.5 | | the reference, we compute the score f(X, X) (perfect match) and f(X′, X) (single punctuation mismatch).3 We conduct this comparison for both the insertion and deletion tests, across all 4 languages (De, Ja, Uk and En). The goal here is to measure, given all else is equal, whether punctuation insertion/deletion at the end of the sentence significantly affects the scores produced by the automatic metrics, and how this compares against a more typical perturbation of inserting or deleting a random final character. Ideally, the metrics should not produce different scores that are statistically significant given trivially perturbed inputs. We can then rely on scores produced by the metrics to perform robustness evaluations. Insertion Results. The meta-evaluation results for the punctuation insertion tests are shown in Table 2. We see that the metrics produce significantly different scores even though the only difference is a single additional punctuation mark at the end of the sentence. The difference is particularly stark for BLEU, COMET and BLEURT while less pronounced for chrF and BERTScore, and is equally poor across languages. More interestingly, we see that while string-based matching metrics such as 3For COMET, which is of the form f(Y, S, R) where S is the source text, we compute f(X, X, X) and f(X′, X, X). | Lang. 
| Deletion Test | BLEU | chrF COMET | BLEURT | BERTScore | | |-----------------|-----------------|--------|--------------|----------|-------------|-------| | Original Source | 100.0 | 100.0 | 114.3 | 95.9 | 100.0 | | | - Fullstop | -3.7 | -0.6 | -7.0 | -6.6 | -3.2 | | | Original Source | 100.0 | 100.0 | 123.5 | 97.1 | 100.0 | | | - Exclamation | -7.1 | -1.2 | -8.0 | -7.0 | -5.6 | | | Original Source | 100.0 | 100.0 | 125.8 | 97.2 | 100.0 | | | - Question | -8.1 | -1.5 | -15.5 | -7.4 | -7.4 | | | Original Source | 100.0 | 100.0 | 124.7 | 96.6 | 100.0 | | | - Random | -10.6 | -1.3 | -34.3 | -12.0 | -3.6 | | | De | Original Source | 100.0 | 100.0 | 127.6 | 98.0 | 100.0 | | - Fullstop | -3.5 | -1.6 | -3.2 | -5.7 | -2.7 | | | Original Source | 100.0 | 100.0 | 131.3 | 96.7 | 100.0 | | | - Exclamation | -7.5 | -3.6 | -2.3 | -6.3 | -6.3 | | | Original Source | 100.0 | 100.0 | 131.9 | 97.6 | 100.0 | | | - Question | -6.7 | -3.3 | -2.7 | -6.0 | -6.1 | | | Original Source | 100.0 | 100.0 | 129.6 | 97.6 | 100.0 | | | - Random | -7.3 | -3.0 | -23.9 | -14.3 | -3.3 | | | Ja | Original Source | 100.0 | 100.0 | 131.8 | 99.2 | 100.0 | | - Fullstop | -5.4 | -0.9 | -1.0 | -5.0 | -3.2 | | | Original Source | 100.0 | 100.0 | 132.6 | 99.9 | 100.0 | | | - Exclamation | -8.2 | -1.5 | -0.6 | -6.0 | -5.7 | | | Original Source | 100.0 | 100.0 | 132.7 | 99.1 | 100.0 | | | - Question | -8.7 | -1.7 | -1.5 | -6.5 | -5.6 | | | Original Source | 100.0 | 100.0 | 132.6 | 99.0 | 100.0 | | | - Random | -10.2 | -1.6 | -11.6 | -11.4 | -2.8 | | | Uk | Original Source | 100.0 | 100.0 | 116.7 | 97.9 | 100.0 | | - Fullstop | -3.8 | -0.7 | -6.1 | -6.2 | -0.7 | | | Original Source | 100.0 | 100.0 | 124.1 | 97.9 | 100.0 | | | - Exclamation | -6.7 | -1.5 | -7.1 | -8.2 | -1.4 | | | Original Source | 100.0 | 100.0 | 126.2 | 97.7 | 100.0 | | | - Question | -7.9 | -1.7 | -12.2 | -8.7 | -1.7 | | | Original Source | 100.0 | 100.0 | 125.3 | 97.3 | 100.0 | | | - Random | -9.9 | -1.5 | -34.2 | -10.8 | -2.9 | | | En | | | | | | | ![3_image_0.png](3_image_0.png) BLEU and chrF treat all punctuation equally, modelbased metrics assign drastically lower scores for exclamation and question marks. In the case of BERTScore, punctuation insertion results in lower scores than random character insertion for all languages except English. Deletion Results. The meta-evaluation results for the punctuation deletion tests are shown in Table 3. A similar trend is seen here, where the lack of a single punctuation at the end of the sentence causes a significant drop in scores across all metrics. We also see the same trend where missing exclamation or question marks result in more significant drops in scores. Furthermore, punctuation deletion more often results in lower scores than deleting a random final character compared to punctuation insertion. Surprisingly, results for Uk are relatively more sta- | chrF | COMET | | | |--------------|-------------------------------------|------|------| | Original | 1) Deauthorize your e-book reader | 79.3 | 94.9 | | Perturbed | 1) Deauthorize your e-book reader . | 78.6 | 93.0 | | Score ∆ -0.7 | -1.9 | | | | Original | Elon Musk lets Tesla shares rise | 60.4 | 98.5 | | Perturbed | Elon Musk makes Tesla shares rise | 59.4 | 97.7 | | Score ∆ -1.0 | -0.8 | | | ble than for En, particularly for COMET. 
Note that all score differences here will register as statistically significant: the original source will always "win" against the perturbed source in all comparisons performed by tests such as paired bootstrap resampling or randomization. Some issues with BLEU have been highlighted previously (Reiter, 2018; Kocmi et al., 2021); COMET, BLEURT and BERTScore presumably suffer from robustness issues as neural models. ChrF scores display smaller variations that are consistent across punctuation and languages, and therefore seem more reliable for robustness evaluations, corroborating the findings from Michel et al. (2019). Overall, we expand the metric sensitivity issues highlighted in Karpinska et al. (2022) for English in finer-detail for punctuation, and further confirm them for German, Japanese and Ukrainian. Comparison with severe translation errors. We performed a manual segment-wise analysis of a subset of machine translation outputs. We find that in several cases, particularly for shorter sentences, translations with punctuation differences are penalized similar to translations with severe errors. See Table 4 for an example. Broader Implications. More broadly, these results indicate that (i) statistically significant differences can be obtained merely by changing a single punctuation, (ii) models that fail to match the reference punctuation may be penalized more than they should be, and (iii) models that mistranslate a single word but correctly match the punctuation may be getting more credit than they should. For example, we found that up to 5% of the sentences in the WMT2022 Uk-En test set and up to 10% of the sentences in the WMT2022 Ja-En test set had mismatches between the ending punctuation in the source and reference. This could mean that model performance on these instances may be undervalued if the model reproduces the source punctuation. Conversely, we also found many instances of models producing acceptable punctuation that was not present in the original source (*e.g.,* ≈ 13% of Microsoft's Uk-En output for full stop deletion perturbation test set had full stops), which may also get unfairly penalized. More importantly, it may be worthwhile to reexamine how machine translation models are evaluated in robustness tests and after adversarial training, since resultant differences in scores may not be a reflection of actual translation quality. ## 4 Machine Translation Experiments We now test the publicly available commercial machine translation systems of Google, **DeepL** and Microsoft through their paid APIs on our test sets.4 Some of these commercial systems have previously been claimed to have reached human parity (Wu et al., 2016; Hassan et al., 2018). Commercial systems are generally expected to deal with nonstandard inputs as they are targeted for real-world use cases. We therefore expect that these systems have already been trained to be somewhat robust to various kinds of input noise. For the insertion tests, we compare the translation of the original source text *without* punctuation against the translation of the perturbed source *with* sentence-final punctuation. For deletion tests, we compare the translation of the original source text with punctuation against the translation of the perturbed source *without* sentence-final punctuation. 
## 4.1 Evaluation Results from our meta-evaluation in §3.1 mean that we cannot get reliable results from evaluation metrics if we directly use the perturbed source translations and original references for evaluation; it will be hard to identify if changes in scores originate from translation differences or merely punctuation changes. One solution is to add the same punctuation perturbation to the reference that we add to the source. We find that this increases the overall scores since there is now an additional character that matches the reference in each sentence, rendering the score incomparable to the original translation. Another solution is to reset the punctuation changes in the translations. We therefore remove corresponding sentence-final punctuation produced 4All translations are from December 2022. in the translations for the source inputs perturbed through insertion that are not also produced for the original source inputs, and vice versa for deletion, thereby making the two translations comparable. Henceforth we use chrF scores due to its relative robustness and include COMET scores as it has been shown to have high correlations with human judgements (Kocmi et al., 2021; Freitag et al., 2022). Inconsistency. Apart from measuring whether perturbations cause degradation in translations compared to a reference, another important criterion is the consistency. That is, given the original and the perturbed sources as input, we measure how different the translations produced for each are. Since here we want to also account for surfacelevel changes, we choose the string-based matching metric chrF based on results in §3.1 and findings from Michel et al. (2019). Given a source X and its translation Y, and the perturbed source X′and its translation Y′, we measure **consistency** at the sentence-level as the score chrF(Y′, Y), where Y acts as the "reference". We designate a score < 75 to be a significant deviation in translation, and measure **percentage of inconsistency** by counting the number of Y′ which have chrF< 75. ## 4.2 Results The results for the punctuation insertion perturbation tests are given in Table 5. We see that in general, the insertion of sentence-final punctuation results in a statistically significant drop in scores, but also some significant improvements. The results for the punctuation deletion perturbation tests are given in Table 7. Overall, deletion causes more drops in performance than insertion, and far fewer improvements in scores. Effect of Language. Unsurprisingly, based on inconsistency, we see that the models are far more robust to insertion perturbations for the high resource language pair De-En, with generally < 10% inconsistency. More interestingly, we see that while Ja-En and Uk-En are both medium resource, the models are far more robust for Uk-En at 0 − 23% inconsistency, as compared to Ja-En which has between 18 − 35% inconsistency across models. We see the same inconsistency trends for deletion as for insertion: models are more robust to perturbations in De (0 − 23%) and Uk (0 − 25%) source texts than Ja (10 − 37%). Overall, deletion leads to a higher range of inconsistency than insertion. Effect of Punctuation. We see that the models are more likely to be robust to full stop insertion than exclamation and question marks: statistically significant differences in performance occur more often for the latter. 
In fact, DeepL and Microsoft models seem to benefit from having full stops and exclamation marks added, with results improving for Ja-En and Uk-En. In the case of question marks, we see that it causes a universal drop in scores across models and languages. For Uk-En, question mark insertion almost always causes more significant drops in scores than inserting a random character. Unlike insertion, full stop deletion causes significant drops in scores, particularly for the DeepL and Microsoft models for Ja-En and Uk-En. Interestingly, question mark deletion does not cause a significant score drop in Ja-En for all models. This is possibly because the question mark is mostly optional in Ja, which uses the particle 'か' as a question marker. Pre-processing. We see that both insertion and deletion can cause degradation in performance. This means that while pre-processing of the inputs to ensure consistent punctuation may lead to more consistent translations, it is unlikely to result in better quality translations. ## 5 Analysis And Discussion Some examples of translation changes caused by the perturbations are given in Table 6. Both insertion and deletion cause a wide range of translation changes, with a few severe errors where the meaning is completely changed, such as by hallucinating or omitting negation. Others include changes in number, tense, pronouns, named entities, etc. Reordering. Often, inserting or deleting punctuation leads to a reordering of the words in the sentence. In many cases the reordering leads to mostly similar but slightly off translations (Example 4), with some cases causing significant differences in meaning (Example 8). While we might expect punctuation perturbation to ideally cause no other changes in translation apart from the difference in punctuation itself, there could be cases of valid translation changes caused by the perturbation. For example, while *"1) Heben Sie* die Autorisierung des Lesegeräts auf" is originally translated as *"1) Deauthorize the reader"*, adding a question mark does not produce "1) Deauthorize the reader?" but instead *"1) Are you deauthorizing* | Google | DeepL | Microsoft | | | | | | | | | |-----------------|-----------|-------------|-------|-------|-------|-------|-------|------|-------|-------| | Lg. | Insertion | chrF | COMET | %Inc. | chrF | COMET | %Inc. | chrF | COMET | %Inc. 
| | Original Source | 62.6 | 66.1 | 63.5 | 67.8 | 63.2 | 67.1 | | | | | | + Full stop | 0.0 | +0.1 | 0.0 | -0.2 | -0.4 | 8.3 | -0.5 | -0.7 | 7.2 | | | + Exclamation | -0.2 | 0.0 | 11.6 | -0.3 | -1.3 | 7.9 | -0.7 | -0.9 | 7.2 | | | + Question | -0.5 | -2.1 | 13.5 | -0.6 | -2.0 | 8.3 | -0.7 | -1.4 | 8.7 | | | + Random | -0.3 | -10.2 | 10.4 | -0.9 | -21.3 | 9.4 | -0.7 | -9.5 | 6.5 | | | De-En | Original | 53.2 | 40.5 | 51.5 | 37.7 | 52.8 | 36.3 | | | | | +Full stop | 0.0 | 0.0 | 24.3 | +0.2 | +0.2 | 30.7 | +0.2 | +1.9 | 20.6 | | | +Exclamation | -0.2 | -0.6 | 24.2 | -1.0 | -1.9 | 33.4 | +0.1 | +1.2 | 19.2 | | | +Question | -0.4 | -6.0 | 28.0 | -0.5 | -5.2 | 35.8 | -0.4 | -0.3 | 25.4 | | | +Random | -0.4 | -7.0 | 22.3 | -0.6 | -12.1 | 27.1 | -0.3 | -7.1 | 18.8 | | | Ja-En | Original | 64.7 | 58.3 | 63.1 | 56.1 | 61.4 | 42.4 | | | | | +Full stop | 0.0 | 0.0 | 0.8 | +0.9 | +1.1 | 12.6 | 0.0 | +1.9 | 10.3 | | | +Exclamation | -0.1 | -0.9 | 10.5 | +0.9 | +0.7 | 15.1 | -0.3 | +1.4 | 10.5 | | | +Question | -2.1 | -9.9 | 23.9 | -0.8 | -5.6 | 20.8 | -1.8 | -7.6 | 22.1 | | | +Random | -0.9 | -5.0 | 10.7 | -0.4 | -6.9 | 13.4 | -0.2 | -7.6 | 9.1 | | | Uk-En | | | | | | | | | | | | ∆ | Con. | | | | | |------------------------------------------|-------------------------------------------------------------------|-------------------------------------------------------------|------------------------------------------------------|-----------------|------| | # | Text | Original (X, Y) | Perturbed (X′, Y′) | chrF COMET chrF | | | 1 | Source | なにかアドバイス下さい | なにかアドバイス下さい 。 | | | | Google | give me some advice | Please give me some advice | +17.8 | +8.0 | 91.7 | | 2 | Source Якщо ще колись не захочете менi писати, то я чекатиму | Якщо ще колись не захочете менi писати, то я чекатиму . | | | | | DeepL | If you ever want to write to me again, I will be waiting | If ever you do not want to write to me, I will be waiting | -2.1 | -23.0 | 67.7 | | 3 | Source | Elon Musk lässt Tesla-Aktien steigen | Elon Musk lässt Tesla-Aktien steigen ! | | | | Microsoft | Elon Musk lets Tesla shares rise | Elon Musk makes Tesla shares rise | -1.0 | -0.8 | 76.9 | | 4 | Source | アンドロイドのハドウェアをきれいにするコツ | アンドロイドのハドウェアをきれいにするコツ ! | | | | DeepL Tips for cleaning android hardware | Androids, tips on how to clean up your hardware | -19.2 | -98.4 | 43.1 | | | 5 | Source | かけが良すぎるガデニングギミックにご用心 | かけが良すぎるガデニングギミックにご用心 ? | | | | Google | Beware of gardening gimmicks that look too good | Worried about gardening gimmicks that look too good | -11.8 | -19.7 | 78.6 | | 6 | Source | Der saubere Lake Tahoe vom Keimwandel verunreinigt | Der saubere Lake Tahoe vom Keimwandel verunreinigt ? | | | | DeepL | The clean Lake Tahoe polluted by the germ change | Clean Lake Tahoe contaminated by gerrymandering | -14.5 | -72.7 | 41.0 | | 7 | Source Проблема з температурою водонагрiвача та ванною . | Проблема з температурою водонагрiвача та ванною | | | | | Microsoft | The problem is with the temperature of the water heater and bath. | Problem with water heater temperature and bath. | +11.8 | +38.6 | 56.0 | | 8 | Source | Ja... wir haben hier alle Schusswaffen . | Ja... wir haben hier alle Schusswaffen | | | | Microsoft | Yes... we all have firearms here. | Yes... we have all the firearms here. | -9.7 | -7.6 | 75.5 | | 9 | Source «Справа дiйсно зрушилася з мiсця ! | «Справа дiйсно зрушилася з мiсця | | | | | Google | "The matter really got out of hand ! | "The matter has really moved from place to place ! | -4.0 | -19.4 | 43.4 | | 10 | Source | Boron zum Grusse ! 
| Boron zum Grusse | | | | Google | Greetings Boron! | Greetings from Boron! | -2.6 | -30.1 | 73.1 | | 11 | Source Тiльки я буду трошки пiзнiше - десь о 8. Можна ? | Тiльки я буду трошки пiзнiше - десь о 8. Можна | | | | | DeepL | Only I will be a little later - around 8 . Can I ? | Only I will be a little later - around 8 o'clock . You can? | -4.3 | +0.5 | 80.6 | | 12 | Source | LINEは何日に何回送るのが良いですか ? | LINEは何日に何回送るのが良いですか | | | | Microsoft | What is the best time to send LINE messages per day? | How many times a day should I send LINE? | -8.5 | +14.1 | 25.9 | ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) the reader?". This word reordering for an interrogative sentence, typical particularly for English, can be considered a valid change even though the chrF (65.5 −→ 52.7) and COMET (28.2 *−→ −*25.8) scores drop. There are also cases when the resultant reordering actually improves the scores of the translation despite being wrong *e.g.,* adding a question mark to *"Und ich muss nochmal Versandkosten* Google DeepL Microsoft Lg. Deletion chrF COMET %Inc. chrF COMET %Inc. chrF COMET %Inc. Original Source 66.1 68.2 67.0 68.6 66.3 67.1 - Full stop 0.0 0.0 0.1 +0.1 0.0 4.7 0.0 -0.3 1.9 Original Source 61.2 66.8 61.0 66.3 60.8 65.0 - Exclamation -0.3 -0.2 7.8 -0.4 +1.6 14.7 -0.5 -1.9 4.9 Original Source 61.3 61.8 61.3 56.0 60.8 59.9 - Question **-0.9 -3.7** 18.4 **-0.9 -4.5** 23.0 **-1.2 -5.2** 17.3 Original Source 62.6 66.1 63.5 67.8 63.2 67.1 - Random **-0.6 -8.3** 10.4 **-1.2 -3.2** 9.9 **-1.5 -15.4** 11.2 Original Source 55.6 52.0 56.5 56.0 55.5 53.9 - Full stop -0.1 -0.3 13.1 **-0.3 -1.5** 13.2 **-0.2 -1.8** 10.0 Original Source 46.8 41.6 48.1 43.3 46.7 39.9 - Exclamation -0.8 **-3.2** 26.1 -0.7 -2.9 31.4 **-1.3 -2.7** 26.6 Original Source 48.9 41.3 51.8 48.3 47.6 39.5 - Question -0.3 -2.9 34.9 **-1.8 -6.4** 29.6 -0.2 -2.2 19.4 Original Source 53.2 40.5 51.5 37.7 52.8 36.3 - Random **-1.7 -13.2** 33.4 **-2.7 -14.0** 37.5 **-2.2 -18.5** 33.5 Original Source 65.2 64.6 65.2 65.1 62.6 59.3 - Full stop +0.1 -0.2 0.6 **-0.4 -1.0** 5.5 **-0.2 -0.8** 4.2 Original Source 64.6 69.7 63.5 66.0 59.4 58.5 - Exclamation **-0.9** -0.9 5.8 -0.3 +0.5 8.7 **-0.9** -3.9 10.1 Original Source 61.7 57.7 61.6 60.9 59.0 50.0 - Question **-1.7 -5.6** 21.6 **-1.3 -6.2** 24.7 **-2.4** -6.7 25.4 Original Source 64.7 58.3 63.1 56.1 61.4 42.4 Random **-1.0 -7.5** 12.2 **-1.3 -7.9** 15.5 **-2.9 -16.2** 17.5 | De-En Ja-En Uk-En | |---------------------| zahlen" changes the translation from "And I have to pay shipping again" to "And do I have to pay shipping costs again?" (instead of "And I have to pay shipping again?") and improves both chrF (17.5 −→ 23.4) and COMET (26.3 −→ 45.6) scores, presumably due to the presence of the word *"costs"* that now matches the reference (*"And I still need* to pay the delivery costs"). Similarly for question mark deletion, removing the question mark from "Заняття в понедiлок i середу вiдрiзняються?" changes the translation from "Are Monday and Wednesday classes different?" to *"Monday* and Wednesday classes are different", dropping the chrF (76.2 −→ 74.9) and COMET (91.0 −→ 83.7) scores. Expecting translations of both original and perturbed source texts to match is a standard evaluation setting for robustness tests, even for more severe perturbations resulting in drastic changes and out-of-vocabulary inputs (see Table 1). Given these results, we reiterate our call from §3 to reexamine this evaluation setup for settings similar to this work. 
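For reference, the consistency and percentage-of-inconsistency figures quoted throughout this analysis follow the definition in §4.1 and can be computed along these lines (a sketch assuming sacreBLEU's chrF implementation and the 75-point threshold stated there; names are illustrative):

```python
from sacrebleu.metrics import CHRF

chrf = CHRF()

def consistency(y_prime: str, y: str) -> float:
    """Sentence-level consistency: chrF of the perturbed-source translation Y'
    scored against the original-source translation Y used as the 'reference'."""
    return chrf.sentence_score(y_prime, [y]).score

def inconsistency_rate(pairs, threshold: float = 75.0) -> float:
    """Percentage of (Y, Y') translation pairs whose consistency falls below the threshold."""
    flagged = sum(1 for y, y_prime in pairs if consistency(y_prime, y) < threshold)
    return 100.0 * flagged / len(pairs)
```

Note that this measure flags any surface deviation, including reorderings like the interrogative rewording above, whether or not the change is a valid one.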
However, there are several cases where the interrogative nature of the source is not dependent on the question mark and the model correctly produces a translation that is also interrogative but different. For example, deleting the question mark from "ありますか?" changes the translation from "Is there?" to *"do you have"*. Example 12 shows another case where the model correctly recognizes the perturbed source as a question, but produces a significantly different translation. Example 5 and Example 6 are also cases of translation differences that are more severe than reordering. Sentence Style Association. Although we see some critical translation changes due to perturbing full stops (Example 2), a majority of the translations underwent a change in sentence style. In particular, we found that inserting a full stop resulted in models producing longer, complete sentences, while deleting the full stop resulted in shorter, headlinestyle sentences. This was observed across systems (Example 1 and 7), which indicates that this stylistic change presumably comes from what is commonly seen in training data: the models have seemingly learnt to associate a lack of full stop with article headlines from news domain data. In the case of Example 7, the changed translation better matches the reference ("*Water heater temp and bath* issue."), leading to improvement in scores in both chrF (46.8 −→ 58.6) and COMET (36.7 −→ 75.3). Robustness. Previous works have correlated consistency with robustness (Niu et al., 2020; Wallace et al., 2020), the implication being that less consistent outputs are lower in translation quality. We find that this is not necessarily the case for our perturbation setting. For instance, Example 1 shows a translation that has high consistency (91.7 chrF compared to original translation), while Example 7 has low consistency (56.0 chrF). However, in both cases the translations of the perturbed sources score significantly higher than the original translations. Similarly, Example 12 has a very low consistency score (25.9) but the chrF reduces (−8.5) while the COMET increases (+14.1). COMET is more reflective of the translation quality here: given the reference ('*'How many LINE messages are okay to* send in a day?"), the translation of the perturbed source is closer to the actual translation. Conversely, instances with relatively high consistency (Example 3, 5, 8, 10) all drop in scores and have significant translation issues. Other Changes. Some other changes in the translations include changes in number, tense, pronouns, named entities, capitalization, and so on. Some of the less severe errors such as changes in capitalization or extra demonstratives also incur heavy drops in chrF and COMET scores. Some more examples of the translations produced for perturbed inputs can be found in Appendix A.3. ## 6 Conclusions In this work, we unite the robustness evaluation of both machine translation systems and their evaluation metrics, and discuss ways in which both fail to be adequately robust to trivial punctuation change. This shows that models and metrics are in fact far more sensitive and a lot less reliable in real-world use cases than is commonly expected. We show that both metrics and machine translation systems treat each punctuation differently, with machine translation systems showing associations between punctuation and sentence styles. We also highlight the implications of these sensitivities for robustness research and evaluation for machine translation. 
Although it may not necessarily be a hard task to train systems that are robust to punctuation, our goal is to highlight one of the issues that has possibly been overlooked due to its triviality. We hope that future research in robustness, evaluation metrics and machine translation accounts for these sensitivities while performing evaluation and model training. ## Limitations Test Set Size. One of the main limitations of our work is relatively smaller test set sizes. This stems from the way our perturbation experiments are set up - we can only use existing test sentences which already end with specific punctuation in order to measure the effect of deleting them, or start with sentences which do not have sentence final punctuation in order to measure the effect of inserting them. In general, a majority of the official test sets have sentences ending in full stops; this results in having a smaller test set to work with. This is also the same issue that presumably gives rise to sensitivity issues in the trained models. However, given that our focus has been on each particular punctuation, instead of merging them all together, we find that our test sets are larger than the ones used in previous work for each punctuation. Combined with the fact that we ensure to perform significance testing and manual analysis, we believe our results are reliable. Appendix A.1 includes details and a discussion. Target Language. Although we test models across several source languages, the target language is always English. This makes our analysis of induced errors limited to phenomena that occur in English, for example, changes in number, reordering of words for question marks, or changes in capitalization, etc. Languages without capitalization or number marking but with morphological richness and other phenomena are likely to have different errors. For example, inserting a full stop changes the translation to include 'Please' and makes the sentence more polite in Example 1 in Table 6. For languages like Japanese, which have complex systems of marking varying levels of honorifics, punctuation perturbations may result in more interesting changes to the translations. A vast majority of previous work has performed perturbations on languages using the Latin alphabet, so we consider our work a step forward, considering that we also evaluate metrics on Japanese and Ukrainian texts. However, it is also important to evaluate sensitivity when both directions are nonEnglish, for example, Ukraininan to Japanese translation. A lack of adequate parallel data in such directions usually precludes such experiments. We hope to undertake this in future work. ## Acknowledgments We would like to thank the reviewers for their reviews and suggestions. We would also like to thank our colleagues Alberto Poncelas and Maksim Tkachenko for their valuable inputs. ## References Duarte Alves, Ricardo Rei, Ana C Farinha, José G. C. de Souza, and André F. T. Martins. 2022. Robust MT evaluation with sentence-level multilingual augmentation. In *Proceedings of the Seventh Conference* on Machine Translation, pages 469–478, Abu Dhabi. Association for Computational Linguistics. Loic Barrault, Ondrej Bojar, Fethi Bougares, Rajen Chatterjee, Marta R. Costa-jussa, Christian Federmann, Mark Fishel, Alexander Fraser, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Paco Guzman, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Tom Kocmi, Andre Martins, Makoto Morishita, and Christof Monz, editors. 2021. 
*Proceedings of the Sixth Conference on Machine Translation*. Association for Computational Linguistics, Online. Loïc Barrault, Ondřej Bojar, Fethi Bougares, Rajen Chatterjee, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Alexander Fraser, Yvette Graham, Paco Guzman, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, André Martins, Makoto Morishita, Christof Monz, Masaaki Nagata, Toshiaki Nakazawa, and Matteo Negri, editors. 2020. *Proceedings of the Fifth Conference on Machine Translation*. Association for Computational Linguistics, Online. Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. *ArXiv*, abs/1711.02173. Toms Bergmanis, Arturs Stafanovivcs, and Marcis Pinnis. 2020. Robust neural machine translation: Modeling orthographic and interpunctual variation. In Baltic HLT. Xiaoyu Chen, Daimeng Wei, Hengchao Shang, Zongyao Li, Zhanglin Wu, Zhengzhe Yu, Ting Zhu, Mengli Zhu, Ning Xie, Lizhi Lei, Shimin Tao, Hao Yang, and Ying Qin. 2022. Exploring robustness of machine translation metrics: A study of twenty-two automatic metrics in the WMT22 metric task. In Proceedings of the Seventh Conference on Machine Translation, pages 530–540, Abu Dhabi. Association for Computational Linguistics. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In *Proceedings of the 56th* Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31–36, Melbourne, Australia. Association for Computational Linguistics. Adam Ek, Jean-Philippe Bernardy, and Stergios Chatzikyriakidis. 2020. How does punctuation affect neural models in natural language inference. In *Proceedings of the Probability and Meaning Conference* (PaM 2020), pages 109–116, Gothenburg. Association for Computational Linguistics. Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, Eleftherios Avramidis, Tom Kocmi, George Foster, Alon Lavie, and F. T. André Martins. 2022. Results of WMT22 metrics shared task: Stop using BLEU - neural metrics are better and more robust. In Proceedings of the Seventh Conference on Machine Translation, pages 46–68, Abu Dhabi. Association for Computational Linguistics. Naman Goyal, Cynthia Gao, Vishrav Chaudhary, PengJen Chen, Guillaume Wenzek, Da Ju, Sanjan Krishnan, Marc'Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2021. The flores-101 evaluation benchmark for low-resource and multilingual machine translation. *Transactions of the Association for Computational Linguistics*, 10:522–538. Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William D. Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, and Ming Zhou. 2018. Achieving human parity on automatic Chinese to English news translation. ArXiv, abs/1803.05567. Georg Heigold, Stalin Varanasi, Günter Neumann, and Josef van Genabith. 2018. How robust are characterbased word embeddings in tagging and MT against wrod scramlbing or randdm nouse? In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 68–80, Boston, MA. Association for Machine Translation in the Americas. Marzena Karpinska, Nishant Raj, Katherine Thai, Yixiao Song, Ankita Gupta, and Mohit Iyyer. 2022. 
Demetr: Diagnosing evaluation metrics for translation. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*. Tom Kocmi, Rachel Bawden, Ondřej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Thamme Gowda, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Rebecca Knowles, Philipp Koehn, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Michal Novák, Martin Popel, Maja Popovic, and Mariya Shmatova. 2022. Findings of the 2022 conference on machine translation (WMT22). In *Proceedings of the Seventh Conference on Machine Translation*, pages 1–45, Abu Dhabi. Association for Computational Linguistics. Tom Kocmi, Christian Federmann, Roman Grundkiewicz, Marcin Junczys-Dowmunt, Hitokazu Matsushita, and Arul Menezes. 2021. To ship or not to ship: An extensive evaluation of automatic metrics for machine translation. In Proceedings of the Sixth Conference on Machine Translation, pages 478–494, Online. Association for Computational Linguistics. Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, and Hassan Sajjad. 2019. Findings of the first shared task on machine translation robustness. In *Proceedings of the* Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 91–102, Florence, Italy. Association for Computational Linguistics. Paul Michel, Xian Li, Graham Neubig, and Juan Miguel Pino. 2019. On evaluation of adversarial perturbations for sequence-to-sequence models. In North American Chapter of the Association for Computational Linguistics. Xing Niu, Prashant Mathur, Georgiana Dinu, and Yaser Al-Onaizan. 2020. Evaluating robustness to input perturbations for neural machine translation. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8538–8544, Online. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of* the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Martin Popel, Markéta Tomková, Jakub Tomek, Łukasz Kaiser, Jakob Uszkoreit, Ondrej Bojar, and Z. Žabokrtský. 2020. Transforming machine translation: a deep learning system reaches news translation quality comparable to human professionals. *Nature* Communications, 11. Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In *Proceedings of the Tenth* Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics. Ehud Reiter. 2018. A structured review of the validity of BLEU. *Computational Linguistics*, 44(3):393–401. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. 
In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics. Anders Søgaard, Miryam de Lhoneux, and Isabelle Augenstein. 2018. Nightmare at test time: How punctuation prevents parsers from generalizing. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 25–29, Brussels, Belgium. Association for Computational Linguistics. Jiao Sun, Thibault Sellam, Elizabeth Clark, Tu Vu, Timothy Dozat, Dan Garrette, Aditya Siddhant, Jacob Eisenstein, and Sebastian Gehrmann. 2022. Dialect-robust evaluation of generated text. *ArXiv*, abs/2211.00922. Samson Tan, Shafiq R. Joty, Min-Yen Kan, and Richard Socher. 2020. It's morphin' time! combating linguistic discrimination with inflectional perturbations. ArXiv, abs/2005.04364. Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *ArXiv*, abs/1706.03762. Eric Wallace, Mitchell Stern, and Dawn Song. 2020. Imitation attacks and defenses for black-box machine translation systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5531–5546, Online. Association for Computational Linguistics. Yonghui Wu, Mike Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason R. Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Gregory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. *ArXiv*, abs/1609.08144. Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with BERT. In International Conference on Learning Representations. Xinze Zhang, Junzhe Zhang, Zhenhua Chen, and Kun He. 2021. Crafting adversarial examples for neural machine translation. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1967–1977, Online. Association for Computational Linguistics. ## A Appendix A.1 Testset Sizes | Test Split | De-En | Ja-En | Uk-En | |------------------------|---------|---------|---------| | No Final Punctuation | 748 | 956 | 515 | | Final Full Stop | 1000 | 1000 | 1000 | | Final Exclamation Mark | 102 | 207 | 69 | | Final Question Mark | 283 | 284 | 287 | Table 8: Testset sizes Test set sizes for our perturbation tests are given in Table 8. Note that all punctuation insertion tests use the **No Final Punctuation** split, while the deletion tests use the respective ending punctuation splits. Random insertion and deletion both use the No Final Punctuation split. The test set sizes reflect a general imbalance in sentence-final punctuation in parallel corpora that may be causing the sensitivity in the models. In order to be able to insert or delete punctuation, we are limited to sentences which originally have no punctuation or the specific punctuation we intend to delete. 
This is a requirement unique to our extremely minimal setup, since more indiscriminate punctuation perturbations can be possibly carried out on a larger scale. For comparison, the FLORES101 dataset has 1012 sentences, and the WMT2020-2022 datasets range from 785 to 2037 sentences. Some challenge sets for metrics in the WMT Metrics Tasks (Freitag et al., 2022) included 50 sentences per phenomenon for 3 language pairs (Alves et al., 2022) and 721 sentences covering 5 error types for Zh-En (Chen et al., 2022). ## A.2 Metric Versions Metric signatures and versions used for evaluation are given in Table 9. ## A.3 More Translation Examples Table 10 shows some more examples of translation changes in response to perturbations. We see more instances of changes in sentence style, fluency, hallucination and others. | Metric | Version | |-------------------|---------------------------------------------------------------------------| | BLEU | nrefs:1|case:mixed|eff:no|tok:13a|smooth:exp|version:2.3.1 | | BLEU [Ja] | nrefs:1|case:mixed|eff:no|tok:ja-mecab-0.996-IPA|smooth:exp|version:2.3.1 | | chrF | nrefs:1|case:mixed|eff:yes|nc:6|nw:0|space:no|version:2.3.1 | | COMET | 1.1.3 wmt20-comet-da | | BERTScore [En] | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.25.1) | | BERTScore [Other] | bert-base-multilingual-cased_L9_no-idf_version=0.3.12(hug_trans=4.25.1) | | BLEURT | 0.0.2 BLEURT-20 | Table 9: Metric versions and signatures. We use the sacreBLEU (Post, 2018) implementations for BLEU and chrF, and the huggingface implementations for BLEURT and BERTScore. | # | Text | Original | Perturbed | |----------------------|------------------------------------------------------------------------|----------------------------------------------------------------------------|-----------------------------------------| | 13 | Source | 2019年初りセル催中 | 2019年初りセル催中 ! | | DeepL | 2019 New Year's Sale is underway! | 2019 First Year Sale is on now! | | | 14 | Source У нас вiйськова служба обов'язковою для всiх чоловiкiв | У нас вiйськова служба обов'язковою для всiх чоловiкiв | | | вiд 16 до 29 рокiв . | вiд 16 до 29 рокiв | | | | DeepL | In Ukraine , military service is compulsory for all men aged 16 to 29. | In our country , military service is compulsory for all men aged 16 to 29. | | | 15 | Source Яблуко вiд яблунi недалеко, як вiдомо, пада.. . | Яблуко вiд яблунi недалеко, як вiдомо, пада.. | | | Google | As you know, the apple does not fall far from the apple tree... | As you know, the apple falls far from the apple tree. | | | 16 | Source | BGM | BGM ? | | Google | Background music | BGM | | | 17 | Source | Vorwürfe gegen Trump verschärfen sich | Vorwürfe gegen Trump verschärfen sich ! | | Microsoft | Allegations against Trump intensify | Accusations against Trump are intensifying | | | 18 | Source Я розмовляю укранською, росiйською та чеською мовами | Я розмовляю укранською, росiйською та чеською мовами | | | iнтенсивно вчуся . | iнтенсивно вчуся | | | | DeepL | I speak Ukrainian, Russian and Czech and I am studying intensively. | I speak Ukrainian, Russian and Czech intensively studying . | | | 19 | Source Гришко вже пiшов у яслi але не все так просто. . . | Гришко вже пiшов у яслi але не все так просто. . . | | | Дуже сильно плаче | Дуже сильно плаче . | | | | DeepL | Grishko has already gone to the nursery, but it's not so easy ... | Hryshko has already gone to the nursery, but not everything is so simple | | | He cries a lot . . . | He cries a lot | | | | 20 | Source | 印象に残る日曜日は ? 
| 印象に残る日曜日は | | Microsoft | What Sunday left a lasting impression on you? | Memorable Sundays? | | Table 10: Examples of changes in translation caused by perturbations. Punctuation perturbations at the end of the sentence are highlighted in blue , original translations are highlighted in yellow and the changes in the translations are highlighted in red . ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations section A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 2 ✓ B1. Did you cite the creators of artifacts you used? Section 2 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 2 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix A.1 ## C ✗ **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No response. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No response. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No response. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? No response. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. 
Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
tan-etal-2023-reimagining
Reimagining Retrieval Augmented Language Models for Answering Queries
https://aclanthology.org/2023.findings-acl.382
We present a reality check on large language models and inspect the promise of retrieval-augmented language models in comparison. Such language models are semi-parametric, where models integrate model parameters and knowledge from external data sources to make their predictions, as opposed to the parametric nature of vanilla large language models. We give initial experimental findings that semi-parametric architectures can be enhanced with views, a query analyzer/planner, and provenance to make a significantly more powerful system for question answering in terms of accuracy and efficiency, and potentially for other NLP tasks.
# Reimagining Retrieval Augmented Language Models For Answering Queries [Reality Check Theme Track] Wang-Chiew Tan Yuliang Li Pedro Rodriguez Richard James* **Xi Victoria Lin Alon Halevy Scott Yih** Meta {wangchiew,yuliangli,victorialin,ayh,scottyih}@meta.com [email protected]* ## Abstract We present a reality check on large language models and inspect the promise of retrievalaugmented language models in comparison. Such language models are semi-parametric, where models integrate model parameters and knowledge from external data sources to make their predictions, as opposed to the parametric nature of vanilla large language models. We give initial experimental findings that semiparametric architectures can be enhanced with views, a query analyzer/planner, and provenance to make a significantly more powerful system for question answering in terms of accuracy and efficiency, and potentially for other NLP tasks. ## 1 Introduction As language models have grown larger (Kaplan et al., 2020; Hoffmann et al., 2022), they have fared better and better on question answering tasks (Hendrycks et al., 2021) and have become the foundation of impressive demos like ChatGPT (Ouyang et al., 2022; ChatGPT3-OpenAI). Models like GPT-3 (Brown et al., 2020) and ChatGPT generate fluent, human-like text, which comes the potential for misuse as in high-stakes healthcare settings (Dinan et al., 2021). Large language models (LLMs) also come with several significant issues (Hoffmann et al., 2022; Bender et al., 2021). LLMs are costly to train, deploy, and maintain, both financially and in terms of environmental impact (Bender et al., 2021). These models are also almost always the exclusive game of industrial companies with large budgets. Perhaps most importantly, the ability of LLMs to make predictions is not commensurate with their ability to obtain insights about their predictions. Such models can be prompted to generate false statements (Wallace et al., 2019a), often do so unprompted (Asai et al., 2022) and when combined with its ability to easily fool humans, can lead to misuse (Macaulay, 2020). In recent years, we have seen the promise of retrieval-augmented language models partially addressing the aforementioned shortcomings (Guu et al., 2020; Lewis et al., 2020; Borgeaud et al., 2021; Izacard et al., 2022; Yasunaga et al., 2022a). The architecture of such models is *semi-parametric*, where the model integrates model parameters and knowledge from external data sources to make its predictions. The first step of performing a task in these architectures is to retrieve relevant knowledge from the external sources, and then perform finer-grained reasoning. Some of the benefits these architectures offer are that the external sources can be verified and updated easily, thereby reducing hallucinations (Shuster et al., 2021a) and making it easy to incorporate new knowledge and correct existing knowledge without needing to retrain the entire model (Lewis et al., 2020). Models that follow semi-parametric architectures (SPA) are typically smaller than LLMs and they have been shown to outperform LLMs on several NLP tasks such as open domain question answering (see Table 1). Recent work that extends LLMs with modular reasoning and knowledge retrieval (Karpas et al., 2022; LangChain) is also a type of SPA. In this paper we argue that building on the core ideas of SPA, we can potentially construct much more powerful question answering systems that also provide access to multi-modal data such as image, video and tabular data. 
We describe POSTTEXT, a class of systems that extend SPA in three important ways. First, POSTTEXT allows the external data to include *views*, a concept we borrow from database systems (Garcia-Molina et al., 2008). A *view* is a function over a number of data sources, V = f(D1*, ..., D*n). In databases, SQL queries are used to define tabular views. For example, V can be a table of records of minors that is derived from a table of person records by selecting only those with age<18. In general, however, views need not be tabular. When a view is materialized | Model | #Params | Outperformed LLM's sizes | Tasks | |-------------------------------|-----------|----------------------------------|-------------------------| | REALM (Guu et al., 2020) | 330M | 11B (T5) | Open-QA | | RETRO (Borgeaud et al., 2021) | 7.5B | 178B (Jurassic-1), 280B (Gopher) | Language modeling | | Atlas (Izacard et al., 2022) | 11B | 175B (GPT-3), 540B (PaLM) | Multi-task NLU, Open-QA | | RAG (Lewis et al., 2020) | 400M | 11B (T5) | Open-QA | | FiD (Izacard and Grave, 2021) | 770M | 11B (T5), 175B (GPT-3) | Open-QA | Table 1: The sizes of SPA models with those of comparable or outperformed LLMs. (i.e., executed and stored), it may be useful for answering certain queries1 more effectively. In this paper, we adopt a more general notion of views, not limited to results of SQL queries, which can (compositionally) support a variety of user questions. Views are particularly important to support multi-modal data, because combinations of data from multiple modalities can be modeled as views. Second, POSTTEXT contains a question analyzer and planner module that decides on the best strategy to answer a question that may involve first answering multiple subquestions in sequence or in parallel. This module bears similarity to query optimization techniques in database systems but will go significantly beyond the techniques established in database systems since, there are multiple different ways to answer a natural language question, especially with the availability of multi-modal data. Finally, POSTTEXT supports computing the provenance of answers to questions. The provenanceaware answer generator module can track the evidence (training data or external sources) that is used for the answers, even if views are used as intermediate results. We illustrate the power of POSTTEXT with examples in the next section and also the overview of its architecture. In the remaining sections, we describe the different components of POSTTEXT. ## 2 Overview Of Posttext Example 1 Consider a setting where we answer questions over data that includes images of dishes and text with restaurant reviews. We can create a view that aligns these two data sets so we can answer more complex queries readily. The view, the table in the middle of Figure 1, aligns dishes with relevant reviews and the corresponding restaurants. Note that creating this view involves an intermediate step of identifying the name of the dish in an image. The view also stores the provenance links to the actual reviews from which the snippets were 1We use queries and questions interchangeably. extracted. There are also provenance links for the images and the name of the dish (not shown). This view can be used to answer questions that would be more difficult without it. 
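To make the notion of a view concrete, the following minimal sketch defines both kinds of views mentioned above in SQLite: a virtual view specified by an SQL query (the minors example) and a hand-populated stand-in for the materialized, multi-modal view of Figure 1. The table names, columns, and rows are illustrative assumptions, not the actual PostText schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A base data source: a table of person records.
cur.execute("CREATE TABLE person (name TEXT, age INTEGER)")
cur.executemany("INSERT INTO person VALUES (?, ?)",
                [("Ana", 15), ("Bob", 34), ("Chen", 17)])

# A (virtual) view V = f(person): only the records of minors.
cur.execute("CREATE VIEW minors AS SELECT name, age FROM person WHERE age < 18")

# A materialized view in the spirit of Figure 1: dishes aligned with restaurants,
# review snippets, and provenance links. In practice this table would be populated
# by an alignment pipeline (e.g., naming the dish in an image and matching it
# against review text); here it is filled by hand for illustration.
cur.execute("""CREATE TABLE dish_reviews (
    dish TEXT, restaurant TEXT, snippet TEXT, sentiment REAL, provenance TEXT)""")
cur.executemany("INSERT INTO dish_reviews VALUES (?, ?, ?, ?, ?)", [
    ("Shaking beef", "Tamarine", "The shaking beef was outstanding.", 0.9,
     "https://example.org/review/101"),
    ("Shaking beef", "Slanted Door", "Decent shaking beef, a bit salty.", 0.4,
     "https://example.org/review/202"),
])

# Views are queried exactly like tables, which is what makes aggregate questions
# about a dish (e.g., how many positive reviews mention it) cheap once the view exists.
print(cur.execute("SELECT name FROM minors").fetchall())
print(cur.execute(
    """SELECT restaurant, COUNT(*) FROM dish_reviews
       WHERE dish = 'Shaking beef' AND sentiment > 0.7
       GROUP BY restaurant""").fetchall())
```

With such a view materialized, questions that only reference the dish indirectly become much easier to answer.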
For example, if a person recalls a nice dish she had in the past but does not remember its name and is trying to figure out which restaurants serve the same dish and what are the reviews, she can pose the question, which includes both the question in text and an image of the dish. The answer states the name of the dish in question and lists restaurants with top reviews for that dish, along with images of the dish and snippets of those reviews and their provenance. Example 2 The same view can also be used to answer the question "*how many reviews raved about* Shaking beef?". The answer requires counting the number of reviews that are synonymous to very positive reviews about Shaking beef. The view surfaces the reviews associated with Shaking beef immediately and alleviates the amount of work that is required to compute the answer otherwise. The examples show that some questions can be answered more easily if they are supported by views that surface useful associations between data. In fact, indices are a type of views to accelerate lookups between an item and its attributes. In database systems, views have been used extensively to enable more efficient query answering (Halevy, 2001; Goldstein and Larson, 2001) with significant work on automatically materializing a set of indices for efficient query answering (Jindal et al., 2018; Das et al., 2019). A set of views and indices are defined automatically or manually in anticipation of a set of frequently asked queries under a budget constraint, e.g., space, so that during runtime, most of the incoming queries can be answered immediately or after applying simple operations over the views. Otherwise, the system falls back to answering the queries using the actual data sources. In other words, POSTTEXT prefers to use views to answer the questions, which will likely to be more efficient and accurate in general but otherwise, the system falls back to the traditional question answer- ![2_image_0.png](2_image_0.png) ing strategy. In addition to query answering, views have also been used to define content-based access control (Bertino and Sandhu, 2005), i.e., which parts of the data are accessible and by whom. The examples also show how provenance is provided as part of the answer. In these examples, it happened that provenance was easily determined through the provenance links that are already captured in the views. If actual data sources are accessed, the links to the data sources used (e.g., spans of text documents, parts of images, segments of videos) to derive the answer are provided as part of the answer. If the answer is generated by the language model, we trace how POSTTEXT derives the answer from parametric knowledge and retrieved data through analyzing its weights or determining "influential" parametric knowledge (Section 6) similarly to (Akyürek et al., 2022). PostText architecture POSTTEXT enhances the core architecture of semi-parametric models with three components: views, a query analyzer & planner (QAP), and a provenance-aware answer generator (PAG). In addition, all components including the "traditional" knowledge retrievers are equipped to manage both structured and unstructured data of different modalities. Figure 2 shows the architecture of POSTTEXT. Views are synthesized from different types of external data sources (e.g., text, images, videos, and tabular data), which can be public or private. 
When a question is posed in natural language (NL), the QAP module interprets and decomposes the question into subquestions whose answers can be composed to obtain an answer to the input question. QAP coordinates with the knowledge retriever to derive the data needed to answer these questions. It also coordinates with the PAG module with its plan so that provenance-aware answers can be returned. Adding these components raises interesting challenges such as what views should we construct and how do we construct and maintain these views automatically as data sources changes? What is a good plan for deriving an answer and how do we choose among alternative plans? And how do we measure the "goodness" of an answer with provenance? In the remaining sections, we describe the challenges associated with each of these components ## 3 Data Sources And Views Data Sources Most existing work on retrieval augmented language models are focused on text. More recently, (Chen et al., 2022; Yasunaga et al., 2022b; Sheynin et al., 2022) has applied SPA models on image-text and text-only corpus. The data sources in POSTTEXT are multi-modal, unstructured or structured. They can be external public data sources or private ones. Views Views are results computed (not necessarily through SQL queries) from data sources or other ![3_image_0.png](3_image_0.png) views. For example, a view can be a document involving data of different modalities (e.g., an image or a table). Views are powerful constructs for surfacing important and useful associations that are not obvious otherwise, whether they are associations from data within one data source or across multiple data sources. The table in Figure 1 is a view over restaurant reviews from Yelp, Google, and images provided by restaurants. This view makes it easier to compute the number of reviews associated with each dish in each restaurant or even across all restaurants. This view also makes it easier to determine the answer as to which dishes has more reviews than Shaking beef at Tamarine. Indexes are a special type of views. They associate an item with its attribute. Several implementations of retrieval augmented language models (Guu et al., 2020; Lewis et al., 2020; Izacard et al., 2022) already construct indices that associate a document with its nearest neighbors. Recently, GPT-index (GPT-Index, 2022) developed a set of APIs for creating data structures that can be traversed using LLMs to answer queries. The data structures are structured indexes and can be used to determine an answer to a question. Relational views are extensively used in data warehouses for optimizing queries. Indexes and views are typically created by users or database administrators or they can be automatically selected (Agrawal et al., 2000; Schnaitter et al., 2007; Jindal et al., 2018) and tuned (Agrawal et al., 2006; Bruno and Chaudhuri, 2008) to efficiently answer queries of a given workload (Das et al., 2019), which are queries that are anticipated to be frequently occurring. In typical settings, a set of views are constructed, usually under a budget constraint such as space, to maximize the queries that can be answered (either directly or through applying a few simple operators on the views) in a given workload. When a new query arrives after the views are constructed, the query optimizer determines the best plan to adopt for computing the answer. Queries are directly executed over the views if possible. Otherwise, it falls back to old strategy of answering the query with the data sources. 
For example, early last year, in anticipation of frequent queries about statistics of past World Cups due to the World Cup 2022 event at the end of the year, a set of views about the different World Cup statistics could have been constructed a priori so that most World Cup related questions can be directly answered using the views. We hypothesize that views in POSTTEXT can bring similar benefits to question answering. The right views will make it easier for the QAP module and the knowledge retriever to discover and obtain relevant data and subsequently for the answer generator to derive the right answers. Existing SPAs (Guu et al., 2020; Lewis et al., 2020; Izacard et al., 2022) are already leveraging dense-vector indices to accelerate the retrieval of document spans. In POSTTEXT with views being available, it is a natural extension to annotate each view with a description of its content (e.g., "Restaurants and highly ranked dishes"), which would make it even easier for the knowledge retriever to find the relevant data. The core challenges in developing views are how do we determine what is a "right" set of views to materialize automatically or semi-automatically? How do we incrementally maintain such views as data sources are updated? These problems are extensively studied in the database community and it will be interesting to explore those ideas that transfer to the POSTTEXT. The architecture can also be instrumented in such a way that views are the only sources of data for the knowledge retriever (i.e., actual data sources are excluded). Hence, in this case, views act as a gateway that define which parts of the data sources are accessible by the knowledge retriever to answer queries. Finer-grained access control can also be instrumented through views as described in (Bertino and Sandhu, 2005). With views, it is also possible to enable a finer-grained public-private autoregressive information retrieval privacy system (Arora et al., 2022). ## 4 Question Analyzer & Planner The question analyzer and planner (QAP) module examines the input question and generates a plan, i.e., a sequence of sub-questions whose answers can be combined to form an answer to the input question. For each subquestion in the plan, QAP first checks whether external knowledge is needed. If not, the language model can be used to derive the answer. Otherwise, the subquestion is passed to the knowledge retriever to discover and retrieve relevant data for the subquestion at hand. The results from the knowledge retriever and the plan are passed to PAG (i.e., the rightmost green box in Figure 2). It is still an open and challenging question to determine whether a language model can confidently answer a question (Kamath et al., 2020; Si et al., 2022). Any solution to this problem will help improve the plan generator. An example plan from the QAP module for our running example is as follows: (1) find the name of the dish X in the input image, (2) find restaurants that serve X, (3) find the top restaurant among the results from (2). This plan is viable because (a) there is an index associating embeddings of images with the name of the main entity of the image, (b) there exists a view as shown in Figure 1, which supports the search for restaurants that serve a particular dish. Top answers can be derived by computing the scores of the reviews or approximating it based on the sentiment of the reviews and then ranking the results based on such scores. 
The information from (2) is passed to PAG which will compute the answer along with its provenance. This plan is based on the heuristic to push selection conditions early before joining/combining different data sources if needed. The conditions in the question are "good version" and "this dish". In this case, no joins are required as the view already combines the required information in one place. Hence, QAP seeks to first find the name of the dish to narrow down the reviews restricted to this dish. Alternatively, it could also retrieve all good reviews before conditioning on the name of the dish. Yet another plan could be to match the image directly to the images of the view to find the top reviews. Or, it may decide to directly retrieve only top reviews with images similar to the image in the question from the external data sources and condition the answer based on the name of the restaurant mentioned in the reviews. In all possible plans, the knowledge retriever is responsible for discovering and retrieving the relevant data for the QAP plan. In addition to the logic that may be needed for decomposing the question into subquestions, a plan is also needed for composing the subanswers obtained to form an answer to the input question. The plan is shared with the PAG module for deriving the associated provenance. A fundamental challenge in developing the QAP module is how to derive candidate plans and decide what is the "best" plan for answering the question when there are different ways to obtain an answer. Achieving this requires understanding how to compare amongst alternative plans for deriving an answer to the question. This problem bears similarity to query evaluation techniques for database systems (e.g., (Graefe, 1993)). It will be interesting to investigate whether database query planning techniques and ideas can synergize with question understanding and planning techniques (e.g., (Wolfson et al., 2020; Dunietz et al., 2020; Zhao et al., 2021; Xiong et al., 2021) to develop a comprehensive query planner. Emerging work such as chain of thought reasoning (Wei et al., 2022), where a sequence of prompts are engineered to elicit better answers, ReAct (Yao et al., 2022), where reasoning and action techniques are applied for deriving an answer, and more recently, work that generates a plan which can call LMs for resolving subquestions (Cheng et al., 2022) are also relevant. These techniques so far are restricted to text and does not compare among different plans. Another challenge in the context of NL questions is that while there is a single correct answer to an SQL query over a database, there are potentially many different correct answers to a NL question (Si et al., 2021; Min et al., 2020; Chen et al., 2020). Hence the space of possible plans to derive the "best" answer most efficiently is even more challenging in this case. We are advocating for a system that can reason and compare at least some viable strategies to arrive at a best plan for deriving a good answer efficiently. Naturally, one can also train a LM to create a plan. Our belief is that taking a more systematic route to planning can relief the need for the amount of training data required and will also aid provenance generation through its ability to describe the steps it took and the sources of data used in each step to generate an answer. As we shall explain in Section 5, the cost and accuracy of knowledge retrievers can also play a role in determining what is a better strategy for computing a good answer. 
## 5 Knowledge Retriever The role of the knowledge retriever is to provide the information that the system lacks in order to fulfill the given task, typically at the inference time. More importantly, we envision that the knowledge retriever proposed in our framework has the ability to access knowledge stored in different sources and modalities, retrieve and integrate the relevant pieces of information, and present the output in a tabular data view. The structured output contains raw data items (e.g., text documents, images or videos) and and optionally different metadata, such as textual description of each data item. Such structured output allows downstream (neural) models to consume the retrieved knowledge efficiently and also allows developers and users to validate the provenance conveniently. Existing information retrieval models mostly focus on a single form of data. Below we first describe briefly how knowledge retrieval is done for unstructured and structured data. We then discuss the technical challenges for building a unified knowledge retriever, as well as recent research efforts towards this direction. Retrievers for unstructured data For unstructured data, such as a large collection of documents (i.e., text corpus) or images, knowledge retrieval is often reduced to a simple similarity search problem, where both queries and data in the knowledge source are represented as vectors in the same vector space (Turney and Pantel, 2010). Data points that are *close* to the query are considered as *relevant* and thus returned as the knowledge requested. Traditional information retrieval methods, whether relying on sparse vector representations, such as TFIDF (Salton et al., 1975) and BM25 (Robertson et al., 2009), or dense representations, such as LSA (Deerwester et al., 1990), DSSM (Huang et al., 2013), DPR (Karpukhin et al., 2020), are the canonical examples of this paradigm. Notice that the vector space model is not restricted to text but is also applicable to problems in other modalities, such as image tagging (Weston et al., 2011) and image retrieval (Gordo et al., 2016). Retrievers for structured data When the knowledge source is semi-structured (e.g., tables) or structured (e.g., databases), the query can be structured and allows the information need to be defined in a more precise way. Because the data is typically stored in a highly optimized management system and sometimes only accessible through a set of predefined API calls, the key technical challenge in the knowledge retriever is to formulate the information need into a formal, structured query. To map natural language questions to structured queries, semantic parsing is the key technical component for building a knowledge retriever for structured data. Some early works propose mapping the natural language questions to a generic meaning representation, which is later translated to the formal language used by the target knowledge base through ontology matching (Kwiatkowski et al., 2013; Berant et al., 2013). Others advocate that the meaning representation should be closely tight to the target formal language (Yih et al., 2015), such as SPARQL for triple stores. Because of the success of deep learning, especially the large pre-trained language models, semantic parsing has mostly been reduced to a sequence generation problem (e.g., Text-to-SQL). 
For example, RASAT (Qi et al., 2022) and PICARD (Scholak et al., 2021), which are generation models based on T5 (Raffel et al., 2020), give state-of-the-art results on benchmarks like Spider (Yu et al., 2018) and CoSQL (Yu et al., 2019). Towards a unified knowledge retriever As knowledge can exist in different forms, a unified knowledge retriever that can handle both structured and unstructured data in different modalities is more desirable. One possible solution for realizing a unified retriever is to leverage multiple single-source knowledge retrievers. When a query comes in, the QAP module first decomposes it into several smaller sub-queries, where each sub-query can be answered using one component knowledge retriever. The results from multiple knowledge retrievers can be integrated and then returned as the final output. However, several technical difficulties, including how to accurately decompose the question and how to join the retrieved results often hinder the success of this approach. Alternatively, unifying multiple sources of information in a standard representation, using text as a denominator representation, has been promoted recently (Oguz et al., 2022; Zeng et al., 2022). If all data items have a corresponding textual description, it is possible for the knowledge retriever to use only text-based retrieval techniques to find relevant data items once all input entities of non-textual modality have been mapped to their corresponding textual descriptions. Such approach circumvents the complexity of managing multiple knowledge stores in different format. Moreover, with the success of large multilingual and multi-modal language models (Conneau and Lample, 2019; Aghajanyan et al., 2022), data of different structures or from different modalities can naturally share the same representation space. While unifying multiple sources of information through representation learning seems to be a promising direction, it should be noted that certain structured information may be lost in the process. For example, by flatting a knowledge graph to sequences of (subject, predicate, object) triples, the graph structure is then buried in the textual form. Whether the information loss limits the retriever's ability to handle certain highly relational queries remains to be seen. ## 6 Provenance-Aware Answer Generators 6.1 Semi-Parametric Engine Demonstrating the provenance of a QA model prediction should center on identifying the datawhether in training data, retrieval corpora, or input—that is most influential in causing the model to make a particular prediction. For example, given the question "*who was the first U.S. president?*", the system should return the correct answer "*George* Washington" and references to training or retrieval corpora that are—to the model—causally linked to the answer. If the training or retrieval data included Washington's Wikipedia page, a typical human would expect for this to be included. However, the requirement we impose is causal and counterfactual: had the model not used that data, the prediction should change. If the prediction does not change, then from the causal perspective, there may be other data that is either more influential or duplicative (e.g., if whitehouse.gov is in the training data, it is duplicative). Next, we describe common semi-parametric models and sketch how this casually-based answer provenance could be obtained and computational challenges to overcome. 
Provided an input prompt and retrieved text, semi-parametric models like ATLAS (Izacard et al., 2022) or passing documents as prompts to GPT3 (Kasai et al., 2022) are adept at generating freetext, short answers. Likewise, parametric models with flexible input like GPT-3 can be combined with retrievers to achieve a similar goal; alternatively, transformer models can be retrofitted with layers so that passages can be integrated in embedding space (Borgeaud et al., 2021). While retrievalaugmentation is no catch-all panacea to model hallucination, it does mitigate the problem (Shuster et al., 2021b). Additionally, models' explanations can make it easier to know when to trust models and when not to (Feng and Boyd-Graber, 2022). In the case of QA models that take question plus retrieved text as input, there are several options. First, the model could provide several alternative answers which provide insight into the distribution of model outputs, rather than just a point estimate. Second, the model could provide a combination of feature-based explanations such as token saliency maps and the model's confidence in a correct answer (Wallace et al., 2019b). When combined, they can jointly influence the degree to which humans trust the model (Lai and Tan, 2019). However, to provide a complete account of model behavior, we must return to the training of model and the data used. In short, we endeavor to identify the combination of input, training data, and retrieved text that caused the model to produce the distribution of outputs (i.e., answer(s)). This is, of course, challenging due to scale of language model training data like C4 (Raffel et al., 2020) and the Pile (Gao et al., 2020) and that establishing causal—and therefore more faithful—explanations of model behavior is difficult. Training data attribution is one promising idea in this direction—it uses gradient and embedding based methods to attribute inference behavior to training data (Akyürek et al., 2022). For example, influence functions (Hampel, 1974; Han et al., 2020) and TracIn (Pruthi et al., 2020) link predictions to specific training examples, but are computationally expensive and are approximate rather than exact solutions. To firmly establish a causal connection, one could fully re-train the model without the identified training examples, but this is prohibitively expensive in practice. Future development of efficient training data attribution, combined with methods like interpretations of input plus retrieved data, is a promising direction towards more complete explanations of model predictions. ## 6.2 Tabular Engine As described at the end of Section 4, the knowledge retriever will pass on the data obtained to PAG. The QAP module will pass information about its plan to PAG. If the data obtained is tabular and a SQL query is generated, the information is passed to the tabular engine of PAG to compute the required answer(s). The recent advances in Text-toSQL (Wang et al., 2020; Zhao et al., 2022) provide a good technical foundation for generating such SQL queries. In most cases, it is not difficult to understand the correspondence between the natural language question and the SQL query that is generated. Once the SQL query is obtained, provenance can be systematically derived. 
In databases, the notion of provenance is well-studied (Cheney et al., 2009) for a large class of SQL queries; from explaining why a tuple is in the output (i.e., the set of tuples in the database that led to the answer), where a value in a tuple is copied from (i.e., which cell in the source table is the value copied from) (Buneman et al., 2001) to how that tuple was derived, which is formalized as semirings (Green et al., 2007), a polynomial that essentially describes conjunction/disjunction of records required materialize a record in the result. Database provenance has also been extended to aggregate queries (Amsterdamer et al., 2011). Since one can derive the mapping between the input question and the SQL query that is generated and also derive the provenance from the data sources based on the SQL query, it becomes possible to understand how the input question led to the answers given by POSTTEXT. Putting all together, POSTTEXT first explains that the name of the image (i.e., "a good version of this dish") referred in question is Shaking beef. It then shows the SQL query that is generated for the question "*Where can I find a good version of Shaking beef*" and the ranking function used for ranking the rows of restaurants with reviews for the dish Shaking beef. For our running example, the answer is obtained from the first row of the table in Figure 1. Specifically, the answer is summarized from the column *Dish* and *Review snippets/embeddings*. The actual snippets are found following the provenance links captured in the column *Provenance*. A more direct relationship between the summary and the actual review snippets can also be established (Carmeli et al., 2021). The success of this approach depends on how far we can push database provenance systematically as SQL queries can still be far more complex than what is investigated in past research (e.g., complex arithmetic and aggregate functions involving also negation, group filters, and functions over values of different modalities). As an alternative to executing the SQL query over the tables obtained, the tabular engine can also choose to deploy table question answering (tableQA) methods where a model directly searches the tabular data for answers based on the input question (Sun et al., 2016). Tapas (Herzig et al., 2020) and Tapex (Liu et al., 2022) are two example solutions for tableQA that formulates tableQA as sequence understanding/generation tasks. Like other recent tableQA works (Glass et al., 2021; Herzig et al., 2021), they consider the problem of computing the answer from a single input. It will be interesting to explore how to explain the results obtained using tableQA methods and how tableQA methods can be extended to handle multi-hop questions where the answer may span multiple tables or involve different types of aggregations, reasoning and modalities. ## 7 Preliminary Findings To test our hypothesis that views are valuable for answering queries, especially queries that involve counting or aggregation, we have implemented a first version of POSTTEXT2and compared it against some QA baselines. The current implementation of POSTTEXT assumes views over the underlying data are available in tabular format. The QAP module simply routes the query to a view-based engine (VBE) or a retrieval-based engine (RBE) to answer the query. VBE picks the best view and translates the natural language query into an SQLite query against the view using OpenAI's gpt-3.5-turbo/gpt-4 model. 
It then executes the SQLite query against 2PostText source code will be made available soon. VBE RBE DBChain **DBChain (no views)** ![8_image_0.png](8_image_0.png) ![8_image_1.png](8_image_1.png) ![8_image_3.png](8_image_3.png) S 3.45 2.81 3.37 2.72 M 3.79 2.69 3.28 2.61 L 3.11 2.44 2.95 1.95 Table 2: Results with GPT-3.5-turbo. Sizes of (S)mall, (M)edium, (L)arge are 1.1MB, 2.4MB, and 5.6MB respectively. VBE RBE DBChain **DBChain (no views)** ![8_image_5.png](8_image_5.png) ![8_image_6.png](8_image_6.png) ![8_image_7.png](8_image_7.png) S 3.33*2.10*2.14*1.10* M 3.55 1.93 2.35 1.51* L 3.08 2 1.97 1.11* Table 3: Results with GPT-4. *indicates that timeouts or API errors were encountered during experimentation. the view to obtain a table result which is then translated into English as the final answer. VBE also analyzes the SQLite query to compute the provenance of the answers. At present, it does so by simply retrieving all tuples that contributed to every (nested) aggregated query that is a simple (select-fromwhere-groupby-having clause) and does not handle negations. An example of the VBE process is described in Appendix B. RBE is implemented with Langchain's RetrievalQAwithSources library. It first retrieves top-k documents that are relevant for the query and then conditions its answer based on the retrieval. The answer and the ids of the retrieved documents are returned. For our experiments, we use the 42 multihop queries over 3 synthetic personal timelines of different sizes from TimelineQA's benchmark (Tan et al., 2023). The personal timelines model the daily activities (e.g., the trips made, things bought, people talked to) of a person over a period of time. We create a view around each type of activity (e.g., trips, shopping, daily_chats) for VBE. For further comparison, we also ran Langchain's SQLDatabaseChain (DBChain) to perform QA over the same VBE views. Furthermore, we ran it over timelines loosely structured as a binary relation of (date,description) pairs (called DBChain (no views)). We compared the returned answers against the ground truth answers by grading them on a scale of 1-5, with a LLM, where 5 means the returned answer has the same meaning as the ground truth answer (the grading scheme is described in the Appendix C). Our results are shown in Tables 2 and 3. Across both tables, the results on DBChain vs. DBChain(no views) reveal that adding some structure (in this case adding views) is crucial for better performance. Although the benchmark is a relatively small dataset, the scale of the timelines already reveals an impact on the accuracy across all ![8_image_2.png](8_image_2.png) ![8_image_4.png](8_image_4.png) QA systems. For DBChain, the drop in accuracy as the size increases because it sometimes relies on generating SQL queries that return all relevant records and passing all the records to the language model to compute the aggregate. When the results returned are large, which tends to be the case for larger timelines, the token limit of the LLM is often exceeded. VBE has a similar downward trend. It tends to generate queries that push the aggregates to the SQL engine and hence, avoids the issue of exceeding the token limit of the language models for many cases encountered in DBChain. Still, as the timeline gets larger, the result returned by the generated SQL query tends to be bigger and when these results are passed to the verbalization component to compose an answer in English, this may sometimes exceed the token limit of the language model. 
We also found that on a handful of cases, it so happens that the SQL query generated for L is invalid compared with those generated for the sparse dataset. The scores of RBE is relatively stable across all data densities. But overall, it tends to score lower compared with VBE and DBChain . This is because RBE relies on retrieving the top k documents from an index to condition the answers upon, regardless of the size of the timeline. However, these retrieved documents may not contain all the necessary information for answering the question in general. Even though the grading scores may not reveal this, the answers tend to be "more wrong" for aggregate queries over a larger timeline. ## 8 Conclusion POSTTEXT enhances the core ideas of semiparametric architectures with views, a query analyzer & planner, and a provenance-aware answer generator. Our initial results indicate that POSTTEXT is more effective on queries involving counting/aggregation when we provide structured views to facilitate computation. We plan to further develop and investigate POSTTEXT to automatically determine what views to construct, how does one generate plans and compare amongst plans, and how can one measure the quality of answers with provenance. ## Limitations And Ethical Considerations We point out the limitations of large language models (costly to train, deploy, maintain, hallucinate, opaque). The vision of POSTTEXT shows promise of less costly training, maintenance, and more explainability. However, no actual system is built yet to validate these claims and it is also not clear that a system with POSTTEXT architecture will be easier to deploy since it has more components. ## References Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, and Luke Zettlemoyer. 2022. CM3: A causal masked multimodal model of the internet. *CoRR*, abs/2201.07520. Sanjay Agrawal, Surajit Chaudhuri, and Vivek R. Narasayya. 2000. Automated selection of materialized views and indexes in SQL databases. In *VLDB* 2000, Proceedings of 26th International Conference on Very Large Data Bases, September 10-14, 2000, Cairo, Egypt, pages 496–505. Morgan Kaufmann. Sanjay Agrawal, Eric Chu, and Vivek Narasayya. 2006. Automatic physical design tuning: Workload as a sequence. In *Proceedings of the 2006 ACM SIGMOD* International Conference on Management of Data, SIGMOD '06, page 683–694, New York, NY, USA. Association for Computing Machinery. Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, and Kelvin Guu. 2022. Tracing knowledge in language models back to the training data. In Findings of the Association for Computational Linguistics: EMNLP. Association for Computational Linguistics. Yael Amsterdamer, Daniel Deutch, and Val Tannen. 2011. Provenance for aggregate queries. In Proceedings of the 30th ACM SIGMOD-SIGACTSIGART Symposium on Principles of Database Systems, PODS 2011, June 12-16, 2011, Athens, Greece, pages 153–164. ACM. Simran Arora, Patrick Lewis, Angela Fan, Jacob Kahn, and Christopher Ré. 2022. Reasoning over public and private data in retrieval-based systems. Akari Asai, Matt Gardner, and Hannaneh Hajishirzi. 2022. Evidentiality-guided generation for Knowledge-Intensive NLP tasks. In Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics. Emily M. Bender, Timnit Gebru, Angelina McMillanMajor, and Shmargaret Shmitchell. 2021. 
On the dangers of stochastic parrots: Can language models be too big? In *Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency*, FAccT '21, page 610–623. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy S. Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of Empirical Methods in Natural Language Processing. E. Bertino and R. Sandhu. 2005. Database security - concepts, approaches, and challenges. *IEEE Transactions on Dependable and Secure Computing*, 2(1):2– 19. Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W Rae, Erich Elsen, and Laurent Sifre. 2021. Improving language models by retrieving from trillions of tokens. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Proceedings of Advances in Neural Information Processing Systems. Curran Associates, Inc. Nicolas Bruno and Surajit Chaudhuri. 2008. Constrained physical design tuning. *Proc. VLDB Endow.*, 1(1):4–15. Peter Buneman, Sanjeev Khanna, and Wang Chiew Tan. 2001. Why and where: A characterization of data provenance. In *ICDT*, volume 1973 of *Lecture Notes* in Computer Science, pages 316–330. Nofar Carmeli, Xiaolan Wang, Yoshihiko Suhara, Stefanos Angelidis, Yuliang Li, Jinfeng Li, and WangChiew Tan. 2021. Constructing explainable opinion graphs from reviews. In *WWW '21: The Web Conference 2021, Virtual Event / Ljubljana, Slovenia*, pages 3419–3431. ACM / IW3C2. ChatGPT3-OpenAI. Chatgpt: Optimizing language models for dialogue. Anthony Chen, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2020. MOCHA: A dataset for training and evaluating generative reading comprehension metrics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6521–6532, Online. Association for Computational Linguistics. Wenhu Chen, Hexiang Hu, Xi Chen, Pat Verga, and William W. Cohen. 2022. Murag: Multimodal retrieval-augmented generator for open question answering over images and text. James Cheney, Laura Chiticariu, and Wang Chiew Tan. 2009. Provenance in databases: Why, how, and where. *Found. Trends Databases*, 1(4):379–474. Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, and Tao Yu. 2022. Binding language models in symbolic languages. Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc. Sudipto Das, Miroslav Grbic, Igor Ilic, Isidora Jovandic, Andrija Jovanovic, Vivek R. Narasayya, Miodrag Radulovic, Maja Stikic, Gaoxiang Xu, and Surajit Chaudhuri. 2019. 
Automatically indexing millions of databases in microsoft azure sql database. In *Proceedings of the 2019 International Conference on* Management of Data, SIGMOD '19, page 666–679, New York, NY, USA. Association for Computing Machinery. Scott C. Deerwester, Susan T. Dumais, Thomas K. Landauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by latent semantic analysis. *Journal of the American society for information science*, 41:391–407. Emily Dinan, Gavin Abercrombie, A Stevie Bergman, Shannon Spruit, Dirk Hovy, Y-Lan Boureau, and Verena Rieser. 2021. Anticipating safety issues in E2E conversational AI: Framework and tooling. Jesse Dunietz, Gregory Burnham, Akash Bharadwaj, Owen Rambow, Jennifer Chu-Carroll, and David Ferrucci. 2020. To test machine comprehension, start by defining comprehension. In *Proceedings of the* Association for Computational Linguistics. Shi Feng and Jordan Boyd-Graber. 2022. Learning to explain selectively: A case study on question answering. In Proceedings of Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2020. The pile: An 800GB dataset of diverse text for language modeling. Hector Garcia-Molina, Jeffrey D. Ullman, and Jennifer Widom. 2008. *Database Systems: The Complete* Book, 2 edition. Prentice Hall Press, USA. Michael R. Glass, Mustafa Canim, Alfio Gliozzo, Saneem A. Chemmengath, Vishwajeet Kumar, Rishav Chakravarti, Avi Sil, Feifei Pan, Samarth Bharadwaj, and Nicolas Rodolfo Fauceglia. 2021. Capturing row and column semantics in transformer based question answering over tables. In *NAACL-HLT*, pages 1212– 1224. Association for Computational Linguistics. Jonathan Goldstein and Per-Åke Larson. 2001. Optimizing queries using materialized views: A practical, scalable solution. In *Proceedings of the 2001 ACM* SIGMOD International Conference on Management of Data, SIGMOD '01, page 331–342, New York, NY, USA. Association for Computing Machinery. Albert Gordo, Jon Almazán, Jerome Revaud, and Diane Larlus. 2016. Deep image retrieval: Learning global representations for image search. In *Computer Vision - ECCV 2016*, pages 241–257, Cham. Springer International Publishing. ## Gpt-Index. 2022. [Link]. Goetz Graefe. 1993. Query evaluation techniques for large databases. *ACM Comput. Surv.*, 25(2):73–169. Todd J. Green, Gregory Karvounarakis, and Val Tannen. 2007. Provenance semirings. In Proceedings of the Twenty-Sixth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, June 11-13, 2007, Beijing, China, pages 31–40. ACM. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: RetrievalAugmented language model Pre-Training. In *Proceedings of the International Conference of Machine* Learning. Alon Y. Halevy. 2001. Answering queries using views: A survey. *The VLDB Journal*, 10(4):270–294. Frank R Hampel. 1974. The influence curve and its role in robust estimation. *Journal of the American* Statistical Association, 69(346):383–393. Xiaochuang Han, Byron C. Wallace, and Yulia Tsvetkov. 2020. Explaining black box predictions and unveiling data artifacts through influence functions. In Proceedings of the Association for Computational Linguistics. Association for Computational Linguistics. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. 
Measuring massive multitask language understanding. In International Conference on Learning Representations. Jonathan Herzig, Thomas Müller, Syrine Krichene, and Julian Eisenschlos. 2021. Open domain question answering over tables via dense retrieval. In *NAACLHLT*, pages 512–519. Association for Computational Linguistics. Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Martin Eisenschlos. 2020. Tapas: Weakly supervised table parsing via pre-training. In ACL, pages 4320–4333. Association for Computational Linguistics. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W Rae, Oriol Vinyals, and Laurent Sifre. 2022. Training Compute-Optimal large language models. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In *Proceedings of the 22nd ACM* International Conference on Information & Knowledge Management, CIKM '13, page 2333–2338, New York, NY, USA. Association for Computing Machinery. Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics. Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane DwivediYu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Atlas: Few-shot learning with retrieval augmented language models. Alekh Jindal, Konstantinos Karanasos, Sriram Rao, and Hiren Patel. 2018. Selecting subexpressions to materialize at datacenter scale. *Proc. VLDB Endow.*, 11(7):800–812. Amita Kamath, Robin Jia, and Percy Liang. 2020. Selective question answering under domain shift. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5684–5696. Association for Computational Linguistics. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. *CoRR*, abs/2001.08361. Ehud Karpas, Omri Abend, Yonatan Belinkov, Barak Lenz, Opher Lieber, Nir Ratner, Yoav Shoham, Hofit Bata, Yoav Levine, Kevin Leyton-Brown, Dor Muhlgay, Noam Rozen, Erez Schwartz, Gal Shachaf, Shai Shalev-Shwartz, Amnon Shashua, and Moshe Tenenholtz. 2022. Mrkl systems: A modular, neurosymbolic architecture that combines large language models, external knowledge sources and discrete reasoning. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics. Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir Radev, Noah A Smith, Yejin Choi, and Kentaro Inui. 2022. RealTime QA: What's the answer right now? arXiv [cs.CL]. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. 
Scaling semantic parsers with on-the-fly ontology matching. In *Proceedings of* the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1545–1556, Seattle, Washington, USA. Association for Computational Linguistics. Vivian Lai and Chenhao Tan. 2019. On human predictions with explanations and predictions of machine learning models: A case study on deception detection. In *Proceedings of the Conference on Fairness,* Accountability, and Transparency. Association for Computing Machinery. ## Langchain. [Link]. Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In *Proceedings of* Advances in Neural Information Processing Systems. Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, and Jian-Guang Lou. 2022. TAPEX: table pre-training via learning a neural SQL executor. In *ICLR*. OpenReview.net. Thomas Macaulay. 2020. Someone let a gpt-3 bot loose on reddit - it didn't end well. Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. AmbigQA: Answering ambiguous open-domain questions. In *Proceedings of* the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5783– 5797, Online. Association for Computational Linguistics. Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Scott Yih. 2022. UniK-QA: Unified representations of structured and unstructured knowledge for open-domain question answering. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 1535–1546, Seattle, United States. Association for Computational Linguistics. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Gray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In *Advances in Neural Information* Processing Systems. Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. 2020. Estimating training data influence by tracing gradient descent. In *Proceedings of* Advances in Neural Information Processing Systems. Curran Associates, Inc. Jiexing Qi, Jingyao Tang, Ziwei He, Xiangpeng Wan, Yu Cheng, Chenghu Zhou, Xinbing Wang, Quanshi Zhang, and Zhouhan Lin. 2022. RASAT: Integrating relational structures into pretrained seq2seq model for text-to-sql. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language* Processing. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified Text-to-Text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® *in Information Retrieval*, 3(4):333–389. G. Salton, A. Wong, and C. S. Yang. 1975. A vector space model for automatic indexing. *Commun. ACM*, 18(11):613–620. Karl Schnaitter, Serge Abiteboul, Tova Milo, and Neoklis Polyzotis. 2007. On-line index selection for shifting workloads. 
In *2007 IEEE 23rd International* Conference on Data Engineering Workshop, pages 459–468. Torsten Scholak, Nathan Schucher, and Dzmitry Bahdanau. 2021. PICARD: Parsing incrementally for constrained auto-regressive decoding from language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9895–9901, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Shelly Sheynin, Oron Ashual, Adam Polyak, Uriel Singer, Oran Gafni, Eliya Nachmani, and Yaniv Taigman. 2022. Knn-diffusion: Image generation via large-scale retrieval. Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021a. Retrieval augmentation reduces hallucination in conversation. In Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November, 2021, pages 3784– 3803. Association for Computational Linguistics. Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021b. Retrieval augmentation reduces hallucination in conversation. In *Findings* of the Association for Computational Linguistics: EMNLP. Association for Computational Linguistics. Chenglei Si, Chen Zhao, and Jordan Boyd-Graber. 2021. What's in a name? answer equivalence for opendomain question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9623–9629, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Chenglei Si, Chen Zhao, Sewon Min, and Jordan L. Boyd-Graber. 2022. Revisiting calibration for question answering. *ArXiv*, abs/2205.12507. Huan Sun, Hao Ma, Xiaodong He, Wen-tau Yih, Yu Su, and Xifeng Yan. 2016. Table cell search for question answering. In Proceedings of the 25th International Conference on World Wide Web, WWW '16, page 771–782, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee. Wang-Chiew Tan, Jane Dwivedi-Yu, Yuliang Li, Lambert Mathias, Marzieh Saeidi, Jing Nathan Yan, and Alon Y. Halevy. 2023. Timelineqa: A benchmark for question answering over timelines. In ACL (to appear). Peter D Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. *Journal of artificial intelligence research*, 37:141–188. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019a. Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Eric Wallace, Pedro Rodriguez, Shi Feng, and Jordan Boyd-Graber. 2019b. Trick me if you can: Human-inthe-loop generation of adversarial question answering examples. In *Transactions of the Association for* Computational Linguistics, pages 387–401. Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. RAT-SQL: relation-aware schema encoding and linking for textto-sql parsers. In ACL, pages 7567–7578. Association for Computational Linguistics. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed H. Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. *CoRR*, abs/2201.11903. Jason Weston, Samy Bengio, and Nicolas Usunier. 2011. Wsabie: Scaling up to large vocabulary image annotation. In *Twenty-Second International Joint Conference on Artificial Intelligence*. 
Tomer Wolfson, Mor Geva, Ankit Gupta, Matt Gardner, Yoav Goldberg, Daniel Deutch, and Jonathan Berant. 2020. Break it down: A question understanding benchmark. Transactions of the Association for Computational Linguistics, 8:183–198. Wenhan Xiong, Xiang Lorraine Li, Srini Iyer, Jingfei Du, Patrick S. H. Lewis, William Yang Wang, Yashar Mehdad, Wen-tau Yih, Sebastian Riedel, Douwe Kiela, and Barlas Oguz. 2021. Answering complex open-domain questions with multi-hop dense retrieval. In *9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021*. OpenReview.net. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2022. React: Synergizing reasoning and acting in language models. Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2022a. Retrieval-augmented multimodal language modeling. Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, and Wen-Tau Yih. 2022b. Retrieval-Augmented multimodal language modeling. Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In *Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics* and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1321–1331, Beijing, China. Association for Computational Linguistics. Tao Yu, Rui Zhang, Heyang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, Youxuan Jiang, Michihiro Yasunaga, Sungrok Shim, Tao Chen, Alexander Fabbri, Zifan Li, Luyao Chen, Yuwen Zhang, Shreya Dixit, Vincent Zhang, Caiming Xiong, Richard Socher, Walter Lasecki, and Dragomir Radev. 2019. CoSQL: A conversational text-to-SQL challenge towards crossdomain natural language interfaces to databases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1962– 1979, Hong Kong, China. Association for Computational Linguistics. Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, pages 3911–3921, Brussels, Belgium. Association for Computational Linguistics. Andy Zeng, Maria Attarian, Brian Ichter, Krzysztof Choromanski, Adrian Wong, Stefan Welker, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, and Pete Florence. 2022. Socratic models: Composing zeroshot multimodal reasoning with language. Chen Zhao, Yu Su, Adam Pauls, and Emmanouil Antonios Platanios. 2022. Bridging the generalization gap in text-to-sql parsing with schema expansion. In ACL (1), pages 5568–5578. Association for Computational Linguistics. Chen Zhao, Chenyan Xiong, Hal Daumé III, and Jordan Boyd-Graber. 2021. Multi-step reasoning over unstructured text with beam dense retrieval. In North American Association of Computational Linguistics. 
## A Appendix

## B View-Based QA

Example run of POSTTEXT with the query "When was the last time I chatted with Avery?": This query is first matched against a set of available views, and the best one is picked if there is sufficient confidence. In this case, the view daily_chat_log is selected. The query is first translated into an SQLite query:

SELECT MAX(date) FROM daily_chat_log WHERE friends LIKE '%Avery%'

The SQLite query is then cleaned and "relaxed". For example, the generated query occasionally refers to an attribute that does not exist, although this happens rarely. In this case, no cleaning is required. The conditions over TEXT types are also relaxed. We convert equality conditions (e.g., friends = 'Avery') to LIKE conditions (e.g., friends LIKE '%Avery%') and further relax LIKE conditions with a user-defined CLOSE_ENOUGH predicate (a sketch of one possible implementation is given below, after the grading scheme):

SELECT MAX(date) FROM daily_chat_log WHERE (friends LIKE '%Avery%' OR CLOSE_ENOUGH('%Avery%', friends))

The above query is executed and the results obtained are shown below. We then verbalize an answer based on the table result.

**Result:** [('2022/12/26')]

Returned answer (verbalized): The last time I chatted with Avery was on December 26, 2022.

We observe that Langchain's SQLDatabaseChain provides very similar functionality: it matches an incoming query against the available tables and generates an SQL query over the matched tables. However, SQLDatabaseChain does not clean or relax query predicates, and it requires one to specify a limit on the number of records returned. Furthermore, it does not compute the provenance of the answer obtained, as we describe in the next section. As we also described in Section 7, view-based QA generally outperforms SQLDatabaseChain because of its ability to push aggregates to the database engine instead of relying on the language model to aggregate the results (after using the database engine to compute the relevant records for answering the query).

Provenance queries: PostText generates queries to retrieve records that contributed to the answer returned above. It does so by analyzing every select-from-where-groupby-having subquery in the generated query to find tuples that contributed to every such subquery. For example, the following SQL queries are generated to compute provenance.

SELECT name FROM pragma_table_info('daily_chat_log') where pk;

q0: SELECT eid FROM daily_chat_log WHERE (friends LIKE '%Avery%' OR CLOSE_ENOUGH('%Avery%', friends))

The first query above returns the key of the table, and the second retrieves the keys of the records that contributed to the returned answer.

[('q0', ('e152',)), ('q0', ('e154',)), ('q0', ('e169',)), ('q0', ('e176',)), ...]

## C Grading Scheme

The following is the grading scheme we used to grade the answers generated by different systems against the ground truth answer:

- 5 means the system's answer has the same meaning as the TRUE answer.
- 4 means the TRUE answer can be determined from the system's answer.
- 3 means there is some overlap in the system's answer and the TRUE answer.
- 2 means there is little overlap in the system's answer and the TRUE answer.
- 1 means the system's answer is wrong, i.e., it has no relationship with the TRUE answer.
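Returning to the query relaxation described in Appendix B: the relaxed WHERE clause relies on a user-defined CLOSE_ENOUGH predicate registered with the database engine. The following is a minimal sketch of one way such a predicate could be wired into SQLite from Python; the difflib-based similarity, the 0.6 threshold, and the toy rows are illustrative assumptions, not necessarily PostText's actual implementation.

```python
import sqlite3
from difflib import SequenceMatcher

def close_enough(pattern: str, value: str, threshold: float = 0.6) -> bool:
    """Illustrative fuzzy match: strip SQL LIKE wildcards from the pattern,
    then accept the row if the normalized edit similarity exceeds a threshold."""
    needle = pattern.strip("%").lower()
    haystack = (value or "").lower()
    if needle in haystack:
        return True
    return SequenceMatcher(None, needle, haystack).ratio() >= threshold

conn = sqlite3.connect(":memory:")
# Register the predicate so the relaxed query from Appendix B can call it from SQL.
conn.create_function("CLOSE_ENOUGH", 2, lambda p, v: int(close_enough(p, v)))

conn.execute("CREATE TABLE daily_chat_log (eid TEXT PRIMARY KEY, date TEXT, friends TEXT)")
conn.executemany(
    "INSERT INTO daily_chat_log VALUES (?, ?, ?)",
    [("e152", "2022/12/20", "Avery, Sam"),
     ("e154", "2022/12/26", "Averie"),    # near-miss spelling caught by CLOSE_ENOUGH
     ("e999", "2022/12/27", "Jordan")],   # does not match either condition
)

row = conn.execute(
    "SELECT MAX(date) FROM daily_chat_log "
    "WHERE (friends LIKE '%Avery%' OR CLOSE_ENOUGH('%Avery%', friends))"
).fetchone()
print(row)  # ('2022/12/26',)
```

With the predicate registered this way, the relaxed query shown in Appendix B runs unchanged and still returns the expected maximum date even when a contact name is slightly misspelled.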
shah-etal-2023-numeric
Numeric Magnitude Comparison Effects in Large Language Models
https://aclanthology.org/2023.findings-acl.383
Large Language Models (LLMs) do not differentially represent numbers, which are pervasive in text. In contrast, neuroscience research has identified distinct neural representations for numbers and words. In this work, we investigate how well popular LLMs capture the magnitudes of numbers (e.g., that 4 < 5) from a behavioral lens. Prior research on the representational capabilities of LLMs evaluates whether they show human-level performance, for instance, high overall accuracy on standard benchmarks. Here, we ask a different question, one inspired by cognitive science: How closely do the number representations of LLMs correspond to those of human language users, who typically demonstrate the distance, size, and ratio effects? We depend on a linking hypothesis to map the similarities among the model embeddings of number words and digits to human response times. The results reveal surprisingly human-like representations across language models of different architectures, despite the absence of the neural circuitry that directly supports these representations in the human brain. This research shows the utility of understanding LLMs using behavioral benchmarks and points the way to future work on the number representations of LLMs and their cognitive plausibility.
# Numeric Magnitude Comparison Effects In Large Language Models Raj Sanjay Shah, Vijay Marupudi, Reba Koenen, Khushi Bhardwaj, Sashank Varma Georgia Institute of Technology {rajsanjayshah, vijaymarupudi, rkoenen3, khushi.bhardwaj, varma}@gatech.edu ## Abstract Large Language Models (LLMs) do not differentially represent numbers, which are pervasive in text. In contrast, neuroscience research has identified distinct neural representations for numbers and words. In this work, we investigate how well popular LLMs capture the magnitudes of numbers (e.g., that 4 < 5) from a behavioral lens. Prior research on the representational capabilities of LLMs evaluates whether they show human-level performance, for instance, high overall accuracy on standard benchmarks. Here, we ask a different question, one inspired by cognitive science: How closely do the number representations of LLMs correspond to those of human language users, who typically demonstrate the *distance*, size, and *ratio* effects? We depend on a linking hypothesis to map the similarities among the model embeddings of number words and digits to human response times. The results reveal surprisingly human-like representations across language models of different architectures, despite the absence of the neural circuitry that directly supports these representations in the human brain. This research shows the utility of understanding LLMs using behavioral benchmarks and points the way to future work on the number of representations of LLMs and their cognitive plausibility. ## 1 Introduction Humans use symbols - number words such as "three" and digits such as "3" - to quantify the world. How humans understand these symbols has been the subject of cognitive science research for half a century. The dominant theory is that people understand number symbols by mapping them to mental representations, specifically *magnitude representations* (Moyer and Landauer, 1967). This is true for both number words (e.g., "three") and digits (e.g., "3"). These magnitude representations are organized as a "mental number line" (MNL), with numbers mapped to points on the line as shown in Figure 1d. Cognitive science research has revealed that this representation is present in the minds of young children (Ansari et al., 2005) and even nonhuman primates (Nieder and Miller, 2003). Most of this research has been conducted with numbers in the range 1-9, in part, because corpus studies have shown that 0 belongs to a different distribution (Dehaene and Mehler, 1992) and, in part, because larger numbers require parsing place-value notation (Nuerk et al., 2001), a cognitive process beyond the scope of the current study. Evidence for this proposal comes from magnitude comparison tasks in which people are asked to compare two numbers (e.g., 3 vs. 7) and judge which one is greater (or lesser). Humans have consistently exhibited three effects that suggest recruitment of magnitude representations to understand numbers: the distance effect, the size effect, and the ratio effect (Moyer and Landauer, 1967; Merkley and Ansari, 2010). We review the experimental evidence for these effects, shown in Figure 1, in LLMs. Our *behavioral benchmarking* approach shifts the focus from what abilities LLMs have in an absolute sense to whether they successfully mimic human performance characteristics. This approach can help differentiate between human tendencies captured by models and the model behaviors due to training strategies. 
Thus, the current study bridges between Natural Language Processing (NLP), computational linguistics, and cognitive science. ## 1.1 Effects Of Magnitude Representations Physical quantities in the world, such as the brightness of a light or the loudness of a sound, are encoded as logarithmically scaled magnitude representations (Fechner, 1860). Research conducted with human participants and non-human species has revealed that they recruit many of the same brain regions, such as the intra-parietal sulcus, to determine the magnitude of symbolic numbers (Billock and Tsou, 2011; Nieder and Dehaene, 2009). 6147 ![1_image_0.png](1_image_0.png) Three primary magnitude representation effects have been found using the numerical comparison task in studies of humans. First, comparisons show a *distance effect*: The greater the distance |x − y| between the numbers x vs. y, the faster the comparison (Moyer and Landauer, 1967). Thus, people compare 1 vs. 9 faster than 1 vs. 2. This is shown in abstract form in Figure 1a. This effect can be explained by positing that people possess an MNL. When comparing two numbers, they first locate each number on this representation, determine which one is "to the right", and choose that number as the greater one. Thus, the farther the distance between the two points, the easier (and thus faster) the judgment. Second, comparisons show a *size effect*: Given two comparisons of the same distance (i.e., of the same value for |x − y|), the smaller the numbers, the faster the comparison (Parkman, 1971). For example, 1 vs. 2 and 8 vs. 9 both have the same distance (i.e., |x − y| = 1), but the former involves smaller numbers and is therefore the easier (i.e., faster) judgment. The size effect is depicted in abstract form in Figure 1b. This effect also references the MNL, but a modified version where the points are *logarithmically compressed*, i.e., the distance from 1 to x is proportional to log(x); see Figure 1d. To investigate if a logarithmically compressed number line is also present in LLMs, we use multidimensional scaling (Ding, 2018) on the cosine distances between number embeddings. Third, comparisons show a *ratio effect*: The time to compare two numbers x vs. y is a decreasing function of the ratio of the larger number over the smaller number, i.e., max(x,y) min(x,y) (Halberda et al., 2008). This function is nonlinear, as depicted in abstract form in Figure 1c. Here, we assume that this function is a negative exponential, though other functional forms have been proposed in the cognitive science literature. The ratio effect can also be explained by the logarithmically compressed MNL depicted in Figure 1d. These three effects - distance, size, and ratio — have been replicated numerous times in studies of human adults and children, non-human primates, and many other species (Cantlon, 2012; Cohen Kadosh et al., 2008). The MNL model in Figure 1d accounts for these effects (and many others in the mathematical cognition literature). Here, we use LLMs to evaluate a novel scientific hypothesis: that the MNL representation of the human mind is latent in the statistical structure of the linguistic environment, and thus learnable. Therefore, there is less need to posit pre-programmed neural circuitry to explain magnitude effects. ## 1.2 Llms And Behavioral Benchmarks Modern NLP models are pre-trained on large corpora of texts from diverse sources such as Wikipedia (Wikipedia contributors, 2004) and the open book corpus (Zhu et al., 2015). 
LLMs like BERT (Devlin et al., 2018), ROBERTA (Liu et al., 2019) and GPT-2 (Radford et al., 2019) learn contextual semantic vector representations of words. These models have achieved remarkable success on NLP benchmarks (Wang et al., 2018). They can perform as well as humans on a number of language tests such as semantic verification (Bhatia and Richie, 2022) and semantic disambiguation (Lake and Murphy, 2021). Most benchmarks are designed to measure the absolute performance of LLMs, with higher accuracy signaling "better" models. Human or superhuman performance is marked by exceeding certain thresholds. Here, we ask not whether LLMs can perform well or even exceed human performance at tasks, but whether they show the same *performance* characteristics as humans while accomplishing the same tasks. We call these *behavioral benchmarks*. The notion of behavioral benchmarks requires moving beyond accuracy (e.g., scores) as the dominant measure of LLM performance. As a test case, we look at the distance, size, and ratio effects as behavioral benchmarks to determine whether LLMs understand numbers as humans do, using magnitude representations. This requires a linking hypothesis to map measures of human performance to indices of model performance. Here, we map human response times on numerical comparison tasks to similarity computations on number word embeddings. ## 1.3 Research Questions The current study investigates the number representations of LLMs and their alignment with the human MNL. It addresses five research questions: 1. Which LLMs, if any, capture the distance, size, and ratio effects exhibited by humans? 2. How do different layers of LLMs vary in exhibiting these effects? 3. How do model behaviors change when using larger variants (more parameters) of the same architecture? 4. Do the models show implicit numeration ("four" = "4"), i.e., do they exhibit these effects equally for all number symbol types or more for some types (e.g., digits) than others (e.g., number words)? 5. Is the MNL representation depicted in Figure 1d latent in the representations of the models? ## 2 Related Work Research on the numerical abilities of LLMs focuses on several aspects of mathematical reasoning (Thawani et al., 2021), such as magnitude comparison, numeration (Naik et al., 2019; Wallace et al., 2019), arithmetic word problems (Burns et al., 2021; Amini et al., 2019), exact facts (Lin et al., 2020), and measurement estimation (Zhang et al., 2020). The goal is to improve performance on application-driven tasks that require numerical skills. Research in this area typically attempts to (1) understand the numerical capabilities of pretrained models and (2) propose new architectures that improve numerical cognition abilities (Geva et al., 2020; Dua et al., 2019). Our work also focuses on the first research direction: probing the numerical capabilities of pretrained models. Prior research by Wallace et al. (2019) judges the numerical reasoning of various contextual and non-contextual models using different tests (e.g., finding the maximum number in a list, finding the sum of two numbers from their word embeddings, decoding the original number from its embedding). These tasks have been presented as evaluation criteria for understanding the numerical capabilities of models. Spithourakis and Riedel (2018) change model architectures to treat numbers as distinct from words. 
Using perplexity score as a proxy for numerical abilities, they argue that this ability reduces model perplexity in neural machine translation tasks. Other work focuses on finding numerical capabilities through building QA benchmarks for performing discrete reasoning (Dua et al., 2019). Most research in this direction casts different tasks as proxies of numerical abilities of NLP systems (Weiss et al., 2018; Dua et al., 2019; Spithourakis and Riedel, 2018; Wallace et al., 2019; Burns et al., 2021; Amini et al., 2019). An alternative approach by Naik et al. (2019) tests multiple non-contextual task-agnostic embedding generation techniques to identify the failures in models' abilities to capture the magnitude and numeration effects of numbers. Using a systematic foundation in cognitive science research, we build upon their work in two ways: we (1) use contextual embeddings spanning a wide variety of pre-training strategies, and (2) evaluate models by comparing their behavior to humans. Our work looks at numbers in an abstract sense, and is relevant for the grounding problem studied in artificial intelligence and cognitive science (Harnad, 2023). ## 3 Experimental Design The literature lacks adequate experimental studies demonstrating magnitude representations of num- | Model | Category | Size | | |------------------------------|-------------------------|--------|------| | Base | Large | | | | BERT (Devlin et al., 2018) | Encoder | 110M | 340M | | RoBERTA (Liu et al., 2019) | Encoder | 125M | 355M | | XLNET (Yang et al., 2019) | Auto-regressive Encoder | 110M | 340M | | GPT-2 (Radford et al., 2019) | Auto-regressive Decoder | 117M | 345M | | T5 (Raffel et al., 2019) | Encoder | 110M | 335M | | BART (Lewis et al., 2020) | Encoder-Decoder | 140M | 406M | Table 1: Popular Language Models bers in LLMs from a cognitive science perspective. The current study addresses this gap. We propose a general methodology for mapping human response times to similarities computed over LLM embeddings. We test for the three primary magnitude representation effects described in section 1.1. ## 3.1 Linking Hypothesis In studies with human participants, the distance, size, and ratio effects are measured using reaction time. Each effect depends on the assumption that when comparing which of two numbers x and y is relatively easy, humans are relatively fast, and when it is relatively difficult, they are relatively slow. The ease or difficulty of the comparison is a function of x and y: |x − y| for the distance effect, min(*x, y*) for the size effect, and max(x,y) min(x,y) for the ratio effect. LLMs do not naturally make reaction time predictions. Thus, we require a *linking hypothesis* to estimate the relative ease or difficulty of comparisons for LLMs. Here we adopt the simple assumption that *the greater the similarity of two* number representations in an LLM, the longer it takes to discriminate them, i.e., to judge which one is greater (or lesser). We calculate the *similarity* of two numbers based on the similarity of their vector representations. Specifically, the representation of a number for a given layer of a given model is the vector of activation across its units. There are many similarity metrics for vector representations (Wang and Dong, 2020): Manhattan, Euclidean, cosine, dot product, etc. Here, we choose a standard metric in distributional semantics: the cosine of the angle between the vectors (Richie and Bhatia, 2021). 
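To make this linking hypothesis concrete, the sketch below extracts hidden-state vectors for the digits 1-9 from one of the models in Table 1 and computes their pairwise cosine similarities. It assumes the Hugging Face transformers and scikit-learn libraries; the checkpoint, the probed layer, and the variable names are illustrative choices rather than the exact experimental pipeline.

```python
import torch
from sklearn.metrics.pairwise import cosine_similarity
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"           # illustrative; any model in Table 1 could be swapped in
LAYER = 1                                  # hidden layer to probe
numbers = [str(n) for n in range(1, 10)]   # digit format; "one" .. "nine" works the same way

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

vectors = []
with torch.no_grad():
    for num in numbers:
        enc = tokenizer(num, return_tensors="pt")
        hidden = model(**enc).hidden_states[LAYER]   # shape: (1, seq_len, dim)
        # For BERT the single number token sits between [CLS] and [SEP],
        # i.e., at position 1; the special tokens are ignored, as in the paper.
        vectors.append(hidden[0, 1].numpy())

sims = cosine_similarity(vectors)   # 9 x 9 matrix; sims[i, j] stands in for the 1 vs. 9 comparison times
print(sims.round(3))
```

The resulting matrix plays the role that response times play in the human experiments: higher similarity is taken to mean a harder, slower comparison.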
This reasoning connects an index of model function (i.e., the similarity of the vector representations of two numbers) to a human behavioral measure (i.e., reaction time). Thus, the more similar the two representations are, the less discriminable they are from each other, and thus the longer the reaction time to select one over the other. ## 3.2 Materials For these experiments, we utilized three formats for number representations in LLMs: lowercase number words, mixed-cased number words (i.e., the first letter is capitalized), and digits. These formats enable us to explore variations in input tokens and understand numeration in models. Below are examples of the three input types: - "one", "two", "three", "four" ... "nine" - "One", "Two", "Three", "Four" ... "Nine" - "1", "2", "3", "4" ... "9" As noted in the Introduction, prior studies of the distance, size and ratio effects in humans have largely focused on numbers ranging from 1 to 9. Our input types are not-affected by tokenization methods as the models under consideration have each input as a separate token. ## 3.3 Large Language Models - Design Choices Modern NLP models are pre-trained on a large amount of unlabeled textual data from a diverse set of sources. This enables LLMs to learn contextually semantic vector representations of words. We experiment on these vectors to evaluate how one specific dimension of human knowledge - number sense - is captured in different model architectures. We use popular large language models from Huggingface's Transformers library (Wolf et al., 2020) to obtain vector representations of numbers in different formats. Following the work by Min et al. (2021) to determine popular model architectures, we select models from three classes of architectural design: encoder models (e.g., BERT (Devlin et al., 2018)), auto-regressive models (e.g., GPT-2 (Radford et al., 2019)), and encoder-decoder models (e.g., T5 (Raffel et al., 2019)). The final list of models is provided in Table 1. Operationalization: We investigate the three number magnitude effects as captured in the representations of each layer of the six models for the three number formats. For these experiments, we consider only the obtained hidden layer outputs for the tokens corresponding to the input number word tokens. We ignore the special prefix and suffix tokens of models (e.g., the [cls] token in BERT) for uniformity among different architectures. For the T5-base model, we use only the encoder to obtain model embedding. All models tested use a similar number of model parameters (around 110-140 million parameters). For our studies, we arbitrarily choose the more popular BERT uncased variant as opposed to the cased version. We compare the two models in Appendix section A.2 for a complete analysis, showing similar behaviors in the variants. Model size variations for the same architecture are considered in the Appendix section A.1 to show the impact of model size on the three effects. ## 4 Magnitude Representation Effects In Llms 4.1 The Distance Effect Layer T5 BART RoB XLNET BERT GPT-2 Avg. 
1 0.974 0.965 0.954 0.967 0.979 0.937 0.963 2 0.984 0.959 0.959 0.951 0.983 0.940 0.963 3 0.973 0.957 0.961 0.960 0.955 0.937 0.957 4 0.956 0.964 0.977 0.962 0.956 0.923 0.957 5 0.941 0.951 0.976 0.948 0.982 0.931 0.955 6 0.972 0.916 0.966 0.942 0.991 0.932 0.953 7 0.967 0.960 0.967 0.943 0.990 0.930 0.959 8 0.945 0.969 0.954 0.923 0.977 0.931 0.950 9 0.950 0.978 0.945 0.920 0.967 0.929 0.948 10 0.933 0.958 0.928 0.926 0.923 0.931 0.933 11 0.924 0.975 0.968 0.951 0.926 0.930 0.946 12 0.920 0.956 0.854 0.934 0.890 0.931 0.914 LLMs\Input LC MC Digits Avg. T5 0.986 0.937 0.936 0.953 BART 0.942 0.951 0.983 0.959 RoBERTa 0.945 0.943 0.964 0.951 XLNET 0.888 0.965 0.979 0.944 BERT (uncased) 0.976 0.944 0.960 GPT-2 0.906 0.904 0.986 0.932 Total Averages across models 0.941 0.946 **0.965** 0.950 Recall that the distance effect is that people are slower (i.e., find it more difficult) to compare numbers the closer they are to each other on the MNL. We use the pipeline depicted in Figure 1 to investigate if LLM representations are more similar to each other if the numbers are closer on the MNL. Evaluation of the distance effect in LLMs is done by fitting a straight line (a + bx) on the cosine similarity vs. distance plot. We first perform two operations on these cosine similarities: (1) We average the similarities across each distance (e.g., the point at distance 1 on the x-axis represents the average similarity of 1 vs. 2, 2 vs. 3, ..., 8 vs. 9). (2) We normalize the similarities to be in the range [0, 1]. These decisions allow relative output comparisons across different model architectures, which is not possible using the raw cosine similarities of each LLM. To illustrate model performance, the distance effects for the best-performing layer in terms of R2 values for BART are shown in Figure 2 for the three number formats. The high R2 values indicate a human-like distance effect. All of the models show strong distance effects for all layers, as shown in Table 2, and for all number formats, as shown in Table 3. Interestingly, LLMs are less likely to reveal the distance effect as layer count increases (Table 2). For example, layer one results in the strongest distance effect while layer twelve is the least representative of the distance effect. With respect to number format, passing *digits* as inputs tended to produce stronger distance effects than passing number words (Table 3); this pattern was present for four of the six LLMs (i.e., all but T5 and BERT). ## 4.2 The Size Effect The size effect holds for comparisons of the same distance (e.g., for a distance of 1, these include 1 vs. 2, 2 vs. 3, ..., 8 vs. 9). Among these comparisons, those involving larger numbers (e.g., 8 vs. 9) are made more slowly (i.e., people find them more difficult) than those involving smaller numbers (e.g., 1 vs. 2). That larger numbers are harder to differentiate than smaller numbers aligns with the logarithmically compressed MNL depicted in | Layer | T5 | BART | RoB | XLNET BERT GPT-2 | Avg. 
| | | |---------|-------|--------|-------|--------------------|--------|-------|-------| | 1 | 0.756 | 0.651 | 0.494 | 0.602 | 0.617 | 0.466 | 0.597 | | 2 | 0.685 | 0.637 | 0.507 | 0.551 | 0.783 | 0.653 | 0.636 | | 3 | 0.744 | 0.697 | 0.503 | 0.492 | 0.834 | 0.574 | 0.641 | | 4 | 0.726 | 0.677 | 0.519 | 0.493 | 0.871 | 0.478 | 0.627 | | 5 | 0.665 | 0.685 | 0.610 | 0.54 | 0.783 | 0.528 | 0.635 | | 6 | 0.670 | 0.692 | 0.586 | 0.563 | 0.757 | 0.539 | 0.635 | | 7 | 0.701 | 0.634 | 0.613 | 0.585 | 0.823 | 0.539 | 0.649 | | 8 | 0.705 | 0.687 | 0.567 | 0.591 | 0.870 | 0.532 | 0.659 | | 9 | 0.697 | 0.757 | 0.581 | 0.566 | 0.877 | 0.541 | 0.670 | | 10 | 0.727 | 0.694 | 0.622 | 0.555 | 0.905 | 0.533 | 0.672 | | 11 | 0.729 | 0.756 | 0.734 | 0.602 | 0.911 | 0.547 | 0.713 | | 12 | 0.703 | 0.702 | 0.744 | 0.662 | 0.889 | 0.550 | 0.708 | ![5_image_0.png](5_image_0.png) ![5_image_1.png](5_image_1.png) ![5_image_2.png](5_image_2.png) Figure 1d. This study evaluates whether a given LLM shows a size effect on a given layer for numbers of a given format by plotting the normalized cosine similarities against the size of the comparison, defined as the minimum of the two numbers being compared. For each minimum value (points on the x-axis), we average the similarities for all comparisons to form a single point (vertical compression). We then fit a straight line (ax + b) over the vertically compressed averages (blue line in Figure 3) to obtain the R2 values (scores). To illustrate model performance, the size effects for the best-performing layer of the BERT-uncased model (in terms of R2 values) are shown in Figure 3. Similar to the results for the distance effect, the high R2 values indicate a human-like size effect. Interestingly, Table 4 generally shows an increasing trend in the layer-wise capability of capturing the size effect across the six LLMs. This is opposite to the trend observed across layers for the distance effect. Table 5 shows that using digits as the input values yields significantly better R2 values than the other number formats. In fact, this is the only number format for which the models produce strong size effects. However, the vertical compression of points fails to capture the spread of points across the y-axis for each point on the x-axis. This spread, a limitation of the size effect analysis, is captured in the ratio effect (section 4.3). LLMs\Input LC MC Digits Avg. T5 0.702 0.539 0.886 0.709 BART 0.614 0.568 0.885 0.689 RoBERTa 0.520 0.466 0.783 0.59 XLNET 0.500 0.408 0.793 0.567 BERT (uncased) 0.803 0.851 0.827 GPT-2 0.434 0.332 0.853 0.54 Total Averages across models 0.596 0.519 **0.842** 0.654 | LLMs\Input | LC | MC | Digits | Avg. | |------------------------------|-------|-------|----------|--------| | T5 | 0.852 | 0.756 | 0.868 | 0.826 | | BART | 0.786 | 0.833 | 0.897 | 0.838 | | RoBERTa | 0.714 | 0.747 | 0.746 | 0.736 | | XLNET | 0.729 | 0.761 | 0.901 | 0.797 | | BERT (uncased) | 0.906 | 0.757 | 0.831 | | | GPT-2 | 0.686 | 0.758 | 0.681 | 0.709 | | Total Averages across models | 0.779 | 0.793 | 0.808 | 0.789 | Layer T5 BART RoB XLNET BERT GPT-2 Avg. 
1 0.850 0.820 0.756 0.868 0.837 0.735 0.811 2 0.865 0.837 0.745 0.828 0.878 0.755 0.819 3 0.846 0.861 0.725 0.820 0.853 0.738 0.807 4 0.847 0.859 0.739 0.822 0.820 0.659 0.791 5 0.851 0.847 0.805 0.825 0.847 0.695 0.812 6 0.880 0.821 0.800 0.816 0.883 0.703 0.817 7 0.867 0.811 0.795 0.810 0.883 0.698 0.811 8 0.824 0.849 0.780 0.780 0.880 0.702 0.803 9 0.806 0.852 0.780 0.746 0.861 0.705 0.791 10 0.785 0.821 0.720 0.754 0.779 0.704 0.760 11 0.755 0.849 0.666 0.781 0.769 0.702 0.754 12 0.731 0.834 0.516 0.717 0.687 0.708 0.699 The ratio effect in humans can be thought of as simultaneously capturing both the distance and size effects. Behaviorally, the time to compare x vs. y is a decreasing function of the ratio of the larger number over the smaller number, i.e., of max(x,y) min(x,y) . In fact, the function is nonlinear as depicted in Figure 1c. For the LLMs, we plot the normalized cosine similarity vs. max(x,y) min(x,y) . To each plot, we fit the negative exponential function a ∗ e−bx + c and 6152 $##%$!" ! ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) ![6_image_2.png](6_image_2.png) evaluate the resulting R2. To illustrate model performance, Figure 4 shows the ratio effects for the best-fitting layer of the BART model for the three number formats. As observed with the distance and size effect, the high R2 values of the LLMs indicate a human-like ratio effect in the models. ## 4.4 Multidimensional Scaling Along with the three magnitude effects, we also investigate whether the number representations of LLMs are consistent with the human MNL. To do so, we utilize multidimensional scaling (Borg and Groenen, 2005; Ding, 2018). MDS offers a method for recovering the latent structure in the matrix of cosine (dis)similarities between the vector representations of all pairs of numbers (for a given LLM, layer, and number format). It arranges each number in a space of N dimensions such that the Layer T5 BART RoB XLNET BERT GPT-2 Avg. 1 0.686 0.679 0.602 0.595 0.739 0.526 0.638 2 0.271 0.693 0.763 0.734 0.704 0.669 0.639 3 0.374 0.657 0.772 0.704 0.456 0.685 0.608 4 0.385 0.728 0.489 0.621 0.425 0.663 0.552 5 0.476 0.733 0.597 0.707 0.448 0.615 0.596 6 0.540 0.739 0.571 0.598 0.465 0.608 0.587 7 0.687 0.696 0.250 0.677 0.445 0.665 0.570 8 0.529 0.624 0.594 0.591 0.189 0.624 0.525 9 0.544 0.718 0.691 0.566 0.400 0.671 0.598 10 0.502 0.624 0.697 0.563 0.394 0.613 0.566 11 0.195 0.708 0.602 0.543 -0.013 0.675 0.451 12 0.509 0.677 0.186 0.557 -0.239 0.615 0.384 Number T5 BART RoB XLNET BERT GPT-2 Avg. 1 0.01 0.00 0.02 0.00 0.02 0.00 0.01 2 0.10 0.17 0.15 0.17 0.09 0.12 **0.13** 3 0.07 0.05 0.07 0.10 0.06 0.10 0.07 4 0.05 0.04 0.05 0.05 0.03 0.05 0.04 5 0.17 0.09 0.07 0.05 0.20 0.05 **0.11** 6 0.02 0.04 0.08 0.02 0.06 0.04 0.04 7 0.09 0.08 0.11 0.04 0.20 0.06 **0.10** 8 0.04 0.01 0.08 0.01 0.09 0.05 0.05 9 0.40 0.08 0.17 0.18 0.44 0.17 **0.24** | LLMs\Input | LC | MC | Digits | Avg. | |------------------------------|-------|-------|----------|--------| | T5 | 0.489 | 0.526 | 0.410 | 0.475 | | BART | 0.676 | 0.714 | 0.678 | 0.690 | | RoBERTa | 0.520 | 0.597 | 0.587 | 0.568 | | XLNET | 0.622 | 0.620 | 0.622 | 0.621 | | BERT (uncased) | 0.312 | 0.423 | 0.368 | | | GPT-2 | 0.566 | 0.513 | 0.828 | 0.636 | | Total Averages across models | 0.531 | 0.547 | 0.591 | 0.560 | distance between each pair of points is consistent with the cosine dissimilarity between their vector representations. We fix N = 1 to recover the latent MNL representation for each LLM, layer, and number format. 
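A minimal sketch of this N = 1 step is given below, assuming scikit-learn's MDS on a precomputed 9x9 cosine-dissimilarity matrix (e.g., 1 - sims from the extraction sketch above); the anchoring heuristic and the toy input are illustrative choices rather than the exact procedure.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.manifold import MDS

def latent_number_line(dissim: np.ndarray) -> np.ndarray:
    """Recover a one-dimensional (N = 1) MDS arrangement of the numbers 1-9
    from their 9x9 matrix of pairwise cosine dissimilarities."""
    mds = MDS(n_components=1, dissimilarity="precomputed", random_state=0)
    positions = mds.fit_transform(dissim).ravel()
    # Anchor "1" on the left: flip the axis if "1" landed on the right half,
    # then shift so the recovered line starts at 0.
    if positions[0] > positions.mean():
        positions = -positions
    return positions - positions.min()

# Toy check with a perfectly log-compressed input: the recovered line should
# correlate almost perfectly with log(1) .. log(9).
log_line = np.log(np.arange(1, 10))
dissim = np.abs(log_line[:, None] - log_line[None, :])
line = latent_number_line(dissim)
r, _ = pearsonr(line, log_line)
print(round(r, 3))   # expected to be close to 1.0
```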
For each solution, we anchor the point for "1" to the left side and evaluate whether the resulting visualization approximates the log compressed MNL as shown in Figure 1d. To quantify this approximation, we calculate the correlation between the positions of the numbers 1 to 9 in the MDS solution and the expected values (log(1) to log (9)) of the human MNL; see Table 8. All inputs have similar correlation values. Surprisingly, GPT-2 with digits as the number format (and averaged across all layers) shows a considerably higher correlation with the log-compressed MNL than all other models and number formats. The average correlation between latent model number lines and the log compressed MNL decreases over the 12 layers; see Table 9. We visualize the latent number line of GPT-2 by averaging the cosine dissimilarity matrix across layers and number formats, submitting this to MDS, and requesting a one-dimensional solution; see Figure 5. This representation shows some evidence of log compression, though with a few exceptions. One obvious exception is the right displacement of 2 away from 1. Another is the right displacement of 9 very far from 8. To better understand if this is a statistical artifact of GPT-2 or a more general difference between number understanding in humans versus LLMs, we perform a residual analysis comparing positions on the model's number line to those on the human MNL. We choose the digits number format, estimate the latent number line representation averaged across the layers of each model, and compute the residual between the position of each number in this representation compared to the human MNL. This analysis is presented in Table 10. For 1, all models show a residual value of less than 0.03. This makes sense given our decision to anchor the latent number lines to 1 on the left side. The largest residuals are for 2 and 9, consistent with the anomalies noticed for the GPT-2 solution in Figure 5. These anomalies are a target for future research. We note here that 2 is often privileged even in languages such as Piraha and Mundurucu that have very limited number of word inventories(Gordon, 2004; Pica et al., 2004). Further note that 9 has special significance as a "bargain price numeral" in many cultures, a fact that is often linguistically marked (Pollmann and Jansen, 1996). ![7_image_0.png](7_image_0.png) ## 4.5 Ablation Studies: Base Vs Large Model Variants We investigate changes in model behaviors when increasing the number of parameters for the same architectures. We use the larger variants of each of the LLMs listed in Table 1. The detailed tabular results of the behaviors are presented in Appendix section A.1; see Tables 11, 12, and 13. Here, we summarize key takeaways from the ablation studies: - The distance and ratio effects of the large variants of models *align with human performance* characteristics. Similar to the results for the base variants, the size effect is only observed when the input type is digits. - We observe the *same decreasing trend* in the layer-wise capability of capturing the distance effect, ratio effect, and the MDS correlation values in the Large variants of LLMs as observed in the base variants. The increasing trend in the layer-wise capability of the size effect is not observed in the Larger LLMs. - Residual analysis shows high deviation for the numbers "2", "5", and "9"; which is *in line* with our observations for the base variations. 
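Putting Sections 4.1-4.3 together, the sketch below shows how the three effect scores could be computed from a 9x9 similarity matrix such as sims from the extraction sketch above. The helper names, the min-max normalization, and the fitting settings follow the descriptions in the text but are illustrative rather than the released code.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def effect_scores(sims: np.ndarray) -> dict:
    """Distance, size, and ratio effect scores for the numbers 1-9,
    computed from a 9x9 similarity matrix (pairs with x < y only)."""
    pairs = [(x, y) for x in range(1, 10) for y in range(x + 1, 10)]
    s = np.array([sims[x - 1, y - 1] for x, y in pairs], dtype=float)
    s = (s - s.min()) / (s.max() - s.min())      # normalize similarities to [0, 1]

    def averaged(xvals):
        # "Vertical compression": average similarity for each x value.
        xs = sorted(set(xvals))
        return (np.array(xs, dtype=float),
                np.array([s[np.asarray(xvals) == x].mean() for x in xs]))

    dist = [y - x for x, y in pairs]             # |x - y|
    size = [x for x, _ in pairs]                 # min(x, y), since x < y
    ratio = np.array([y / x for x, y in pairs])  # max(x, y) / min(x, y)

    dx, dy = averaged(dist)
    sx, sy = averaged(size)

    def neg_exp(x, a, b, c):                     # a * exp(-b * x) + c
        return a * np.exp(-b * x) + c

    popt, _ = curve_fit(neg_exp, ratio, s, p0=(1.0, 1.0, 0.0), maxfev=10000)
    resid = s - neg_exp(ratio, *popt)
    ratio_r2 = 1.0 - resid.var() / s.var()

    return {
        "distance_R2": linregress(dx, dy).rvalue ** 2,
        "size_R2": linregress(sx, sy).rvalue ** 2,
        "ratio_R2": ratio_r2,
    }

# scores = effect_scores(sims)   # e.g., with the 9x9 similarity matrix from the extraction sketch
```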
## 5 Conclusion This paper investigates the performance characteristics in various LLMs across numerous configurations, looking for three number-magnitude comparison effects: distance, size, and ratio. Our results show that LLMs show human-like distance and ratio effects across number formats. The size effect is also observed among models for the digit number format, but not for the other number formats, showing that LLMs do not completely capture numeration. Using MDS to scale down the pairwise (dis)similarities between number representations produces varying correspondences between LLMs and the logarithmically compressed MNL of humans, with GPT-2 showing the highest correlation (using digits as inputs). Our residual analysis exhibits high deviation from expected outputs for the numbers 2, 5, 9 which we explain through patterns observed in previous linguistics studies. The behavioral benchmarking of the numeric magnitude representations of LLMs presented here helps us understand the cognitive plausibility of the representations the models learn. Our results show that LLM pre-training allows models to approximately learn human-like behaviors for two out of the three magnitude effects without the need to posit explicit neural circuitry. Future work on building pre-trained architectures to improve numerical cognition abilities should also be evaluated using these three effects. ## 6 Limitations Limitations to our work are as follows: (1) We only study the three magnitude effects for the number word and digit denotations of the numbers 1 to 9. The effects for the number 0, numbers greater than 10, decimal numbers, negative numbers, etc. are beyond the scope of this study. Future work can design behavioral benchmark for evaluating whether LLMs shows these effects for these other number classes. (2) The mapping of LLM behaviors to human behaviors and effects might vary for each effect. Thus, we might require a different linking hypothesis for each such effect. (3) We only use the models built for English tasks and do not evaluate multi-lingual models. (4) We report and analyze aggregated scores across different dimensions. There can be some information loss in this aggregation. (5) Our choice of models is limited by certain resource constraints. Future works can explore the use of other foundation / super-large models (1B parameters +) and API-based models like GPT3 and OPT3. (6) The behavioral analysis of this study is one-way: we look for human performance characteristics and behaviors in LLMs. Future research can utilize LLMs to discover new numerical effects and look for the corresponding performance characteristics in humans. This could spur new research in cognitive science. (7) The results show similar outputs to low dimensional human output and show that we do not need explicit neural circuitry for number understanding. We do not suggest models actually are humanlike in how they process numbers. ## References Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2357–2367, Minneapolis, Minnesota. Association for Computational Linguistics. Daniel Ansari, Nicolas Garcia, Elizabeth Lucas, Kathleen Hamon, and Bibek Dhital. 2005. 
Neural correlates of symbolic number processing in children and adults. *Neuroreport*, 16(16):1769–1773. Sudeep Bhatia and Russell Richie. 2022. Transformer networks of human conceptual knowledge. *Psychological Review*, pages No Pagination Specified–No Pagination Specified. Vincent A. Billock and Brian H. Tsou. 2011. To honor Fechner and obey Stevens: Relationships between psychophysical and neural nonlinearities. *Psychological Bulletin*, 137:1–18. I. Borg and P.J.F. Groenen. 2005. *Modern Multidimensional Scaling: Theory and Applications*. Springer. Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the MATH dataset. *CoRR*, abs/2103.03874. Jessica F. Cantlon. 2012. Math, monkeys, and the developing brain. Proceedings of the National Academy of Sciences, 109(supplement_1):10725– 10732. Roi Cohen Kadosh, Jan Lammertyn, and Veronique Izard. 2008. Are numbers special? An overview of chronometric, neuroimaging, developmental and comparative studies of magnitude representation. Progress in Neurobiology, 84(2):132–147. Stanislas Dehaene and Jacques Mehler. 1992. Crosslinguistic regularities in the frequency of number words. *Cognition*, 43(1):1–29. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Cody S. Ding. 2018. *Fundamentals of Applied Multidimensional Scaling for Educational and Psychological Research*. Springer International Publishing. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368–2378, Minneapolis, Minnesota. Association for Computational Linguistics. Gustav Theodor Fechner. 1860. Elements of psychophysics. 1. Mor Geva, Ankit Gupta, and Jonathan Berant. 2020. Injecting numerical reasoning skills into language models. *CoRR*, abs/2004.04487. Peter Gordon. 2004. Numerical cognition without words: Evidence from amazonia. *Science*, 306(5695):496–499. Justin Halberda, Michèle M. M. Mazzocco, and Lisa Feigenson. 2008. Individual differences in nonverbal number acuity correlate with maths achievement. *Nature*, 455(7213):665–668. Stevan Harnad. 2023. Symbol grounding problem. Brenden M. Lake and Gregory L. Murphy. 2021. Word meaning in minds and machines. *Psychological review*. M. Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. *ArXiv*, abs/1910.13461. Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, and Xiang Ren. 2020. Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-Trained Language Models. In *Proceedings of* the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6862– 6868, Online. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692. 
Rebecca Merkley and Daniel Ansari. 2010. Using eye tracking to study numerical cognition: The case of the ratio effect. *Experimental Brain Research*, 206(4):455–460. Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, and Dan Roth. 2021. Recent advances in natural language processing via large pre-trained language models: A survey. CoRR, abs/2111.01243. Robert S. Moyer and Thomas K. Landauer. 1967. Time required for judgements of numerical inequality. *Nature*, 215(5109):1519–1520. Aakanksha Naik, Abhilasha Ravichander, Carolyn Rose, and Eduard Hovy. 2019. Exploring numeracy in word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3374–3380, Florence, Italy. Association for Computational Linguistics. Andreas Nieder and Stanislas Dehaene. 2009. Representation of Number in the Brain. *Annual Review of* Neuroscience, 32(1):185–208. Andreas Nieder and Earl K. Miller. 2003. Coding of Cognitive Magnitude: Compressed Scaling of Numerical Information in the Primate Prefrontal Cortex. Neuron, 37(1):149–157. Hans-Christoph Nuerk, Ulrich Weger, and Klaus Willmes. 2001. Decade breaks in the mental number line? Putting the tens and units back in different bins. *Cognition*, 82(1):B25–B33. John M. Parkman. 1971. Temporal aspects of digit and letter inequality judgments. *Journal of Experimental Psychology*, 91(2):191–205. Pierre Pica, Cathy Lemer, Ve'ronique Izard, and Stanislas Dehaene. 2004. Exact and approximate arithmetic in an amazonian indigene group. *Science*, 306(5695):499–503. Thijs Pollmann and Carel Jansen. 1996. The language user as an arithmetician. *Cognition*, 59(2):219–237. Alec Radford, Jeff Wu, R. Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. Russell Richie and Sudeep Bhatia. 2021. Similarity Judgment Within and Across Categories: A Comprehensive Model Comparison. *Cognitive Science*, 45(8):e13030. Georgios P. Spithourakis and Sebastian Riedel. 2018. Numeracy for language models: Evaluating and improving their ability to predict numbers. *CoRR*, abs/1805.08154. Avijit Thawani, Jay Pujara, Filip Ilievski, and Pedro Szekely. 2021. Representing numbers in NLP: a survey and a vision. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 644–656, Online. Association for Computational Linguistics. Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do NLP models know numbers? probing numeracy in embeddings. *CoRR*, abs/1909.07940. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. In *BlackboxNLP@EMNLP*. Jiapeng Wang and Yihong Dong. 2020. Measurement of text similarity: A survey. *Information*, 11(9). Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018. On the practical computational power of finite precision rnns for language recognition. *CoRR*, abs/1805.04908. Wikipedia contributors. 2004. Wikipedia, the free encyclopedia. 
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. *CoRR*, abs/1906.08237. Xikun Zhang, Deepak Ramachandran, Ian Tenney, Yanai Elazar, and Dan Roth. 2020. Do language embeddings capture scales? In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 4889–4896, Online. Association for Computational Linguistics. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In *The IEEE International Conference on Computer Vision (ICCV)*. ## A Appendix A.1 Variants In Large Language Models Number T5 BART RoB XLNET BERT GPT-2 Avg. 1 0.04 0.01 0.01 0.01 0.01 0.00 0.01 2 0.09 0.17 0.09 0.16 0.07 0.12 **0.12** 3 0.02 0.09 0.04 0.07 0.03 0.10 0.06 4 0.02 0.07 0.03 0.04 0.03 0.07 0.04 5 0.12 0.07 0.13 0.17 0.16 0.02 **0.11** 6 0.20 0.06 0.06 0.05 0.10 0.02 0.08 7 0.17 0.09 0.09 0.07 0.12 0.02 0.09 8 0.22 0.09 0.05 0.06 0.09 0.03 0.09 9 0.15 0.19 0.25 0.36 0.25 0.14 **0.22** Averaged MDS Correlation values 1 0.967 0.647 0.825 0.643 2 0.963 0.549 0.718 0.557 3 0.964 0.587 0.736 0.584 4 0.968 0.622 0.765 0.544 5 0.962 0.632 0.763 0.423 6 0.958 0.641 0.774 0.483 7 0.957 0.591 0.752 0.526 8 0.956 0.608 0.753 0.550 9 0.956 0.599 0.773 0.625 10 0.944 0.612 0.766 0.610 11 0.938 0.608 0.742 0.526 12 0.923 0.604 0.726 0.557 13 0.939 0.659 0.739 0.538 14 0.944 0.656 0.755 0.562 15 0.940 0.645 0.751 0.500 16 0.933 0.611 0.741 0.509 17 0.934 0.567 0.730 0.550 18 0.933 0.580 0.723 0.505 19 0.919 0.559 0.690 0.527 20 0.900 0.557 0.671 0.535 21 0.867 0.558 0.644 0.553 22 0.854 0.571 0.664 0.524 23 0.829 0.509 0.633 0.484 24 0.805 0.508 0.622 0.414 Table 12: Residual analysis on MDS outputs in 1 dimension on the large variants of the models. RoB: Roberta-base model, BERT: uncased variant. Table 13: Averaged distance effect, size effect, ratio effect, and MDS correlation values for the 24 layers of the models. Table 14: Distance Effect: Averaged (across layers) R2 values of different *Larger variants of* LLMs on different input types when fitting a linear function. LC: Lowercase number words, MC: Mixedcase number words. Table 11: Averaged distance effect, size effect, ratio effect, and the MDS correlation values for the different input types of the models. | Averaged MDS Correlation values | | |-----------------------------------|-----------------------| | Averaged Size Effect | Averaged Ratio Effect | | Averaged | | |---------------|-----------------| | Layer\Effects | Distance Effect | | LLMs\Input | LC | MC | Digits | Avg. 
| |------------------------------|-------|-------|----------|--------| | T5 | 0.961 | 0.957 | 0.974 | 0.964 | | BART | 0.892 | 0.957 | 0.845 | 0.898 | | RoBERTa | 0.893 | 0.959 | 0.946 | 0.933 | | XLNET | 0.924 | 0.952 | 0.855 | 0.910 | | BERT (uncased) | 0.837 | 0.969 | 0.903 | | | GPT-2 | 0.946 | 0.934 | 0.987 | 0.956 | | Total Averages across models | 0.909 | 0.933 | 0.930 | 0.927 | | Averaged MDS Correlation values | | | | | |-----------------------------------|-----------------|----------------------|-----------------------|-------| | Lowercase number words | 0.909 | 0.587 | 0.730 | 0.593 | | Mixedcase number words | 0.933 | 0.514 | 0.749 | 0.460 | | Digits | 0.930 | 0.678 | 0.707 | 0.548 | | Averages | 0.927 | 0.595 | 0.727 | 0.534 | | Total | Averaged | | | | | Inputs\Effects | Distance Effect | Averaged Size Effect | Averaged Ratio Effect | | For the models in Table1, we show the three effects for the larger variants. The variants have the same architectures and training methodologies as their base variants but more parameters ( thrice the number of parameters). The in-depth results for the three effects are presented in tables 14, 16, 15, 17, 18, and 19. We also present the MDS correlation values in the same manner as done for base variants; see tables 20 and 21. Given the large layer count for these model vari- | LLMs\Input | LC | MC | Digits | Avg. | |------------------------------|-------|-------|----------|--------| | T5 | 0.720 | 0.730 | 0.840 | 0.763 | | BART | 0.697 | 0.644 | 0.380 | 0.574 | | RoBERTa | 0.468 | 0.267 | 0.677 | 0.471 | | XLNET | 0.533 | 0.448 | 0.510 | 0.497 | | BERT (uncased) | 0.635 | 0.712 | 0.674 | | | GPT-2 | 0.467 | 0.358 | 0.950 | 0.592 | | Total Averages across models | 0.587 | 0.514 | 0.678 | 0.595 | Layer T5 BART RoB XLNET BERT GPT-2 Avg. 1 0.978 0.948 0.968 0.972 0.978 0.959 0.967 2 0.977 0.958 0.962 0.976 0.967 0.940 0.963 3 0.977 0.970 0.931 0.979 0.977 0.951 0.964 4 0.976 0.948 0.972 0.984 0.968 0.959 0.968 5 0.975 0.944 0.950 0.981 0.976 0.947 0.962 6 0.973 0.919 0.950 0.978 0.975 0.952 0.958 7 0.979 0.911 0.968 0.974 0.958 0.952 0.957 8 0.981 0.892 0.953 0.977 0.973 0.959 0.956 9 0.983 0.875 0.967 0.974 0.980 0.959 0.956 10 0.980 0.857 0.947 0.967 0.957 0.958 0.944 11 0.984 0.847 0.931 0.944 0.964 0.959 0.938 12 0.990 0.828 0.865 0.920 0.974 0.959 0.923 13 0.990 0.953 0.901 0.865 0.968 0.959 0.939 14 0.990 0.933 0.935 0.874 0.975 0.957 0.944 15 0.988 0.919 0.945 0.858 0.972 0.959 0.940 16 0.977 0.900 0.941 0.854 0.966 0.957 0.933 17 0.974 0.899 0.944 0.883 0.948 0.955 0.934 18 0.978 0.897 0.946 0.892 0.930 0.957 0.933 19 0.951 0.882 0.938 0.874 0.913 0.957 0.919 20 0.947 0.885 0.900 0.857 0.858 0.956 0.900 21 0.932 0.879 0.887 0.808 0.740 0.957 0.867 22 0.927 0.858 0.927 0.789 0.668 0.957 0.854 23 0.859 0.827 0.889 0.862 0.579 0.957 0.829 24 0.872 0.825 0.867 0.808 0.502 0.954 0.805 ants and the multiple tables, we also present a summarized view of the results in tables 11, 12, 13. ## A.2 Cased Vs Uncased Bert The behavioral differences between the cased and uncased variants of the BERT architecture are shown in TableA.2. Despite different preprocessing paradigms, both models have similar performance characteristics. The only visible distinction is the higher correlation values for the cased version when compared to the uncased version of the Layer T5 BART RoB XLNET BERT GPT-2 Avg. 
1 0.785 0.800 0.591 0.630 0.608 0.467 0.647 2 0.794 0.763 0.275 0.666 0.198 0.597 0.549 3 0.894 0.709 0.379 0.665 0.214 0.661 0.587 4 0.922 0.719 0.465 0.661 0.345 0.620 0.622 5 0.940 0.721 0.550 0.634 0.387 0.563 0.632 6 0.925 0.606 0.426 0.644 0.661 0.584 0.641 7 0.912 0.441 0.360 0.603 0.636 0.594 0.591 8 0.923 0.399 0.460 0.548 0.726 0.595 0.608 9 0.915 0.354 0.435 0.541 0.750 0.599 0.599 10 0.923 0.329 0.546 0.553 0.727 0.593 0.612 11 0.924 0.362 0.458 0.574 0.727 0.601 0.608 12 0.890 0.351 0.512 0.543 0.728 0.601 0.604 13 0.864 0.801 0.467 0.468 0.757 0.595 0.659 14 0.837 0.861 0.452 0.436 0.751 0.600 0.656 15 0.805 0.796 0.480 0.454 0.741 0.597 0.645 16 0.761 0.683 0.449 0.436 0.739 0.597 0.611 17 0.692 0.550 0.391 0.423 0.746 0.598 0.567 18 0.743 0.520 0.453 0.423 0.747 0.594 0.580 19 0.633 0.512 0.435 0.391 0.788 0.594 0.559 20 0.583 0.513 0.448 0.373 0.828 0.596 0.557 21 0.523 0.532 0.512 0.345 0.847 0.592 0.558 22 0.432 0.546 0.633 0.350 0.874 0.588 0.571 23 0.356 0.455 0.491 0.316 0.846 0.590 0.509 24 0.335 0.444 0.634 0.250 0.801 0.584 0.508 Table 17: Size Effect: Averaged (across inputs) R2 values of different *Larger variants of* LLMs for different layers when fitting a linear function. RoB: Robertabase model, BERT: uncased variant. ## Model. A.3 Impact Of Distance Effect And Size Effect In Ratio Effect Scores When interpreting LLM findings on the ratio effect, we observe that they are dominated by the distance effect as compared to the size effect. We observe the same decreasing trend in averaged results over input types in layers; see Table 7 (column: Total Averages). The impact of layer-wise trends can be quantified using regression with the distance effect | LLMs\Input | LC | MC | Digits | Avg. | |------------------------------|-------|-------|----------|--------| | T5 | 0.868 | 0.816 | 0.833 | 0.839 | | BART | 0.767 | 0.838 | 0.478 | 0.694 | | RoBERTa | 0.672 | 0.686 | 0.725 | 0.694 | | XLNET | 0.617 | 0.649 | 0.711 | 0.659 | | BERT (uncased) | 0.786 | 0.732 | 0.759 | | | GPT2 | 0.669 | 0.720 | 0.767 | 0.718 | | Total Averages across models | 0.730 | 0.749 | 0.707 | 0.718 | Layer T5 BART RoB XLNET BERT GPT-2 Avg. 1 0.868 0.837 0.803 0.881 0.829 0.733 0.825 2 0.803 0.740 0.529 0.873 0.657 0.708 0.718 3 0.792 0.798 0.573 0.875 0.602 0.775 0.736 4 0.828 0.782 0.722 0.868 0.667 0.725 0.765 5 0.860 0.823 0.716 0.863 0.664 0.652 0.763 6 0.878 0.811 0.671 0.836 0.765 0.680 0.774 7 0.898 0.686 0.669 0.818 0.735 0.704 0.752 8 0.896 0.657 0.726 0.797 0.722 0.716 0.753 9 0.910 0.658 0.714 0.792 0.838 0.729 0.773 10 0.915 0.639 0.718 0.774 0.818 0.729 0.766 11 0.921 0.640 0.583 0.745 0.835 0.725 0.742 12 0.917 0.638 0.518 0.691 0.868 0.724 0.726 13 0.920 0.836 0.538 0.593 0.820 0.728 0.739 14 0.937 0.764 0.679 0.585 0.837 0.724 0.755 15 0.931 0.715 0.772 0.546 0.822 0.722 0.751 16 0.915 0.713 0.762 0.514 0.815 0.726 0.741 17 0.904 0.684 0.747 0.492 0.836 0.718 0.730 18 0.907 0.666 0.728 0.497 0.815 0.728 0.723 19 0.778 0.617 0.754 0.464 0.807 0.720 0.690 20 0.754 0.613 0.717 0.450 0.775 0.720 0.671 21 0.692 0.600 0.723 0.435 0.699 0.716 0.644 22 0.679 0.605 0.802 0.459 0.715 0.721 0.664 23 0.637 0.587 0.730 0.478 0.651 0.716 0.633 24 0.592 0.559 0.767 0.486 0.624 0.703 0.622 LLMs\Input LC MC Digits Avg. 
T5 0.572 0.127 0.408 0.369 BART 0.677 0.546 0.515 0.580 RoBERTa 0.669 0.573 0.473 0.572 XLNET 0.498 0.373 0.465 0.445 BERT (uncased) 0.519 0.541 0.530 GPT2 0.623 0.624 0.888 0.711 Total Averages across models 0.593 0.460 0.548 0.534 and size effect as inputs (column: Total Averages; tables 2, 4) and the ratio effect (column: Total Averages; Table4) as output. Importantly, the distance effect averages are statistically significant predictors of ratio effect averages; see Table 23). These results provide a superficial view of the impact of distance and size effect in the ratio effect scores because of the aggregation performed at different levels of the study. Layer T5 BART RoB XLNET BERT GPT-2 Avg. 1 0.675 0.633 0.731 0.590 0.542 0.689 0.643 2 0.249 0.662 0.461 0.649 0.555 0.767 0.557 3 0.251 0.673 0.522 0.689 0.662 0.707 0.584 4 0.156 0.682 0.698 0.674 0.353 0.703 0.544 5 0.059 0.518 0.493 0.686 0.065 0.719 0.423 6 0.219 0.471 0.411 0.533 0.535 0.729 0.483 7 0.569 0.421 0.558 0.549 0.367 0.688 0.526 8 0.578 0.413 0.540 0.690 0.385 0.695 0.550 9 0.581 0.710 0.594 0.546 0.598 0.720 0.625 10 0.495 0.716 0.531 0.487 0.710 0.718 0.610 11 0.286 0.691 0.404 0.495 0.576 0.702 0.526 12 0.481 0.682 0.304 0.466 0.708 0.700 0.557 13 0.387 0.605 0.533 0.394 0.588 0.721 0.538 14 0.483 0.672 0.538 0.383 0.574 0.718 0.562 15 0.486 0.386 0.596 0.241 0.586 0.705 0.500 16 0.485 0.454 0.689 0.140 0.591 0.692 0.509 17 0.536 0.677 0.617 0.163 0.588 0.719 0.550 18 0.259 0.562 0.651 0.251 0.602 0.704 0.505 19 0.458 0.750 0.583 0.077 0.599 0.694 0.527 20 0.463 0.545 0.652 0.246 0.585 0.718 0.535 21 0.362 0.526 0.653 0.524 0.554 0.700 0.553 22 0.402 0.522 0.656 0.247 0.596 0.719 0.524 23 -0.019 0.466 0.649 0.490 0.600 0.720 0.484 24 -0.051 0.473 0.652 0.476 0.205 0.726 0.414 Table 21: Averaged (across inputs) correlation values of the *Large variants* of different LLMs on different model layers when comparing MDS values with Log101 to Log109. RoB: Roberta-base model, BERT: uncased variant. Variant Effect LC MC Digits Avg. Uncased Distance 0.976 0.944 0.960 Size 0.803 0.851 0.827 Ratio 0.906 0.757 0.831 MDS (Corr.) 0.312 0.423 0.386 Cased Distance 0.958 0.980 0.890 0.943 Size 0.664 0.691 0.918 0.758 Ratio 0.854 0.880 0.866 0.867 MDS (Corr.) 0.621 0.553 0.487 0.554 Table 22: Behavioral differences between the cased and uncased variants of the BERT architecture. LC: Lowercase number words, MC: Mixed-case number words. Variant Coef. Std. Error t Stat P-value Intercept -0.916 0.531 -1.722 0.119 Base Distance Effect 1.953 0.452 4.314 **0.001** Size Effect -0.228 0.188 -1.212 0.256 Intercept -0.188 0.075 -2.491 0.0.021 Large Distance Effect 0.700 0.117 5.997 **0.000** ⊕ Size Effect 0.447 0.124 3.612 **0.001** ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 6 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1. Section 1.2 lays the foundation for the research questions and 1.3 describes the problems we tackle. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. ✗ B1. Did you cite the creators of artifacts you used? Left blank. ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. B3. 
Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✓ **Did You Run Computational Experiments?** Section 3, 4. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? The experimental design and the operationalization of experiments are given in sections 3 and 4. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 3.3 ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
zhang-etal-2023-multi
Multi-Relational Probabilistic Event Representation Learning via Projected Gaussian Embedding
https://aclanthology.org/2023.findings-acl.384
Event representation learning has been shown beneficial in various downstream tasks. Current event representation learning methods, which mainly focus on capturing the semantics of events via deterministic vector embeddings, have made notable progress. However, they ignore two important properties: the multiple relations between events and the uncertainty within events. In this paper, we propose a novel approach to learning multi-relational probabilistic event embeddings based on contrastive learning. Specifically, the proposed method consists of three major modules, a multi-relational event generation module to automatically generate multi-relational training data, a probabilistic event encoding module to model uncertainty of events by Gaussian density embeddings, and a relation-aware projection module to adapt unseen relations by projecting Gaussian embeddings into relation-aware subspaces. Moreover, a novel contrastive learning loss is elaborately designed for learning the multi-relational probabilistic embeddings. Since the existing benchmarks for event representation learning ignore relations and uncertainty of events, a novel dataset named MRPES is constructed to investigate whether multiple relations between events and uncertainty within events are learned. Experimental results show that the proposed approach outperforms other state-of-the-art baselines on both existing and newly constructed datasets.
# Multi-Relational Probabilistic Event Representation Learning Via Projected Gaussian Embedding Linhai Zhang Congzhi Zhang Deyu Zhou∗ School of Computer Science and Engineering, Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, China {lzhang472, zhangcongzhi, d.zhou}@seu.edu.cn ## Abstract Event representation learning has been shown beneficial in various downstream tasks. Current event representation learning methods, which mainly focus on capturing the semantics of events via deterministic vector embeddings, have made notable progress. However, they ignore two important properties: the multiple relations between events and the uncertainty within events. In this paper, we propose a novel approach to learning multi-relational probabilistic event embeddings based on contrastive learning. Specifically, the proposed method consists of three major modules, a multi-relational event generation module to automatically generate multi-relational training data, a probabilistic event encoding module to model uncertainty of events by Gaussian density embeddings, and a relation-aware projection module to adapt unseen relations by projecting Gaussian embeddings into relationaware subspaces. Moreover, a novel contrastive learning loss is elaborately designed for learning the multi-relational probabilistic embeddings. Since the existing benchmarks for event representation learning ignore relations and uncertainty of events, a novel dataset named MRPES is constructed to investigate whether multiple relations between events and uncertainty within events are learned. Experimental results show that the proposed approach outperforms other state-of-the-art baselines on both existing and newly constructed datasets. ## 1 Introduction Events, carrying world knowledge, are the major research targets in Natural Language Processing (NLP) for decades. Distributed event representation learning has been shown beneficial in various NLP tasks, such as sentiment analysis (Zhou et al., 2021), event detection (Deng et al., 2021) and text generation (Chen et al., 2021). ![0_image_0.png](0_image_0.png) Figure 1: An example in the MRPES dataset, where shaded areas represent confidence intervals of the density embeddings. Early event representation learning methods mainly focused on the way of composing event components, such as by Multilayer Perceptrons (Granroth-Wilding and Clark, 2016), Recurrent Neural Networks (Modi, 2016), and Tensor Networks (Weber et al., 2018). Latter work tried to incorporate various external knowledge into event representation learning, such as knowledge graphs (Ding et al., 2016), extra event features (Lee and Goldwasser, 2018), or commonsense knowledge (Ding et al., 2019). Recently, Gao et al. (2022) showed the effectiveness of incorporating contrastive learning (Chen et al., 2020) in event representation learning by simultaneously utilizing weakly supervised contrastive learning and prototype-based clustering. So far, similar to word representation learning (Mikolov et al., 2013; Pennington et al., 2014), current approaches for event representation learning mainly aim to capture the semantics of events based on largescale co-occurrence training data by making the semantically-similar events closer in embedding ∗Corresponding author. space. Though notable progress has been made, most existing methods still have two limitations. 
On the one hand, they ignore the multiple relations between events, which means every event pair that occurs together will be pushed closer whatever the actual relation between them. As shown in Figure 1(a), both event (*man go to hospital*) and event (*man be healthy*) will be pushed closer to event (*he contracted disease*), which means they will be pushed closer too. It is problematic as they are semantically different. On the other hand, the inherent uncertainty or polysemy of events is ignored. A more general event is used to describe more situations, reflecting higher uncertainty in its meaning (Athiwaratkun and Wilson, 2018; Zhang et al., 2021). As shown in Figure 1(a), event (*he contracted disease*) is a general description for illness, which is semantically similar to two specific events (*man have headache*) and (*person catch cold*) which are different from each other. However, restricted by the triangle inequality, it is hard for a vector embedding to be close to the other two points that are apart from each other. In this paper, we propose a Multi-relatiOnal pRobabilistic Event embedding method based on Contrastive Learning (MORE-CL) to solve the above limitations. Specifically, we utilize COMET (Bosselut et al., 2019), a commonsense generative model, to generate multi-relational positive samples for contrastive training. A probabilistic event encoder based on BERT (Devlin et al., 2019) is proposed to generate Gaussian event embeddings by estimating the mean vector and variance matrix. To deal with unseen relations, a relation-aware projector is employed to determine the relation-based event pair context automatically with an attention mechanism and project the density embeddings into relation-specific subspaces. Finally, the original InfoNCE loss (Oord et al., 2018) is modified to learn multi-relational probabilistic embedding. To investigate the effectiveness of the proposed method, a multi-relational probabilistic event similarity dataset named MRPES is constructed. Experimental results show that MORE-CL outperforms other baselines by a large margin on both original and new benchmark datasets. In conclusion, our contributions are three-fold: - A novel method, MORE-CL, is proposed to model the multiple relations between events and the uncertainty within events using projected Gaussian density embeddings with contrastive learning. - A multi-relational probabilistic event similarity dataset named MRPES is constructed and annotated to evaluate whether the multiple relations between events and the uncertainty within events are learned. - Experimental results show the effectiveness of the proposed method on both original and newly constructed benchmark datasets. ## 2 Related Work Event Representation Learning. Most existing event representation learning methods aim to project textual event descriptions represented into a dense vector where the semantic information of events is preserved as much as possible. Previous works either explored ways to effectively compose event components such as by tensor network (Ding et al., 2015; Weber et al., 2018) or external knowledge to improve the learning of event embeddings (Ding et al., 2016; Lee and Goldwasser, 2018; Ding et al., 2019). Besides textual signal, Zhang et al. (2021) proposed to utilize event images as external knowledge. Generally, they can be categorized as methods learned by the margin loss based on a pair of a positive sample and a negative sample. Recently, Gao et al. 
(2022) explored contrastive learning (Chen et al., 2020) in event representation learning, which outperformed previous margin loss-based methods by a large margin, showing the effectiveness of contrastive learning in this task. It should be pointed out that SWCC proposed by (Gao et al., 2022) implicitly captures relation information by performing prototype-based clustering. However, SWCC is not trained on relational data explicitly and only captures one relation between the events. Our work follows this line of research and makes improvements by considering multiple relations between events and uncertainty within events. Script Event Prediction. A task closely related to event representation learning is script event prediction or script learning. Script learning focuses on modeling a sequence of events and predicting what will happen next. Previous works on script learning mainly focused on different neural architectures to learn event embeddings and model the sequence, such as MLP (Modi, 2016; Granroth-Wilding and ![2_image_0.png](2_image_0.png) Clark, 2016) and LSTM (Pichotta and Mooney, 2016b,a). Recently, some works tried to enhance script learning by incorporating knowledge of various discourse relations (Lee and Goldwasser, 2019; Zheng et al., 2020), which is similar to our work. However, our work mainly focuses on the modeling of the event itself instead of the sequence. ## 3 Method As shown in Figue 2, MORE-CL consists of three modules. Firstly, the training events extracted from a large corpus are fed into the multi-relational event generation module to generate positive sample events for contrastive learning. Then, the training events as well as their multi-relational positive samples are encoded as multivariate Gaussian density embeddings by the probabilistic event encoding module. After that, the density embeddings are projected into relation-specific subspaces by the relation-aware event projection module. Finally, three modules are jointly optimized by the modified contrastive learning loss. The details of each step are discussed as follows. ## 3.1 Multi-Relational Event Generating It is critical for contrastive learning to construct positive samples for training data. The common practices to generate positive samples in NLP tasks are token replacement, token shuffling, or token removing (Yan et al., 2021). However, such methods are not suitable for generating positive event samples because events are sensitive to word change. Recently, the dropout mechanism is employed to generate positive samples (Wu et al., 2021), which is still not suitable for our scenario because the positive samples generated by dropouts cannot encode relational prior knowledge. Therefore, we propose to employ COMET (Bosselut et al., 2019), a commonsense generative model, to automatically generate multi-relational positive samples for training events. Specifically, COMET is a transformer-based generative model trained on the commonsense knowledge graph, ATOMIC (Sap et al., 2019) for automatically commonsense knowledge graph construction. Given the head event as Xs = {x s0 , ..., xs |s|} and relation as Xr = {x r0 , ..., xr |r|}, where xs are word tokens, COMET generates the tail event as Xo = {x o0 , ..., xo |o|} by: $$X^{o}={\mathsf{C O M E T}}([X^{s},X^{r}])\qquad\qquad(1)$$ Given a set of training events $D=\left\{{e}_{i}\right\}$ . (x1, x2*, ..., x*|ei|)} n i=1, each event eiis fed into COMET as head event Xs. As for relations, the nine default training relations {rj} k j=1 in COMET are employed. 
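As an illustrative sketch of this generation step (not the pipeline released with the paper), the snippet below assumes a seq2seq COMET checkpoint loadable through Hugging Face `transformers`; the checkpoint path, prompt format, and decoding settings are placeholders that depend on the checkpoint actually used.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical COMET checkpoint path; substitute the checkpoint you actually use.
COMET_NAME = "path/to/comet-atomic-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(COMET_NAME)
comet = AutoModelForSeq2SeqLM.from_pretrained(COMET_NAME)

# The nine default ATOMIC relations used for training (see Appendix A).
RELATIONS = ["xIntent", "xNeed", "xAttr", "xEffect", "xReact",
             "xWant", "oEffect", "oReact", "oWant"]

def generate_positives(event: str, max_new_tokens: int = 24) -> dict:
    """Generate one relational positive sample per relation for a head event."""
    positives = {}
    for rel in RELATIONS:
        # Prompt format is checkpoint-dependent; "<head> <relation> [GEN]" is a common convention.
        prompt = f"{event} {rel} [GEN]"
        inputs = tokenizer(prompt, return_tensors="pt")
        out = comet.generate(**inputs, max_new_tokens=max_new_tokens, num_beams=5)
        positives[rel] = tokenizer.decode(out[0], skip_special_tokens=True).strip()
    return positives

print(generate_positives("journalist capture animal"))
```

Applied to every extracted training event, such a call yields $k$ relational positives per event, which is the multi-relational positive sample set described next.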
The details of relations used for training are listed in Appendix A. Then, relational positive samples are generated for each event under each relation and the multi-relational positive event sample set {e irj|i = 1*, ..., n*; j = 1*, ..., k*} is obtained by: $$e_{r_{j}}^{i}=\mathbb{C}\mathbb{O}\mathbb{M}\mathbb{E}\mathbb{T}([e_{i},r_{j}])$$ $$\left(2\right)$$ ## 3.2 Probabilistic Event Encoding Previous event representation learning methods usually adopt specific neural network architectures with static word embeddings as the event encoders (Granroth-Wilding and Clark, 2016; Modi, 2016). Recent work show the effectiveness of the pre-trained language model such as BERT (Devlin et al., 2019) in event encoding (Gao et al., 2022). In this paper, we also employ BERT as the backbone of the event encoder. To represent the uncertainty and polysemy of events, we propose to learn density event embeddings instead of point event embeddings. In this paper, we choose multivariate Gaussian distribution as the density. The reasons are two-fold. On the one hand, Gaussian distributions only require two parameters and are easy to optimize. On the other hand, Gaussian distribution has an analytical form under many calculations such as Kullback–Leibler (KL) divergence. In the probabilistic event encoding module, an event ei = {x1*, ..., x*|ei|} (for both the original training data and the generated positive samples) is first fed into the BERT encoder to get their semantic representation by: $$q_{i}=\{[\text{CLS}],x_{1},...,x_{|e_{i}|},[\text{SEP}].\}\tag{3}$$ $$[\mathbf{v}_{[\text{CLS}]}^{(i)},\mathbf{v}_{x_{1}}^{(i)},...,\mathbf{v}_{x_{|e_{i}|}}^{(i)},\mathbf{v}_{[\text{SEP}]}^{(i)}]=\text{BERT}(q_{i})\tag{4}$$ where vs are the vector representations by BERT and qiis the input query for the event ei by concatenating the token sequence with the special token [CLS] and [SEP]. The vector representation of [CLS] token v (i) [CLS] is utilized as the semantic representation for ei. For simplification, we assume the variance matrixes of density embeddings are diagonal. Therefore the variance matrixes can be fully specified by the variance vectors at the diagonal. Then the semantic representation for eiis projected to the mean vector and variance vector of the Gaussian embedding by two specific Multilayer Perceptrons (MLPs): $$\begin{array}{l}{{\mu_{i}=\mathrm{MLP}_{m e a n}(\mathbf{v}_{[\mathrm{CLS}]}^{(i)})}}\\ {{\sigma_{i}^{2}=\mathrm{MLP}_{v a r}(\mathbf{v}_{[\mathrm{CLS}]}^{(i)})}}\end{array}\tag{5}$$ The density representation zi of the event eiis: $$z_{i}={\mathcal{N}}(\mu_{i},d i a g(\sigma_{i}^{2}))$$ i)) (6) where *diag*(v) means matrix taking v as diagonal. ## 3.3 Relation-Aware Event Projecting The original contrastive learning algorithm learns representations in a common embedding space, which requires the embeddings of positive sample pairs to be close and those of negative pairs to be separate. However, we argue that for multirelational learning, the original contrastive learning may not be valid. Similar to knowledge graph embedding, an event may have multiple aspects and various relations may focus on different aspects of events, which makes a common space insufficient for modeling. Therefore, inspired by the knowledge graph embedding methods (Wang et al., 2014; Lin et al., 2015), we propose to perform contrastive learning of different relations at different relationspecific hyperplanes to make sure that different relations do not affect each other during learning. 
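Before detailing the projection, the probabilistic encoding of Section 3.2 (Eqs. 3-6) can be summarized in a minimal PyTorch sketch. The class name and the two-layer heads below are illustrative assumptions; the paper only specifies a BERT-base-uncased backbone, separate MLPs for the mean and the (log-)variance, and an embedding dimension of 500 (see Section 4.1).

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ProbabilisticEventEncoder(nn.Module):
    """Maps an event string to a diagonal Gaussian N(mu, diag(sigma^2)), following Eqs. (3)-(6)."""

    def __init__(self, backbone: str = "bert-base-uncased", embed_dim: int = 500):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(backbone)
        self.bert = AutoModel.from_pretrained(backbone)
        hidden = self.bert.config.hidden_size
        # Illustrative two-layer heads; the paper does not specify the MLP architecture.
        self.mlp_mean = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(), nn.Linear(hidden, embed_dim))
        # Predict the log-variance so the variance stays positive after exp() (Section 4.1).
        self.mlp_logvar = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(), nn.Linear(hidden, embed_dim))

    def forward(self, events):
        batch = self.tokenizer(events, padding=True, truncation=True, return_tensors="pt")
        cls = self.bert(**batch).last_hidden_state[:, 0]   # v_[CLS] for each event, Eq. (4)
        mu = self.mlp_mean(cls)                            # mean vector, Eq. (5)
        var = self.mlp_logvar(cls).exp()                   # diagonal of the covariance, Eq. (6)
        return mu, var

encoder = ProbabilisticEventEncoder()
mu, var = encoder(["man go to hospital", "he contracted disease"])
```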
Given an event embedding $z_i$ and a relation $r$, we project $z_i$ by:

$$f_{r}(z_{i})=z_{i}-\omega_{r}^{T}z_{i}\omega_{r}\tag{7}$$

where $\omega_r$ denotes the normal vector for the hyperplane of $r$. Based on the projection $f_r(\cdot)$, the Gaussian event embedding $z_i$ is projected into a subspace with $\omega_r$ as a normal vector. It should be noted that a linear transformation of a Gaussian distribution is still Gaussian. Therefore, the density embedding after projection is still a Gaussian density.
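For illustration, the hyperplane projection in Eq. (7) can be sketched as below for a diagonal Gaussian embedding. This is a minimal sketch rather than the authors' implementation: the unit-normalization of $\omega_r$ and the explicit covariance update (via the linear map $P=I-\omega\omega^{T}$) are added here only to make the remark about linear transformations of Gaussians concrete.

```python
import torch

def project_gaussian(mu: torch.Tensor, var: torch.Tensor, omega: torch.Tensor):
    """Project a diagonal Gaussian N(mu, diag(var)) onto the hyperplane with normal omega.

    Implements f_r(z) = z - omega^T z omega from Eq. (7) on the mean. Because the map
    is linear (P = I - omega omega^T), the result is still Gaussian, with covariance
    P diag(var) P^T (shown here only for illustration).
    """
    omega = omega / omega.norm()                    # assume a unit-length hyperplane normal
    proj_mu = mu - (omega @ mu) * omega             # Eq. (7) applied to the mean vector
    P = torch.eye(mu.numel()) - torch.outer(omega, omega)
    proj_cov = P @ torch.diag(var) @ P.T            # covariance after the linear projection
    return proj_mu, proj_cov

# toy usage with the paper's embedding dimension of 500
mu, var = torch.randn(500), torch.rand(500)
omega_r = torch.randn(500)                          # normal vector of one relation hyperplane
proj_mu, proj_cov = project_gaussian(mu, var, omega_r)
```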
It should be noted that vanilla relational projection fr(·) is used for the calculation of positive samples in this part as we want the dropout-based positive samples to be close to the training samples for every relation. As discussed in Gao et al. (2022), introducing the original Mask Language Modeling (MLM) loss Lmlm into learning is beneficial for the backbone encoder. The final loss function is obtained by adding the above three terms together: 3.4 Multi-relational Probabilistic Contrastive ## Learning As stated before, contrastive learning is employed. However, the original InfoNCE loss (Chen et al., 2020) is designed for single-relational (similar or dissimilar) deterministic embedding, which is not suitable for multi-relational probabilistic embedding. Therefore, we modify the original InfoNCE loss. One important component of the InfoNCE loss is the distance function, where the cosine similarity is usually employed. For Gaussian density embeddings, we utilize the symmetric KL divergence to serve as the distance function. The distance function g(·, ·) is: where β is the loss weight parameter. In this section, we investigate the effectiveness of MORE-CL by comparing it with several competitive baselines both on the conventional event similarity task and the proposed multi-relation probabilistic event similarly task. where a and b are two density embeddings, τ is the temperature parameter. Then we set the multi-relation part of the loss function as: Following previous works (Weber et al., 2018; Ding et al., 2019; Gao et al., 2022), the training events are extracted from the *New York Times Gigaword Corpus* using Open Information Extraction system Ollie (Mausam et al., 2012). Specifically, we use the same filter setting as (Gao et al., 2022), which results in 4,029,877 distinct events. For each event, we use COMET to generate its positive samples under the 9 default relations. The details of generation are shown in Appendix A. The backbone used in the encoder module is BERT-base-uncased. The learning rate is set as 4e-7. The model is trained with a batch size of 125 and total epochs of 2 by an Adam optimizer. The optimal dimension of Gaussian density embedding is chosen by experiments and set to 500. The loss weight parameter β is set to 0.01. The temperature where z (i) r is the density embedding of the positive sample under the relation r for the event ei, zi is the density embedding of ei, N(i) is the index set of in-batch negative sample of ei and ϵr is the weight parameter for the relation r. Note that for the calculation of distance for negative samples, we use vanilla relational projection fr(·) instead of attention-based relational projection fa(·) to keep the negative pairs separate for every relation. parameter τ is set as 0.3 and the weight parameters for relations ϵ are set to 0.1. In practice, we assume the output of network MLPvar(·) is the log of variance vector which is taken exponential when used to keep it non-negative. At each batch, the calculated KL divergence values are normalized by min-max normalization to make the training process more stable. The model is implemented by PyTorch (Paszke et al., 2019). ## 4.2 Datasets Hard Similarity Dataset. Weber et al. (2018) proposed a dataset of 115 samples to identify the semantically similar event pairs from the dissimilar pairs. To make the dataset more difficult, the positive samples are annotated to have little lexical overlaps with the anchor events while the negative samples are annotated to have high overlaps. 
Ding et al. (2019) extended this dataset to 1000 samples. For both datasets, accuracy is adopted as an evaluation metric, where a sample is successfully processed if and only if the similarity between the positive pair is higher than the similarity between the negative pair. Transitive Sentence Similarity. Kartsaklis and Sadrzadeh (2014) proposed this fine-grained similarity dataset, which contains 108 pairs of transitive sentences that consist of a subject, a verb, and an object. Each pair of events is assigned with similarity scores from 1 to 7 by human annotators, where a higher value indicates more similar events. For this dataset, Spearman's correlation between the similarity score is predicted by each method and the average annotated similarity score is employed as the evaluation metric. Multi-Relational Probabilistic Event Similarity (MRPES). The previous two datasets are designed to evaluate the single-relational deterministic event representations. To further investigate whether the knowledge of multiple relations between events and uncertainty within events is learned, we propose a new multi-relational probabilistic event similarity dataset (MRPES). MRPES is an extension of Weber's dataset, containing 115 samples. As shown in Table 1, each sample in MRPES contains 1 anchor event, 1 negative sample, 4 relational positive samples, and 2 probabilistic positive samples. The anchor events and negative samples are taken from (Weber et al., 2018), while the rest of the events are manually annotated. For relational positive samples, we choose two learned relations **oEffect** and xNeed and two unknown relations **contrast** and sequential. For probabilistic positive samples, we annotate each anchor event with two semanticallyrelated events while these two events are semantically different. For the relational test, the setting is the same as the original **Hard Similarity Dataset**. For the probabilistic test, a sample is successfully processed if and only if the similarities between two positive samples are both greater than its similarity with the negative sample. The details of the dataset are listed in Appendix B. | Anchor | Negative | |---------------------------|--------------------------| | journalist capture animal | journalist capture image | | oEffect | xNeed | | animal is caught | person be a hunter | | Contrast | Sequential | | animal escaped | person sell animal | | Probabilistic_1 | Probabilistic_2 | | man hunt deer | kid catch insect | Table 1: An example in the MRPES dataset. ## 4.3 Baselines Following (Gao et al., 2022), three types of methods are employed for comparison: - Event representation methods: **EM Comp.**, Role Factor Tensor and **Predicate Tensor** are all proposed by (Weber et al., 2018) to learn the interactions of event components with tensor networks. **SWCC** (Gao et al., 2022) is the current SOTA method for the event similarity task by incorporating contrastive learning and prototypical clustering simultaneously. ## - **Event Representation Methods With External** knowledge: **KGEB** (Ding et al., 2016) incorporates knowledge graph information. **FEEL** (Lee and Goldwasser, 2018) employs animacy and sentiment as extra features of events. **NTNIntSent** (Ding et al., 2019) utilizes sentiment and intent of events to enhance event representations. - **Multi-relational script learning methods**: SAM-Net (Lee and Goldwasser, 2019) incorporates discourse relations into script learning. 
UniFA-S (Zheng et al., 2020) utilizes scenario knowledge for event representations. ## 4.4 Main Results Similarity datasets results. The experimental results for three similarity datasets are shown in Ta- | Method | Hard similarity% | Transitive sentence | | |-----------------------------------------|--------------------|-----------------------|------| | Original | Extended | similarity (ρ) | | | EM Comp. (Weber et al., 2018) | 33.9 | 18.7 | 0.57 | | Predicate Tensor (Weber et al., 2018) | 41.0 | 25.6 | 0.63 | | Role Factor Tensor (Weber et al., 2018) | 43.5 | 20.7 | 0.64 | | SWCC (Gao et al., 2022) | 80.9 | 72.1 | 0.82 | | KGEB (Ding et al., 2016) | 52.6 | 49.8 | 0.61 | | FEEL (Lee and Goldwasser, 2018) | 58.7 | 50.7 | 0.67 | | NTN-IntSent (Ding et al., 2019) | 77.4 | 62.8 | 0.74 | | SAM-Net (Lee and Goldwasser, 2019) | 51.3 | 45.2 | 0.59 | | UniFA-S (Zheng et al., 2020) | 78.3 | 64.1 | 0.75 | | MORE-CL | 89.6 | 84.9 | 0.81 | Table 2: Results on the similarity datasets. The best results are bold. Part of results are taken from (Gao et al., 2022). | Method | known relation | unknown relation | probablistic test | | | | |----------|------------------|--------------------|---------------------|------|------|------| | oEffect | xNeed | Contrast | Sequential | one | both | | | SWCC | 42.6 | 60.0 | 66.1 | 56.5 | 65.2 | 42.6 | | MORE-CL | 88.7 | 85.2 | 81.7 | 77.4 | 81.7 | 67.0 | Table 3: Results on the MRPES dataset. Best results are bold. "one" for the probabilistic test denotes at least one positive sample is successfully predicted, while "both" denotes both positive samples are successfully predicted. ble 2. It can be observed that MORE-CL outperforms all the other baselines on two hard similarity datasets by a large margin except that on the transitive sentence similarity dataset, MORE-CL achieves similar results as SWCC. It might be attributed to the under-calibration of Gaussian density embeddings, which is also found in (Zhang et al., 2021). It can also be observed that methods with external knowledge or multi-relation knowledge generally outperform those without using external knowledge. MRPES results. As SWCC generally outperformed other methods by a large margin in the similarity task, we only compare our method with SWCC on the MRPES dataset. As shown in Table 3, MORE-CL outperforms SWCC greatly. The reason is obvious that SWCC is a single-relational deterministic event representation method without modeling multiple relations and uncertainty of events. Moreover, the high accuracy scores for two known relations show that MORE-CL learns the training relations well. It can also be found that scores for unknown relations are slightly lower than known relations', showing that MORE-CL can generalize to unknown relations. As for the probabilistic test, the result is interesting. For the case of at least one sample correct, the performance gap between SWCC and MORE-CL is 16.5% while for the case of both samples correct, the performance gap increases to 24.4%. It further verifies our assumption that it is hard for point embeddings to model that one embedding is close to the other two embeddings which should be separated. ## 4.5 Model Analysis In this part, we remove or change three components of MORE-CL and generate four experiment settings to investigate their effects on the performance, where setting S5 is the original model. 
- S1 is the setting where the probabilistic event encoding module is replaced with a normal BERT encoder, and the symmetric KL similarity is replaced with cosine similarity. - S2 is a model where the multi-relational event generation module is removed, and the dropoutbased positive samples are utilized for contrastive learning. - S3 is the setting where the relation-aware event projection module is removed. For inference, the test samples are processed under all 9 training relations respectively and the final decision is made by averaging the results for 9 training relations. | Settings | Hard similarity% | Transitive sentence | | |------------|--------------------|-----------------------|------| | Original | Extended | similarity (ρ) | | | S1 | 79.1 | 65.9 | 0.82 | | S2 | 87.0 | 82.6 | 0.78 | | S3 | 88.7 | 83.2 | 0.80 | | S4 | 87.8 | 83.5 | 0.80 | | S5 | 89.6 | 84.9 | 0.81 | Table 4: Ablation study on similarity dataset. ![7_image_1.png](7_image_1.png) - S4 is the setting where a simple version without attention replaces the relation-aware event projection module. For training, an extra relationspecific normal vector is introduced for projecting all event pairs with unknown relations, for inference, the normal vector for unknown relations is employed for projection. In practice, we utilize the co-occurrence data to learn this normal vector. It can be observed from Table 4 that without the probabilistic encoding module, the performances of MORE-CL on hard similarity datasets drop dramatically while the performance on the transitive sentence similarity dataset increases slightly. The performance fluctuation over hard similarity datasets comes from two parts. On the one hand, the Gaussian density embeddings model the uncertainty within events. On the other hand, the relational positive event samples are generated automatically, which will certainly introduce noise into the model. The performance fluctuation over the transitive sentence similarity dataset further shows that the Gaussian density embeddings are under-calibrated. To investigate the optimal dimension for MORECL, we perform experiments with different embedding dimensions on three similarity datasets. As shown in Figure 3, the performances of MORE-CL increase first and then decrease with the embedding dimension growth, which is concordant with the majority of event representation learning methods. ![7_image_0.png](7_image_0.png) It should be noticed that the optimal embedding dimension of MORE-CL is smaller than SWCC. The reason might be that the density embeddings can carry more information compared with point embeddings at the same embedding dimension. ## 4.6 Visualization To get a more intuitional understanding of multirelational embedding learning, we present a visualization of embeddings learned by SWCC and MORE-CL with T-SNE (Van der Maaten and Hinton, 2008). As shown in Figure 4, the embeddings learned by SWCC for three types of events are mixed up, while the embeddings learned by MORECL are separated. ## 5 Conclusion In this paper, to model the multiple relations between events and uncertainty within events, we propose a multi-relational probabilistic event representation learning method, MORE-CL, based on the projected Gaussian embedding with contrastive learning. 
To be more specific, MORE-CL consists of three modules, a multi-relational event generation module to incorporate relational knowledge of events, a probabilistic event encoding module to model uncertainty with Gaussian density embeddings, and a relation-aware projection module to adapt to unseen relations. What's more, we also present a new dataset to test the knowledge of multiple relations and uncertainty learned by event representation methods. The experimental results for both existing and new datasets show the effectiveness of the proposed method. ## Limitations Though achieving promising results in the experiments, our work still has the following limitations. - As shown in Table 2 and Table 3. The proposed Gaussian embedding may have a calibration problem leading to performing badly on fine-grained similarity tasks measured by Spearman's correlation. - The proposed method assumes that all relations are symmetric and adopts a symmetric similarity measurement. However, not all the relations are symmetric. And the ability to deal with unsymmetric relations with unsymmetric measurement is one important advantage of density embeddings which point embeddings do not have. - The proposed MRPES dataset should be improved in terms of quantity and quality. The number of test samples should be increased to over a thousand to get more statistically robust results. The types of unseen relations should be also increased to have a more comprehensive investigation of the ability to generalize on relations. The negative samples should be elaborately designed to provide the anchor event with different negative samples under different relations. ## Acknowledgement The authors would like to thank the anonymous reviewers for their insightful comments. This work is funded by the National Natural Science Foundation of China (62176053). This work is supported by the Big Data Computing Center of Southeast University. ## References Ben Athiwaratkun and Andrew Gordon Wilson. 2018. Hierarchical density order embeddings. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762–4779, Florence, Italy. Association for Computational Linguistics. Hong Chen, Raphael Shu, Hiroya Takamura, and Hideki Nakayama. 2021. GraphPlan: Story generation by planning with event graph. In *Proceedings of the* 14th International Conference on Natural Language Generation, pages 377–386, Aberdeen, Scotland, UK. Association for Computational Linguistics. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pages 1597–1607. PMLR. Shumin Deng, Ningyu Zhang, Luoqiu Li, Chen Hui, Tou Huaixiao, Mosha Chen, Fei Huang, and Huajun Chen. 2021. OntoED: Low-resource event detection with ontology embedding. In *Proceedings of the 59th* Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2828–2839, Online. Association for Computational Linguistics. 
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Xiao Ding, Kuo Liao, Ting Liu, Zhongyang Li, and Junwen Duan. 2019. Event representation learning enhanced with external commonsense knowledge. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4894– 4903, Hong Kong, China. Association for Computational Linguistics. Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2015. Deep learning for event-driven stock prediction. In *Twenty-fourth international joint conference* on artificial intelligence. Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2016. Knowledge-driven event embedding for stock prediction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2133–2142, Osaka, Japan. The COLING 2016 Organizing Committee. Jun Gao, Wei Wang, Changlong Yu, Huan Zhao, Wilfred Ng, and Ruifeng Xu. 2022. Improving event representation via simultaneous weakly supervised contrastive learning and clustering. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3036–3049, Dublin, Ireland. Association for Computational Linguistics. Mark Granroth-Wilding and Stephen Clark. 2016. What happens next? event prediction using a compositional neural network model. In *Proceedings of the AAAI* Conference on Artificial Intelligence, volume 30. Dimitri Kartsaklis and Mehrnoosh Sadrzadeh. 2014. A study of entanglement in a categorical framework of natural language. *arXiv preprint arXiv:1405.2874*. I-Ta Lee and Dan Goldwasser. 2018. Feel: Featured event embedding learning. In *Proceedings of* the AAAI Conference on Artificial Intelligence, volume 32. I-Ta Lee and Dan Goldwasser. 2019. Multi-relational script learning for discourse relations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4214–4226, Florence, Italy. Association for Computational Linguistics. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In *Twentyninth AAAI conference on artificial intelligence*. Mausam, Michael Schmitz, Stephen Soderland, Robert Bart, and Oren Etzioni. 2012. Open language learning for information extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 523–534, Jeju Island, Korea. Association for Computational Linguistics. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Ashutosh Modi. 2016. Event embeddings for semantic script modeling. In *Proceedings of the 20th SIGNLL* Conference on Computational Natural Language Learning, pages 75–83, Berlin, Germany. Association for Computational Linguistics. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*. 
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. *Advances in* neural information processing systems, 32. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Karl Pichotta and Raymond Mooney. 2016a. Learning statistical scripts with lstm recurrent neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30. Karl Pichotta and Raymond J. Mooney. 2016b. Using sentence-level LSTM language models for script inference. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 279–289, Berlin, Germany. Association for Computational Linguistics. Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. Atomic: An atlas of machine commonsense for ifthen reasoning. In *Proceedings of the AAAI conference on artificial intelligence*, volume 33, pages 3027–3035. Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. *Journal of machine* learning research, 9(11). Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In *Proceedings of the AAAI* conference on artificial intelligence, volume 28. Noah Weber, Niranjan Balasubramanian, and Nathanael Chambers. 2018. Event representations with tensorbased compositions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32. Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, Tie-Yan Liu, et al. 2021. R-drop: Regularized dropout for neural networks. Advances in Neural Information Processing Systems, 34:10890– 10905. Yuanmeng Yan, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, and Weiran Xu. 2021. ConSERT: A contrastive framework for self-supervised sentence representation transfer. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5065–5075, Online. Association for Computational Linguistics. Linhai Zhang, Deyu Zhou, Yulan He, and Zeng Yang. 2021. Merl: Multimodal event representation learning in heterogeneous embedding spaces. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 14420–14427. Jianming Zheng, Fei Cai, and Honghui Chen. 2020. Incorporating scenario knowledge into a unified finetuning architecture for event representation. In Proceedings of the 43rd international ACM SIGIR conference on research and development in information retrieval, pages 249–258. Deyu Zhou, Jianan Wang, Linhai Zhang, and Yulan He. 2021. Implicit sentiment analysis with eventcentered text representation. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 6884–6893, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. 
| Relation | Explanation | Example | |------------|------------------------------------------------------|----------------------------------------| | xIntent | Why does X cause the event | PersonX wanted to be nice | | xNeed | What does X need to do before the event? | PersonX knows PersonY well | | xAttr | How would X be described? | PersonX is caring | | xEffect | What effects does the event have on X? | PersonX will want to chat with PersonY | | xReact | How does X feel after the event? | PersonX will feel good | | xWant | What would X likely want to do after the event? | PersonX will want to chat with PersonY | | oEffect | What effects does the event have on others? | PersonY will smile | | oReact | How do others feel after the event? | PersonY will feel flattered | | oWant | What would others likely want to do after the event? | PersonY will compliment PersonX back | Table 5: Relations explained in Multi-relational event generation. Explanation and examples are taken from (Sap et al., 2019). The head entity is (*PersonX pays PersonY a compliment*). X denotes the subject of the head entity, and o denotes the subject of the tail entity. Table 6: Relations in the MERPES dataset. | Type | Explanation | Example | | |-------------------|----------------------------------------------------------------------------------|------------------------------------------------------------|------------------| | Anchor | Event that will be tested. | journalist capture animal | | | Negative | Event that is textually similar and semantically dissimilar to the anchor event. | journalist capture image | | | Seen relations | oEffect | What effects does the anchor event have on others? | animal is caught | | xNeed | What does X need to do before the anchor event? | person be a hunter | | | Unseen relations | Contrast | What is the opposite of the anchor event? | animal escaped | | Sequential | What is most likely to happen after the anchor event? | person sell animal | | | Prbabilistic test | Probabilistic_1 | An event that is semantically similar to the anchor event. | man hunt deer | | Probabilistic_2 | Another event that is semantically similar to the anchor event. | kid catch insect | | ## A Relations Explained In Multi-Relational Event Generation COMET (Bosselut et al., 2019) is a transformerbased generative model trained on the commonsense knowledge graph, ATOMIC (Sap et al., 2019), which employs 9 types of relations of ATOMIC. ATOMIC constructs event triplets by asking *If-then* questions. For example, for a commonsense "if X pays Y a compliment, *then* then Y will likely return the compliment", the relation between these two events is "what would others likely want to do after the event?", which is denoted as oWant in ATOMIC, then this commonsense will be transformed as an event triplet {(*PersonX pays* PersonY a compliment), *oWant*, (Y will compliment PersonX back}. To capture more knowledge of relations between events, we also employ all 9 types of relations to generate positive samples. The details of the relations used are shown in Table 5. ## B Relations Explained In Merpes Dataset To further investigate the knowledge of multiple relations and uncertainty learned by the event representation learning methods, we propose a multirelational probabilistic event similarity dataset (MRPES). Every sample in MRPES data consists of 8 events. The details of each event and its explanation is shown in Table 6. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. 
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:

✓ A1. Did you describe the limitations of your work? The section Limitations.

✗ A2. Did you discuss any potential risks of your work? Our work follows previous work with the same setting; we therefore believe it introduces no additional risks.

✓ A3. Do the abstract and introduction summarize the paper's main claims? The section Abstract and the section 1 Introduction.

✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B ✓ **Did You Use Or Create Scientific Artifacts?** The section 4 Experiments.

✓ B1. Did you cite the creators of artifacts you used? The section 3 Method and the section 4 Experiments.

✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? The section 3 Method and the section 4 Experiments.

✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? The section 4 Experiments.

✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The section 4 Experiments.

✗ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? The datasets used for training and testing are regular English texts collected by previous work.

✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. The section 4 Experiments.

## C ✓ **Did You Run Computational Experiments?** The section 4 Experiments.

✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? The size of our proposed model is relatively small, so the total computational budget should be affordable for most people in the community.

✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? The section 4 Experiments.

✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? The section 4 Experiments.

✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? The section 4 Experiments.

## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank.

D1.
Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
qi-etal-2023-pragmaticqa
PragmatiCQA: A Dataset for Pragmatic Question Answering in Conversations
https://aclanthology.org/2023.findings-acl.385
Pragmatic reasoning about another speaker's unspoken intent and state of mind is crucial to efficient and effective human communication. It is virtually omnipresent in conversations between humans, e.g., when someone asks "do you have a minute?", instead of interpreting it literally as a query about your schedule, you understand that the speaker might have requests that take time, and respond accordingly. In this paper, we present PragmatiCQA, the first large-scale open-domain question answering (QA) dataset featuring 6873 QA pairs that explores pragmatic reasoning in conversations over a diverse set of topics. We designed innovative crowdsourcing mechanisms for interest-based and task-driven data collection to address the common issue of incentive misalignment between crowdworkers and potential users. To compare computational models' capability at pragmatic reasoning, we also propose several quantitative metrics to evaluate question answering systems on PragmatiCQA. We find that state-of-the-art systems still struggle to perform human-like pragmatic reasoning, and highlight their limitations for future research.
# PragmatiCQA: A Dataset for Pragmatic Question Answering in Conversations

Peng Qi∗✁† Nina Du∗△ Christopher D. Manning△ Jing Huang✄†
✁ AWS AI Labs △ Computer Science Department, Stanford University ✄ Amazon Alexa AI
{pengqi, manning}@cs.stanford.edu

∗ These authors contributed equally. PQ designed the collection mechanism and evaluation metrics, and wrote the paper; ND carried out most of the groundwork for collection/experiments/analyses. CM/JH offered project guidance and support, and helped discuss experiments and analysis. † Work done prior to joining Amazon at JD AI Research.

## Abstract

Pragmatic reasoning about another speaker's unspoken intent and state of mind is crucial to efficient and effective human communication. It is virtually omnipresent in conversations between humans, e.g., when someone asks "do you have a minute?", instead of interpreting it literally as a query about your schedule, you understand that the speaker might have requests that take time, and respond accordingly. In this paper, we present PRAGMATICQA, the first large-scale open-domain question answering (QA) dataset featuring 6873 QA pairs that explores pragmatic reasoning in conversations over a diverse set of topics. We designed innovative crowdsourcing mechanisms for *interest-based* and *task-driven* data collection to address the common issue of incentive misalignment between crowdworkers and potential users. To compare computational models' capability at pragmatic reasoning, we also propose several quantitative metrics to evaluate question answering systems on PRAGMATICQA. We find that state-of-the-art systems still struggle to perform human-like pragmatic reasoning, and highlight their limitations for future research.

## 1 Introduction

Reasoning about interlocutors' unspoken intent or state of mind is a crucial feature of human communication, which allows us to convey ideas and exchange information more efficiently and effectively, assuming that conversation participants are cooperative (Grice, 1975). For instance, when asked "Is there water on Mars?", a friendly, knowledgeable person will not answer just *"Yes"*. Typically, the answerer would anticipate reasonable follow-up questions and/or identify the asker's theme of curiosity, and offer more details (see Figure 1 for an example). This capability of *pragmatic reasoning* is especially helpful when the asker is seeking information from an answerer that is more knowledgeable about the topic discussed, *e.g.*, in a teacher-student discussion, a user-database interaction (Kaplan, 1982), or a user-(virtual-)assistant conversation (Allen and Perrault, 1980).

Question: Is there water on Mars?
Literal, Direct Answer: Yes, there is water on Mars.
Potential follow-up question: Where? In what *form?*
Relevant knowledge: Water has been found in 23 places in our Solar System. Turns out it isn't so parched.
Pragmatic Answer: Yes, but only in the form of ice caps near its poles. In fact, Mars is just one of 23 places where we have found water in the Solar System!

Figure 1: An example of answering an information-seeking question literally vs. pragmatically by reasoning about the asker's unspoken information needs and potential relevant knowledge that might engage the asker.
Recent open-domain question answering (QA) datasets have placed an increasing emphasis on mimicking this information-seeking setting, but they still fall short at two crucial desiderata. First, most datasets mainly focus on evaluating systems' accuracy at finding the literal answer to a question, both in single-turn QA (Rajpurkar et al., 2016; Kwiatkowski et al., 2019) and multi-turn QA (Choi et al., 2018; Reddy et al., 2019). While this simplifies data collection and model evaluation, they cannot evaluate whether a QA system can understand or fulfill unspoken needs behind a question, which can be key to successful and engaging multi-turn interactions. Second, most of these datasets are crowd-sourced, which leaves them vulnerable to the problem of *incentive misalignment* between annotators and potential users (de Vries et al., 2020). This not only affects how or what questions are asked, but also how these questions are answered. In this paper, we present PRAGMATICQA, a conversational open-domain question answering | Topicswitching | Pragmatic Answers & Eval | Incentivealigned | | | | | | | |----------------------------------------------|---|---|----|------------|--------------------|----|----|----| | Dataset | Opendomain | Multiturn | Infoseeking | Extractive | Free-form Response | | | | | Rationale | | | | | | | | | | SQuAD (Rajpurkar et al., 2016) | % | % | % | ! | % | % | % | % | | Wizard of Wikipedia (Dinan et al., 2018) | ! | ! | % | % | ! | % | % | % | | Natural Questions (Kwiatkowski et al., 2019) | ! | % | ! | % | % | % | % | o | | QuAC (Choi et al., 2018) | % | ! | ! | ! | % | % | % | % | | CoQA (Reddy et al., 2019) | % | ! | % | ! | ! | % | % | % | | Curiosity (Rodriguez et al., 2020) | % | ! | ! | % | ! | ! | % | % | | QReCC (Anantha et al., 2021) | ! | ! | o | % | ! | o | % | o | | TOPIOCQA (Adlakha et al., 2021) | ! | ! | ! | % | ! | ! | % | o | | PRAGMATICQA (this work) | ! | ! | ! | ! | ! | ! | ! | ! | dataset that features conversations between humans that involve pragmatic reasoning, the first of its kind to the best of our knowledge. We also present various automated metrics to evaluate QA systems on answer accuracy, pragmatic reasoning, answer naturalness and faithfulness. Aside from pragmatic reasoning, PRAGMATICQA is collected with incentive alignment as a primary design goal. To this end, we curate data with a focus on discussion topics that might share popular interest, and allow crowdworkers to choose topics of mutual interest to converse about instead of prescribing them. This allows crowdworkers to engage in conversations in a manner that closely mirrors a real user on topics they are genuinely curious about. Further, to encourage crowdworkers to explore the topic under discussion, we also design mechanisms where the question asker ("learner") can qualify as an answerer ("teacher") through learning and receive higher pay from the task. This not only allows us to collect high-quality conversational data with workers of varying amount of background knowledge on the same topic, but also aligns the diversity and quality of our data with crowdworkers' compensation. We finetune a Fusion-in-Decoder model on PRAGMATICQA and find that our current models fails to recover >90% the pragmatic information that crowdworkers provided in the data. 
To recap, our contributions in this paper are: 1) we propose PRAGMATICQA, an open-domain conversational question answering (ConvQA) dataset featuring pragmatic answers and quantitative metrics to evaluate pragmatic reasoning in ConvQA; 2) we design a crowdsourcing framework for PRAG-MATICQA that alleviates the problem of incentive misalignment, which yields realistic, high-quality, and diverse data; 3) we analyze PRAGMATICQA and show that it presents unique and important challenges to ConvQA systems today.1 ## 2 Related Work Our work is closely related to three topics, namely open-domain question answering (QA), conversational QA, and computational pragmatic reasoning, in which we review prior work in this section. We also highlight key features of PRAGMATICQA and contrast it with previous work in Table 1. Open-domain QA. The goal of open-domain QA is to answer questions from a large collection of unstructured knowledge (*e.g.*, text). SQuAD Open (Chen et al., 2017) is one of the most widely used datasets in this task, originally adapted from reading comprehension questions collected on Wikipedia passages (Rajpurkar et al., 2016). While it helps benchmark retrieval-based QA, the questions are often too context-dependent and ambiguous (e.g., "What day was the game played on?") or too unnaturally specific (e.g., *"What park* covers an area of 76 ha.?"). In later work, Yang et al. (2018) expand open-domain QA to require multi-step reasoning which helped alleviate the former issue, but the latter remained unresolved. TriviaQA (Joshi et al., 2017) and Natural Questions (Kwiatkowski et al., 2019) take two distinct approaches to improve incentive alignment in opendomain QA. While the former enlists trivia enthusiasts to author questions to reflect their interest, the latter takes questions typed into a search engine and answers them with crowdworkers on Wikipedia. However, as with other prior work, the question answerer is not incentivized to provide helpful answers that might address the asker's unspoken intent beyond a literal interpretation of the question. Conversational QA. The growing interest in interactive natural language processing (NLP) sys-1We release the data and code for the baseline at https: //github.com/qipeng/PragmatiCQA. tems has also driven the development of conversational QA (ConvQA) resources. Beyond reading comprehension QA tasks in conversations like QuAC (Choi et al., 2018) and CoQA (Reddy et al., 2019) and knowledge-enhanced chitchat like Wizard of Wikipedia (Dinan et al., 2018), there has also been growing interest in open-domain ConvQA tasks to closely imitate how virtual assistants operate in real life. QReCC (Anantha et al., 2021) and TOPIOCQA (Adlakha et al., 2021) are two recent benchmarks in this direction. The former focuses on evaluating retrieval-based ConvQA systems on coreference and ellipsis resolution, and the latter is designed to train ConvQA systems to handle natural topic transitions. While both datasets attempt to simulate real-world information seeking by seeding conversations with questions from Natural Questions, these questions are assigned to crowd workers that are not necessarily interested in them, and thus the actual conversations might still fall short at closely modeling the conversation between a curious user and a helpful assistant. ## Computational Pragmatic Reasoning. 
Since the publication of the Gricean cooperative principles between rational speakers (Grice, 1975), various frameworks have been proposed to characterize pragmatic reasoning in discourse understanding (Marcu, 1998) and in multi-agent communication, where the latter includes plan inference (Allen and Perrault, 1980), plan inference with discourse coherence (Asher and Lascarides, 1998, 2003), game theoretic analysis (Stevens et al., 2016), and rational speech acts (RSA; Frank and Goodman, 2012). When applied to textual responses of natural language interfaces, these techniques are often referred to as "over-answering" (Wahlster et al., 1983; Bersia et al., 1986) or "coorperative response generation" (Kaplan, 1982; Cheikes and Webber, 1989), which find their roots in database systems with natural language interfaces that serve users on knowledge-intensive tasks like question answering. We note, however, that most prior work focus on settings where all agents share all the referents involved for pragmatic reasoning, *e.g.*, the set of colors (Monroe et al., 2017), images (Cohn-Gordon et al., 2018), environments (Fried et al., 2018), and sometimes a finite set of utterances used to refer to them. Essentially, both speakers share the same information aside from the identity of the target referent (or the goal/plan) available only to the speaker, and computational approaches focus on efficient normalization over referents or utterances (CohnGordon et al., 2018). In an information-seeking conversation, however, agents need to navigate the information asymmetry in their knowledge of potential referents, where shared common sense and pragmatic reasoning on the question answerer's part play an important role. We believe that PRAG-MATICQA provides a starting point and benchmark for the development of computational pragmatic reasoning approaches under information asymmetry with the full complexity of natural language. ## 3 Pragmati**Cqa: Pragmatic Question** Answering In Conversations In this section, we introduce how we crowdsource PRAGMATICQA ranging from data source and processing to details about task design and how it helps align crowdworker interest with that of potential users. We conclude the section with evaluation metrics we propose to assess the pragmatic reasoning of QA systems, as well as statistics of the dataset. ## 3.1 Data Preparation To engage crowdworkers in a teaching/learning conversation they are interested in, we select Fandom2as the source of our corpus for these conversations. Similar to Wikipedia, Fandom is a crowdmaintained web-based encyclopedia service on a wide variety of topics. Unlike Wikipedia, however, Fandom is largely organized around entertainment topics with content contributed by fans, where each topic is elaborated in a *community* of hundreds to thousands of webpages about each detail. As a result, Fandom is not only an ideal source of topics that might interest crowdworkers, but also offers a diverse set of relatively isolated topics to test models' few-shot or zero-shot generalization. To select topics for data collection, we organize Fandom communities by their genres,3and select the most active communities as candidates for data collection (see Section 3.3 for more details). For each community, we manually choose a "central" page from which we follow hyperlinks up to three levels to scrape related topics. 
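As a rough illustration of this scraping step, the sketch below performs a breadth-first crawl up to three hyperlink levels from a community's central page; the use of requests/BeautifulSoup and the specific seed URL are illustrative assumptions, not the exact pipeline described here.

```python
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl_community(central_url: str, max_depth: int = 3) -> dict:
    """Collect pages reachable from a central page within max_depth link hops."""
    domain = urlparse(central_url).netloc
    pages, queue = {}, deque([(central_url, 0)])
    while queue:
        url, depth = queue.popleft()
        if url in pages or depth > max_depth:
            continue
        html = requests.get(url, timeout=10).text
        pages[url] = html
        if depth == max_depth:
            continue
        soup = BeautifulSoup(html, "html.parser")
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])
            # Stay within the same Fandom community (same host).
            if urlparse(link).netloc == domain and link not in pages:
                queue.append((link, depth + 1))
    return pages

# Example with a hypothetical seed page:
# pages = crawl_community("https://lotr.fandom.com/wiki/Main_Page")
```

Navigation and other non-content elements are then stripped from each scraped page, as described next.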
We remove navigation bars and sections from each page to limit the scope of hyperlinks to the main body, which will serve as the reading material for crowdworkers to answer questions from. We keep these hyperlinks in place for crowdworkers to navigate between webpages to find answers to questions. We discuss more details about the communities we selected for data collection in Appendix A.

## 3.2 Collecting Pragmatic Responses

In PRAGMATICQA, we pair crowdworkers to engage in a conversation about a topic drawn from Fandom. In each conversation, one crowdworker takes the role of the *teacher* and the other the *student*, and they are free to explore topics related to the central page of the topic. Every conversation starts with a question from the student (e.g., *"What is the Lord of the Rings?"*), which can be about a specific entity or event within the topic if it is not the first time a student is learning about it (e.g., *"Who is Gandalf?"*). Then, the teacher attempts to answer the question with Fandom pages (that the student cannot access) in three steps. First, the teacher selects a collection of extracted spans Alit that answers the question based on its literal interpretation (e.g., "The Lord of the Rings is an epic high fantasy novel written by J.R.R. Tolkien"), or simply answers "Yes", "No", or "I don't know" if the pages do not contain the answer. This is similar to the extractive answers provided in previous conversational QA datasets (*e.g.* Reddy et al., 2019). Then, the teacher is tasked to select spans Aprag that might answer questions the student will ask next given the literal answer (*e.g.*, "The story concerns peoples such as Hobbits, Elves, Men, Dwarves, Wizards, and Orcs (called goblins in The Hobbit), and centers on the Ring of Power made by the Dark Lord Sauron."). Finally, to answer the question, the teacher is tasked to combine information from both span collections, and paraphrase it into a conversational response a.

The student is tasked to come up with a follow-up question once a response is received. As soon as this question is sent, the student is presented with a survey about the response that was just received, regarding its naturalness, the quality of the span collections (not revealed until the survey), as well as the faithfulness of the final paraphrased response to these selected spans. This survey is answered concurrently with the teacher's answering of the follow-up question so that we minimize crowdworkers' wait time and keep the utterances as conversational as possible. Once 6 rounds of QA pairs are reached, the crowdworkers can choose to leave the conversation at any time, and at the time they exit the task, we present both crowdworkers a short survey for feedback on the topic, the other crowdworker, and the task itself. We refer the reader to Appendix B for annotation guidelines and Appendix C for more details about the task interface.

## 3.3 Aligning Crowdworker Incentives With Potential Users

To align the interests of crowdworkers with potential users, we make several design choices to encourage crowdworkers to be genuinely curious about the discussion topic, to sufficiently explore it, and to produce good questions and responses in the task.

Incentive to learn. One common feature of crowdsourcing tasks that might lead to incentive misalignment is prescribed topics or roles, especially in a conversation, since they are unlikely to match crowdworkers' personal interests or life experiences.
To mitigate this issue, we begin by curating communities and topics from Fandom of popular interest. Specifically, we filter out Fandom communities with less than 100 content pages and 10 active users in the past 30 days, and rank them by the average edits per page as a proxy for user engagement. For each genre, we keep the top 30 communities as candidates for discussion topics for crowdworkers. Beyond the curation of the pool of topics, we also provide crowdworkers means to indicate topics of mutual interest as they are paired to converse. At the beginning of each conversation, each crowdworker is shown a handful of potential topics for discussion, and asked to indicate their inclinations to teach or learn about each topic (see Figure 2). Once both crowdworkers have indicated their preferences, we will automatically select the topic that is the most compatible between the two crowdworkers for data collection, and assign teacher/student roles accordingly. Incentive to explore. Although our topics are selected to match the student's curiosity and the teacher's self-proclaimed expertise, the topic alone does not guarantee that the student would explore knowledge about the topic as a curious user would, or that the teacher would be able to support them effectively. We incorporate two mechanisms to further align crowdworker incentives. First, both crowdworkers are paid more in rewards with each additional turn they finish beyond the minimum requirement, and the pay for each turn grows as the conversation persists longer. This encourages both crowdworkers to explore more topics within the conversation. Second, we design a simple *background knowledge test* for each crowdworker to gauge their readiness to teach a certain topic, inspired by Rodriguez et al. (2020). Similar to that work, we try to come up with multiple choice questions for each topic for crowdworkers to indicate their level of background in a given topic. However, since Fandom covers a large variety of diverse topics, we cannot easily come up with a fixed set of questions for each, like Rodriguez et al. (2020) did with geographic entities. We instead generate a list of popular page titles for each topic via personalized PageRank, and ask crowdworkers to answer which titles are related and which are unrelated from a list consisting of five relevant titles and five drawn from popular titles of other topics in the same genre (see Figure 3 for an example). This not only allows us to automatically generate these tests for any given topic, but also serves as an automatic qualification mechanism for crowdworkers to teach a topic, and incentivize students to explore the topic sufficiently should they want to teach the topic and get paid more per conversation. ## 3.4 Evaluation Metrics Once a question answering model is built on PRAG-MATICQA, we are interested in quantifying its performance with the data we have collected with the help of crowdworkers. Assume that a QA model produces predictions on all of the categories of information a crowdworker is asked to provide in PRAGMATICQA, namely a collection of literal answer spans Aˆlit extracted from the webpages, a collection of pragmatic answer spans Aˆprag, and a final paraphrased answer aˆ. Given these model predictions and their humanannotated counterparts, it is relatively straightforward to understand how accurate models are at answering the question based on its literal interpretation. 
We employ the standard F1 metric for extractive question answering popularized by Rajpurkar et al. (2016):

$$\mathrm{F}_{1}^{\mathrm{lit}}=\mathrm{F}_{1}(\hat{\mathcal{A}}_{\mathrm{lit}},\mathcal{A}_{\mathrm{lit}}),\qquad(1)$$

which is effectively the same as the F1 metric employed by previous extractive question answering datasets.

For PRAGMATICQA, we are further interested in how well models can learn to emulate the pragmatic behavior of human annotators. For this purpose, F1(Aˆprag, Aprag) would seem to be a good candidate. However, this metric does not account for the potential dependency between Aˆprag and Aˆlit, between Aprag and Alit, as well as potential prediction errors. Ideally, we would like to capture the model's pragmatic reasoning *beyond* the information that is already in Alit and assign a score of zero if no additional information is provided in overlap with Aprag. In pathological cases where Alit ∩ Aprag ̸= ∅, using F1(Aˆprag, Aprag) as the pragmatics metric allows predictions like Aˆprag = Alit to receive non-zero scores, despite revealing no information that requires pragmatic reasoning. Comparing Aˆprag to Aˆlit is unlikely to be helpful, either, since it is possible to maximize their difference by setting Aˆlit = ∅. We therefore design the following metric to gauge the model's pragmatic reasoning against annotations:

$$\mathrm{F}_{1}^{\mathrm{prag}}=\mathrm{F}_{1}(\hat{\mathcal{A}}_{\mathrm{prag}}-\mathcal{A}_{\mathrm{lit}},\mathcal{A}_{\mathrm{prag}}-\mathcal{A}_{\mathrm{lit}}),\qquad(2)$$

where B − A removes all spans in A from spans in B. It can be seen that this avoids the aforementioned pathological cases, and properly assigns a score of zero unless Aˆprag contains information beyond Alit that overlaps with Aprag.

Last but not least, we are also interested in evaluating the final response aˆ. For this, we can apply reference-based evaluation metrics to compare it directly with a. Here, we apply the symmetric BARTScore (Yuan et al., 2021):

$$Q(\hat{a})=\frac{\text{BARTScore}(\hat{a},a)+\text{BARTScore}(a,\hat{a})}{2},\qquad(3)$$

where BARTScore(x, y) uses a trained BART (Lewis et al., 2020) model on a text-to-text dataset (e.g., summarization or paraphrasing) to obtain the token-averaged conditional log likelihood of sequence y given sequence x as the input. BARTScore has been shown to exhibit better correlation with human judgement in a variety of tasks compared to prior model-based evaluation metrics. We use the symmetric formulation as a proxy for semantic equivalency, and we use the model finetuned on the CNN/DailyMail dataset.
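To make these span-level metrics concrete, the following is a minimal sketch of one way to compute Eq. (1) and Eq. (2); treating each span collection as a bag of whitespace tokens and implementing span difference as token removal are simplifying assumptions, not the official evaluation script.

```python
from collections import Counter
from typing import List

def _tokens(spans: List[str]) -> Counter:
    # Treat a span collection as a bag of lower-cased whitespace tokens.
    return Counter(tok for span in spans for tok in span.lower().split())

def _f1(pred: Counter, gold: Counter) -> float:
    overlap = sum((pred & gold).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(gold.values())
    return 2 * precision * recall / (precision + recall)

def f1_lit(pred_lit: List[str], gold_lit: List[str]) -> float:
    # Eq. (1): F1 between predicted and gold literal span collections.
    return _f1(_tokens(pred_lit), _tokens(gold_lit))

def f1_prag(pred_prag: List[str], gold_prag: List[str], gold_lit: List[str]) -> float:
    # Eq. (2): remove gold literal content from both sides before scoring,
    # so credit is only given for information beyond A_lit.
    lit = _tokens(gold_lit)
    return _f1(_tokens(pred_prag) - lit, _tokens(gold_prag) - lit)
```

The response-level score in Eq. (3) would then be computed separately, averaging the two directions of a pretrained BART-based scorer over the final paraphrased answer and its reference.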
| Topics | Genres | |-----------|----------------|---------|----------|----------| | Train | 4027 | 526 | 34 | 8 | | Dev | 1479 | 162 | 8 | 8 | | Test | 1367 | 193 | 9 | 8 | | Total | 6873 | 881 | 51 | 8 | | Component | Average Length | | | | | Q | 8.34 | | | | | A | 31.37 | | | | | Alit | 13.47 | | | | | Aprag | 22.90 | | | | ![5_image_0.png](5_image_0.png) come from 2.55 different HTML elements (usually different passages in the document). An average conversation in PRAGMATICQA extracts answer spans from 16.1 unique HTML elements from an average of 3.11 unique web pages, or a new page every 2.5 turns. This shows that PRAGMATICQA's setup encourages crowd workers to explore the topic of interest at a great depth, leading to natural topic shifts throughout the conversation. PRAGMATICQA features a diverse set of questions (see Figure 4), which elicit a diverse set of literal answers, of which about 22.1% are "Yes/No/I don't know" answers with equal proportions. The rest of the literal answers contain a combination of short factoid answers and longer narrative answers Q: When had the first Zelda game been released? Alit: It came out as early as 1986 for the Famicom in Japan, and was later released in the western world, including Europe and the US in 1987. Aprag: The Legend of Zelda is the first installment in the Zelda franchise, and its success allowed the development of sequels. In one or another way, nearly every title in the series is influenced by this game A (a): The Legend Of Zelda was first released as early as 1986 in Japan and later to the western world in 1987. It was the first installment in the Zelda franchise and its sucess allowed the development of sequels, with nearly every game in the series influenced by it! Figure 5: An Example QA pair in PRAGMATICQA with literal and pragmatic answer spans. Taken from a conversation about "The Legend of Zelda". | Non-empty | All | | | | |-------------------|---------|----------|----------|-------| | Incentive \ Stats | Secs/QA | QAs/Dial | QAs/Dial | | | Topic sel. | ! | 341* | 6.24 | 1.64 | | % | 288* | 5.52 | 1.36 | | | BG test | ! | 341 | 6.12 | 1.47* | | % | 301 | 6.12 | 2.14* | | Table 3: Effect of incentive alignment techniques. "Topic sel." gives crowd workers freedom to choose topics to converse about, which incentivizes them to be curious and learn; "BG test" screens crowd workers that are not qualified to teach, which incentivizes them to fully explore each topic. !/% means the technique is enabled/disabled in A/B testing. * indicates results where the 95% confidence interval are disjoint. "Secs/QA" stands for the number of seconds crowd workers spend per QA pair, and "QAs/Dial" is the average number of QA pairs per dialogue. (with quartile span lengths of 4, 12, and 21 tokens, respectively). Of the pragmatic answer spans, we find that 41% answer potential follow-up questions the Student worker might ask given the literal spans, 25% offer information from the web pages that helps sustain the conversation, and 22% do a bit of both. An actual example of PRAGMATICQA can be found in Figure 5. We further study the effect of the incentive alignment techniques we presented in Section 3.3. 
Specifically, during data collection, we perform an A/B test for each technique, where we target 80% of completed conversations collected with each feature independently.4 As can be seen in Table 3, when crowd workers are committed to converse on a topic ("non-empty", conversations with at least one QA pair), both techniques incentivize crowd workers to spend more time in the conversation, with a statistically significant gain observed on time per QA pair from allowing workers to select their topics of interest to discuss. Furthermore, we find that crowd workers are more likely to engage in longer conversations on a topic of their choosing, and spend more time to finish QA pairs when the Teacher worker is qualified through the background knowledge test. Finally, we find that the background knowledge test has a statistically significant filtering effect on conversations that could have taken place with an underqualified Teacher worker. That is, while non-empty conversations are qualitatively similar in length, the number of empty conversations as a result of enabling the background knowledge test significantly drives down the average QA pairs per conversation when they are considered. In contrast, the filtering effect for topic selection is much less pronounced, because the crowd worker has a wide variety of topics to choose from. ## 4 Experiments 4.1 Model And Setup In our experiments to establish a baseline on PRAG-MATICQA, we use a Fusion-in-Decoder (FiD) model (Izacard and Grave, 2021) with a dense passage retriever (DPR) (Karpukhin et al., 2020). DPR is a pretrained Transformer (Vaswani et al., 2017) retrieval model that helps us find correct passages from the Fandom corpus to answer questions, and FiD is a technique to combine top retrieved passages in a conditional generative model for efficient generation. We use BART-large (Lewis et al., 2020), a pretrained Transformer sequence-tosequence model to generate answers.5 We finetune the DPR question encoder and the BART model on the training set of PRAGMATICQA with the Adam optimizer (Kingma and Ba, 2015) with batch size of 4 and initial learning rate of 10−5 on two RTX 3090 GPUs, and select the model that achieves the best dev performance and stop training until the model fails to improve dev performance for 5 consecutive evaluations. The total training time is that they cannot always skip configurations they do not like and introduce worker confounds in the results. 5We use the publicly available implementation and models from the Transformers (Wolf et al., 2020) library in our experiments and train them with ParlAI (Miller et al., 2017). | Top-k docs | dev | test | | | | | | | | | | |--------------|---------|---------|----------|-------|--------|---------|---------|----------|-------|--------|--------| | R@k | R All@k | F lit 1 | F prag 1 | Q(ˆa) | R@k | R All@k | F lit 1 | F prag 1 | Q(ˆa) | | | | k = | 1 | 1.59 | 1.49 | 9.85 | 5.23 | -3.937 | 1.83 | 1.61 | 11.08 | 5.49 | -3.931 | | k = | 5 | 5.77 | 5.48 | 11.48 | 6.11 | -3.754 | 4.90 | 4.24 | 11.92 | 5.73 | -3.741 | | k = 10 | 8.12 | 7.51 | 11.25 | 5.99 | -3.717 | 7.10 | 6.14 | 11.87 | 5.54 | -3.655 | | | k = 20 | 10.27 | 9.40 | 10.67 | 5.32 | -3.666 | 9.62 | 8.34 | 11.94 | 5.27 | -3.640 | | | k = 50 | 14.52 | 13.12 | 10.27 | 5.00 | -3.679 | 14.81 | 12.73 | 11.12 | 4.94 | -3.636 | | about 4 hours. We report further training settings and hyperparameters in Appendix E. 
The model is provided with unlimited conversation history during training and evaluation, and the output is formatted as follows: Literal Span 1 </lit> ... </lit> Literal Span n </lit> Pragmatic Span 1 </prag> ... Pragmatic Span m </prag> Final Answer </s>. We experiment with different top k passages used during evaluation, and report retrieval performance of DPR, F lit 1and F prag 1of the selected spans, and Q(ˆa) of the final answer. ## 4.2 Main Results We report model performance by varying the number of top documents retrieved with DPR when each response is generated. As can be seen in Table 4, the recall of the gold context grows as expected with increasing k, which in turn also leads to improved final answer quality Q(ˆa) initially. We also find that the quality of literal answers (F lit 1 ) selected improves slightly with more passages retrieved presumably with more gold contexts readily available in top retrieval results. However, we notice that F prag 1does not improve similarly. This cannot entirely be attributed to a lackluster retrieval performance, since the recall of the entire set of gold documents improves as the number of top documents increases, suggesting that predicting pragmatic spans is a more challenging task than predicting answer spans and fluent final answers. ## 4.3 Analysis In this section, we analyze the prediction of the FiD model, highlight success and failure cases, and provide an empirical analysis of the proposed F prag 1 metric. For these analysis, we use retrieve k = 5 top passages for each response, which has the best PRAGMATICQA performance on the dev set. As can be seen in the examples in Figure 6, the F prag 1 =0, F lit 1 =0, R@5=100.0, *Q(ˆa*)=-3.151 Q1: What award did the Maze runner win? Gold: ALA Best Fiction for Young Adults , The Maze Runner (2011) </lit> Young Reader's Choice Award , intermediate grades, The Maze Runner (2012) </prag> it won the ALA Best Fiction for Young Adults in 2011, and Young Reader's Choice Award in 2012. </s> Pred:I don't know </lit> The Maze Runner: The Scorch Trials </prag> The film won the Academy Award for Best Visual Effects for its use of CGI. </s> F prag 1 =6.67, F lit 1 =100.00, R@5=0.0, Q(ˆa)=-3.131 Q2: Ok, Is there a signature color that Taylor has? Gold: I don't know </lit>*✿✿✿✿✿* When *✿✿✿✿* Swift✿✿✿ was✿✿ in*✿✿✿✿✿* fourth *✿✿✿✿✿* grade, ✿✿ she*✿✿✿✿* won✿ a*✿✿✿✿✿✿* national*✿✿✿✿✿* poetry*✿✿✿✿✿✿* contest*✿✿✿✿* with ✿ a *✿✿✿✿✿✿✿✿* three-page *✿✿✿✿✿* poem, ✿✿✿✿✿✿✿ "Monster ✿✿ in ✿✿✿ My *✿✿✿✿✿* Closet" </prag> I do not know of a signature color Taylor Swift has, but when she was in the 4th grade, she won a national poetry cont est with a 3 page poem called "Monster in My Closet". </s> Pred: I don't know </lit> ✿✿✿ She ✿ is*✿✿✿✿✿✿* known ✿✿ for✿✿✿ her*✿✿✿✿✿* bright ✿✿ red ✿✿✿ hair ✿✿✿ and *✿✿✿✿* bright*✿✿✿✿* blue*✿✿✿✿* eyes. </prag> I do not know. However, she has a bright red and blue color. </s> F prag 1 =72.73, F lit 1 =8.16, R@5=100.0, *Q(ˆa*)=-3.239 Q3: Where did Batman grow up? Gold: Wayne Manor </lit> wealthy splendor✿✿✿ and*✿✿✿✿* leads ✿ a *✿✿✿✿✿* happy ✿✿✿ and *✿✿✿✿✿✿✿✿* privileged *✿✿✿✿✿✿✿* existence *✿✿✿✿* until *✿✿✿✿✿✿* roughly ✿✿ the✿✿✿ age✿✿ of ✿✿✿✿ eight. </prag> Wayne Manor. There Batman lived in wealth and lead a happy and privileged life until he was eight years old. </s> Pred: Born in Gotham City to Thomas Wayne ... and his wife Martha , Bruce Wayne grew up ... Wayne Manor estate ... their trusted butler Alfred J. "Alfred" Pennyworth . 
</lit> He was brought up in Wayne *Manor* and its wealthy splendor ✿✿ and*✿✿✿✿* leads✿✿ a *✿✿✿✿✿* happy ✿✿✿ and *✿✿✿✿✿✿✿* privileged*✿✿✿✿✿✿✿✿* existence ✿✿✿ until ✿✿✿✿✿✿ roughly ✿✿ the✿✿✿ age✿✿ of*✿✿✿✿✿✿* twenty. </prag> Born in Gotham, Bruce Wayne is born in a mansion on Wayne Manor. He is brought up there by his butler, Alfred Figure 6: Examples from PRAGMATICQA's dev set, the FiD model's predicted answers, and the metrics of the corresponding answers. For ease of reading, pragmatic spans are highlighted and roughly categorized as satisfying unspoken information needs, providing*✿✿✿✿✿✿✿* relevant ✿✿✿✿✿✿✿✿✿ knowledge, or irrelevant to *either*. trained model can exhibit several different failure modes. In the first example, we see that the model misinterpreted the Maze Runner book series as a film. While this is a common problem in singleturn open-domain QA (Min et al., 2020), here, it should have been clear from the conversational context what the Student worker is referring to. The second example exhibits a widely know issue with generative QA models known as factual inaccuracy or hallucination. Here, the model fabricated information about Taylor Swift's hair and eye colors, presumably triggered by the word "color" in the question. Note that in this case the model is also generating a paraphrased answer that is not consistent with the spans it generated. Finally, the third example shows several issues. First, the predicted literal answer provides too much information that does not directly answer the question, unlike the succinct span that crowd workers annotated. Second, the predicted pragmatic span repeats information from the literal span. Third, the model also hallucinates when generating the pragmatic span, where instead of "the age of eight", the model generated "the age of twenty". These examples suggest that current models still struggle with multiple facets of the task presented by PRAGMATICQA: retrieval accuracy, factually grounded generation, generation consistency, entity disambiguation, as well as the ability to retrieve pragmatically useful information to present. We do note, however, that the proposed F prag 1properly awards models for surfacing information that is not in the gold literal spans but in the gold pragmatic spans regardless of what the predicted literal answer is, effectively decoupling the evaluation of the two. We find that the full suite of proposed metrics, when used together, can holistically evaluate the answer quality and pragmatic reasoning strength of the conversational QA model. ## 5 Discussion: Evaluation On Pragmaticqa While the proposed evaluation metrics are useful to provide quality estimates of the provided answers, especially when it comes to how well the prediction matches the annotators' pragmatic behavior, we acknowledge that this is far from a complete set of evaluations desirable for real-world systems developed on PRAGMATICQA to be useful. First, the evaluation metrics presented do not evaluate the final system response on its factuality or faithfulness to the spans selected, which is crucial for real-world systems. Liu et al. (2023) recently report that publicly available generative search engines are still far from satisfactory on this front, which we speculate will be more challenging for pragmatic responses such as those in PRAG-MATICQA. Second, unlike the literal answer, the definition of pragmatic responses in an information-seeking conversation is open-ended and more subjective in nature. 
In this paper, we explored categorizing these into two broad categories, answers to address potentially unspoken information needs, and potential relevant knowledge that can be helpful, but this is far from comprehensive, since good pragmatic responses could involve clarification questions that are not covered by PRAGMATICQA. Even within these categories, we see that at a given point in the conversation, there are typically more than one follow-up questions to be asked given the literal response; relevant knowledge is only more diverse. While a high F prag 1score can approximate a sufficient condition for a pragmatic natural language system, it might be far from necessary due to the potential existence of multiple good answers. Both of these suggest that additional evaluation metrics are necessary for PRAGMATICQA, which we leave to future work. We believe, given these observations, that model-based evaluation will become crucial in the pursuit of better evaluation methods on PRAGMATICQA, where our dataset will provide the resource to help kickstart the exploration. In the meantime, we also believe that the metrics presented in this paper can still serve as good proxies for evaluating model's pragmatic behavior until more powerful evaluation methods are available. ## 6 Conclusion We presented PRAGMATICQA, the first opendomain conversational question answering dataset featuring pragmatic answers and quantitative metrics to evaluate pragmatic reasoning in conversational QA. PRAGMATICQA is collected with innovative crowdsourcing techniques, including techniques that better align crowd worker incentive with eventual users of ConvQA systems that improves crowd worker engagement and data quality. Finally, we show in our experiments that questions in PRAGMATICQA present unique and important challenges to ConvQA systems today, and open new research directions for investigation. ## 7 Limitations PRAGMATICQA is collected via crowdsourcing on English-language material from Fandom.com, where community-maintained wiki pages are used as reading materials and basis for answering questions. Therefore, it cannot be guaranteed that the excerpts from Fandom will be factually correct or stay unchanged over time, and in turn the answers in PRAGMATICQA are also not factually verified. Furthermore, techniques or models developed on PRAGMATICQA might not be generally applicable to non-English languages or non-entertainment topics without further adjustment or evaluation. More importantly, the crowd workers that participated in PRAGMATICQA are geographically limited to primarily English-speaking countries, and therefore might not represent typical pragmatic reasoning behaviors of people that speak different first languages or come from different cultural backgrounds. Therefore, it should not be treated as a universal standard for pragmatic reasoning in information-seeking conversations, but rather a single reference point. ## Acknowledgments The authors would like to thank the anonymous reviewers for the discussion and suggestions. This research was supported in part by the SAIL-JD Research Initiative. ## References Vaibhav Adlakha, Shehzaad Dhuliawala, Kaheer Suleman, Harm de Vries, and Siva Reddy. 2021. TopiOCQA: Open-domain conversational question answeringwith topic switching. *arXiv preprint* arXiv:2110.00768. James F Allen and C Raymond Perrault. 1980. Analyzing intention in utterances. *Artificial intelligence*, 15(3):143–178. 
Raviteja Anantha, Svitlana Vakulenko, Zhucheng Tu, Shayne Longpre, Stephen Pulman, and Srinivas Chappidi. 2021. Open-domain question answering goes conversational via question rewriting. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 520–534, Online. Association for Computational Linguistics. Nicholas Asher and Alex Lascarides. 1998. Questions in dialogue. *Linguistics and Philosophy*, 23(3):237– 309. Nicholas Asher and Alex Lascarides. 2003. Logics of conversation. Cambridge University Press. Nicoletta Bersia, Barbara Di Eugenio, Leonardo Lesmo, and Pietro Torasso. 1986. The overanswering mechanism in the Fido system. In *Cybernetics and Systems* '86: Proceedings of the Eighth European Meeting on Cybernetics and Systems Research, organized by the Austrian Society for Cybernetic Studies, held at the University of Vienna, pages 823–830. Springer. Brant A. Cheikes and Bonnie L. Webber. 1989. Elements of a computational model of cooperative response generation. In *Speech and Natural Language:* Proceedings of a Workshop Held at Philadelphia, Pennsylvania, February 21-23, 1989. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In *Proceedings of the 55th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879, Vancouver, Canada. Association for Computational Linguistics. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2174–2184, Brussels, Belgium. Association for Computational Linguistics. Reuben Cohn-Gordon, Noah Goodman, and Christopher Potts. 2018. Pragmatically informative image captioning with character-level inference. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 439–443, New Orleans, Louisiana. Association for Computational Linguistics. Harm de Vries, Dzmitry Bahdanau, and Christopher Manning. 2020. Towards ecologically valid research on language user interfaces. *arXiv preprint* arXiv:2007.14435. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. *arXiv preprint arXiv:1811.01241*. Michael C Frank and Noah D Goodman. 2012. Predicting pragmatic reasoning in language games. *Science*, 336(6084):998–998. Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, and Trevor Darrell. 2018. Speaker-follower models for vision-and-language navigation. *Advances in* Neural Information Processing Systems, 31. H. P. Grice. 1975. Logic and conversation. In Peter Cole and Jerry L. Morgan, editors, *Speech Acts: Syntax and Semantics Volume 3*, pages 41–58. Academic Press, New York. Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In *Proceedings of the 16th* Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880, Online. Association for Computational Linguistics. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. 
TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics. S Jerrold Kaplan. 1982. Cooperative responses from a portable natural language query system. Artificial Intelligence, 19(2):165–187. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. *Transactions of the Association for Computational Linguistics*, 7:452–466. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Nelson F Liu, Tianyi Zhang, and Percy Liang. 2023. Evaluating verifiability in generative search engines. arXiv preprint arXiv:2304.09848. Daniel Marcu. 1998. A surface-based approach to identifying discourse markers and elementary textual units in unrestricted texts. In Discourse Relations and Discourse Markers. Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A dialog research software platform. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 79–84, Copenhagen, Denmark. Association for Computational Linguistics. Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. AmbigQA: Answering ambiguous open-domain questions. In *Proceedings of* the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5783– 5797, Online. Association for Computational Linguistics. Will Monroe, Robert X.D. Hawkins, Noah D. Goodman, and Christopher Potts. 2017. Colors in context: A pragmatic neural model for grounded language understanding. *Transactions of the Association for* Computational Linguistics, 5:325–338. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. *Transactions of the Association for Computational Linguistics*, 7:249–266. Pedro Rodriguez, Paul Crook, Seungwhan Moon, and Zhiguang Wang. 2020. 
Information seeking in the spirit of learning: A dataset for conversational curiosity. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 8153–8172, Online. Association for Computational Linguistics.

Jon Scott Stevens, Anton Benz, Sebastian Reuße, and Ralf Klabunde. 2016. Pragmatic question answering: A game-theoretic approach. *Data & Knowledge Engineering*, 106:52–69.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc.

Wolfgang Wahlster, Heinz Marburger, Anthony Jameson, and Stephan Busemann. 1983. Over-answering yes-no questions: Extended responses in a NL interface to a vision system. In *Proceedings of the Eighth International Joint Conference on Artificial Intelligence - Volume 2*, pages 643–646.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations*, pages 38–45, Online. Association for Computational Linguistics.

Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2369–2380, Brussels, Belgium. Association for Computational Linguistics.

Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. BARTScore: Evaluating generated text as text generation. *Advances in Neural Information Processing Systems*, 34.

| Genre | Communities | Examples |
|-----------|-------------|------------------------------------------------|
| Anime | 14 | Soul Eater, One Piece, Studio Ghibli |
| Books | 12 | H. P. Lovecraft, Wizard of Oz, The Maze Runner |
| Comics | 8 | Sonic the Hedgehog, Batman, Peanuts Comics |
| Games | 8 | Halo, Fallout, The Legend of Zelda |
| Lifestyle | 6 | Olympics, The Formula 1, LEGO |
| Movies | 10 | Pixar, Harry Potter, The Matrix |
| Music | 7 | Lady Gaga, 'Cats' Musical, Taylor Swift |
| TV | 8 | Doom Patrol, Game of Thrones, Doctor Who |

Table 5: Communities used in PRAGMATICQA collection.

## A Fandom Communities Used in the Collection of PRAGMATICQA

We collect data for PRAGMATICQA on eight genres of Fandom communities, and ensure that the coverage for genres is roughly even. Table 5 contains the number of communities used in each genre and example communities. We select up to 1,000 pages from each community by following hyperlinks from a hand-chosen landing page to up to three levels. The resulting average community used during data collection contains about 390 web pages each, which results in 401,042 DPR passages when processed and converted into plaintext.
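A minimal sketch of one way to perform this conversion is shown below; the 100-word passage size follows the common DPR convention and, like the use of BeautifulSoup for text extraction, is an assumption rather than a documented detail of the pipeline.

```python
from typing import Dict, List
from bs4 import BeautifulSoup

def html_to_passages(pages: Dict[str, str], words_per_passage: int = 100) -> List[dict]:
    """Convert scraped HTML pages into fixed-size plaintext passages for DPR."""
    passages = []
    for url, html in pages.items():
        soup = BeautifulSoup(html, "html.parser")
        title = soup.title.get_text(strip=True) if soup.title else url
        words = soup.get_text(separator=" ").split()
        for i in range(0, len(words), words_per_passage):
            passages.append({
                "id": f"{url}#{i // words_per_passage}",
                "title": title,
                "text": " ".join(words[i:i + words_per_passage]),
            })
    return passages
```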
Each HTML element is marked with a unique UUID key to record span start and end during data collection, so that PRAGMATICQA can provide strong supervision for answer spans rather than relying on distant supervision post hoc. ## B Guidelines For Crowdworkers Please see Figure 7 for the guidelines we use for our crowdsourcing task. ## C Crowdworker Interface Please see Figure 8 for an example of our crowdsourcing interface. Our interface is built on the Mephisto toolkit6and ParlAI (Miller et al., 2017). Both interfaces consist of a chat window with the full chat history that allows crowd workers to type in questions and answers on the right, and a side pane on the left that displays functional elements. For the teacher, the side pane contains instructions and controls to select spans from the web page below to serve as the literal and pragmatic answers, as well as an embedded web page with hyperlinks and back and forward controls to emulate a basic browser. For the student, the side pane contains basic task information and instructions for most parts of the task, and when an answer is available from the teacher, the student is tasked to rate it on answer quality, how well it addresses the student's unspoken information needs, as well as how faithful the final paraphrased answer is to the spans selected from web pages. ## D Analyzing Question Types Featured In Pragmaticqa To determine the question type, we first locate WHwords (what, when, where, who, whom, which, whose, why, how) in the question. When that fails, we attempt to locate auxiliary verbs (is, are, was, were, did, do, does). From these words, we count up to three words to the right and summarize the salient patterns. When neither a WH-word or an auxiliary verb can be found, we categorize the question as "OTHER", which can include imperatives like "Tell me more about ...". ## E Additional Hyperparameters During training, we truncate input texts to at most 512 tokens and the concatenated output to 128 tokens for efficiency. We retrieve top 5 documents for FiD training. The model is evaluated on the dev set every 0.25 epochs during training, and learning rate is halved every time dev perplexity fails to improve. We stop training if dev perplexity does not improve for five consecutive evaluations, and select the model that achieves the best dev perplexity during training. ## Teaching And Learning In A Conversation Background In this task, you will be invited to chat with another crowd worker to teach them or learn from them something you are both interested in. The goal of this task is to collect data to teach computer assistants (think Siri, Alexa) to answer our questions engagingly and helpfully. Specifically, we would like to teach computers to answer questions by addressing the unspoken **intent** behind them and providing **helpful leads** when appropriate. What do we mean by **intent** and **helpful leads**? Consider asking your friend who's knowledgeable about astronomy: "Is there water on Mars?" Your friend's answer is probably more informative than the "robotic" answer, "Yes, there is water on Mars.", which is what today's computer assistants tend to offer, as they tend to interpret questions literally. In contrast, sensing your desire to learn a bit about water on Mars if it does exist, your friend would probably say something like "Yes, but only in the form of ice caps at its poles." This can be seen as them anticipating your relatively predictable follow-up question "In what form?" 
and addressing both questions in a single response, which is part of the unspoken intent of the asker. In this task, we define "**intent**" as unspoken needs of information that can be reasonably inferred after the question is answered literally. Aside from this, your friend might also know about water on other planets in the Solar System, and mention that in their response. Although not directly answering your original question, it would help to prompt you to engage and explore more in the conversation and learn from their knowledge. In this task, we call this "**helpful leads**", which we define as extra information from the answerer's knowledge that would help the asker explore beyond their original question. Note a good answer might both address the asker's unspoken intent and offer helpful leads to engage the asker. In the HIT, we will assign Turkers in the role of either a **student** or a **teacher**, where the student's task is to ask inquisitive and relevant questions about a topic they are interested in learning about but have limited knowledge of, and the teacher's task is to help us answer the question both literally and helpfully so we can quantify the difference between the two. Task workflow 1. If you are working on this task for the first time, you will be asked to complete a qualifying task that familiarizes you with the idea of literal answers and helpful answers. 2. Once you pass the qualification, we will pair you with another Turker to engage in a teaching/learning conversation. Before the conversation starts, we will first ask each of you your interest in teaching or learning about a set of 5–15 topics. Your mutual interest will determine the topic of the conversation, as well as the teacher/learner role assigned. 3. Once a topic is chosen, the learner can start by asking the first question. 4. Given each question, the teacher will follow instructions to help us answer the question literally first, then furnish helpful information to engage in a more friendly conversation. 5. Given each response from the teacher, the student will be asked to rate it on different aspects following the instructions on our task interface. 6. You would need to each finish 6 rounds of conversation (plus rating for the student) to complete a conversation in this task. 7. At the end of the conversation, the teacher will be asked to rate the learner on several aspects, to determine if they are engaged and asking meaningful questions. [... Task reward section omitted ...] Disclosure and Consent By participating in this task, you acknowledge and give explicit consent that we record your MTurk ID (but no personal identifiable information otherwise) to improve our ability to match you with topics you might be interested in teaching or learning about, as well as record your qualification status to perform the task and to teach a particular topic assigned to you. You further give us permission to release an anonymized version of your MTurk ID (that cannot be traced back to you or your actual MTurk ID) along with the data we collected, to help future researchers study how different people approach this task, as well as how your knowledge of a certain topic might have evolved by participating in this task. Figure 7: Guidelines for the crowdsourcing task for PRAGMATICQA. ![14_image_0.png](14_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 6 ✓ A2. Did you discuss any potential risks of your work? 6 ✓ A3. 
Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3,4 ✓ B1. Did you cite the creators of artifacts you used? 3,4 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 3 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 3 B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. The data is based on public, community maintained data with caveats discussed in the Limitations section. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 3, Appendix A ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 3.5 ## C ✓ **Did You Run Computational Experiments?** 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4, Appendix E ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4 ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 3 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? 3, Appendix B, Appendix C ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? 3, 6 ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Appendix B D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. ✓ D5. 
Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? 6
hauzenberger-etal-2023-modular
Modular and On-demand Bias Mitigation with Attribute-Removal Subnetworks
https://aclanthology.org/2023.findings-acl.386
Societal biases are reflected in large pre-trained language models and their fine-tuned versions on downstream tasks. Common in-processing bias mitigation approaches, such as adversarial training and mutual information removal, introduce additional optimization criteria, and update the model to reach a new debiased state. However, in practice, end-users and practitioners might prefer to switch back to the original model, or apply debiasing only on a specific subset of protected attributes. To enable this, we propose a novel modular bias mitigation approach, consisting of stand-alone highly sparse debiasing subnetworks, where each debiasing module can be integrated into the core model on-demand at inference time. Our approach draws from the concept of diff pruning, and proposes a novel training regime adaptable to various representation disentanglement optimizations. We conduct experiments on three classification tasks with gender, race, and age as protected attributes. The results show that our modular approach, while maintaining task performance, improves (or at least remains on-par with) the effectiveness of bias mitigation in comparison with baseline finetuning. Particularly on a two-attribute dataset, our approach with separately learned debiasing subnetworks shows effective utilization of either or both the subnetworks for selective bias mitigation.
# Modular And On-Demand Bias Mitigation With Attribute-Removal Subnetworks Lukas Hauzenberger, Shahed Masoudian, Deepak Kumar, Markus Schedl, Navid Rekabsaz Johannes Kepler University Linz, Austria Linz Institute of Technology, AI Lab {first_name.family_name}@jku.at ## Abstract Societal biases are reflected in large pre-trained language models and their fine-tuned versions on downstream tasks. Common in-processing bias mitigation approaches, such as adversarial training and mutual information removal, introduce additional optimization criteria, and update the model to reach a new debiased state. However, in practice, end-users and practitioners might prefer to switch back to the original model, or apply debiasing only on a specific subset of protected attributes. To enable this, we propose a novel modular bias mitigation approach, consisting of stand-alone highly sparse debiasing subnetworks, where each debiasing module can be integrated into the core model on-demand at inference time. Our approach draws from the concept of *diff* pruning, and proposes a novel training regime adaptable to various representation disentanglement optimizations. We conduct experiments on three classification tasks with gender, race, and age as protected attributes. The results show that our modular approach, while maintaining task performance, improves (or at least remains onpar with) the effectiveness of bias mitigation in comparison with baseline finetuning. Particularly on a two-attribute dataset, our approach with separately learned debiasing subnetworks shows effective utilization of either or both the subnetworks for selective bias mitigation. ## 1 Introduction A large body of research evidences the existence of societal biases and stereotypes in pre-trained language models (PLMs) (Zhao et al., 2019; Sheng et al., 2019; Rekabsaz et al., 2021), and their potential harms when used in down-stream tasks (Blodgett et al., 2020; De-Arteaga et al., 2019; Rekabsaz and Schedl, 2020; Stanovsky et al., 2019). Common in-processing approaches to bias mitigation update a model's (typically all) parameters to satisfy specific attribute erasure criteria through optimization methods such as adversarial training (Elazar and Goldberg, 2018; Rekabsaz et al., 2021), and mutual information reduction (Colombo et al., 2021). These methods are shown to be effective in reducing the footprint of protected attributes (e.g., gender, race, etc.) in the resulting model. However when using such debiasing models in practice and in specific use-cases, system designers or end-users might still prefer to instead use the original model, or a debiased variation in respect to a particular subset of protected attributes. This can be due to various reasons such as the nature of a given input, preference of an individual end-user, or fairness-utility trade-off considerations. For instance, while a bias-aware model should indeed be agnostic to genders when the input is about genderneutral occupations (such as nurse or CEO), certain topics like *pregnancy* may specifically require gender information for a correct model decision.1 Also as shown in previous studies (Zerveas et al., 2022; Biega et al., 2018; Rekabsaz et al., 2021), since improving fairness on specific tasks may come with the cost of performance degradation, it is necessary to provide on-demand control over whether to impose fairness/debiasing criteria. 
Using existing approaches, this would require maintaining and deploying multiple large parallel models for every protected attribute, resulting in overly complex and resource-heavy pipelines and increased latency. To address this, we introduce a novel modular bias mitigation approach using sparse weightdifference networks. In our approach, the required changes in a model's parameters for erasing a bias attribute are stored in a decoupled subnetwork, trained simultaneously by a debiasing and a sparsification objective. At inference time, adding each debiasing module to the core model results in delivering debiasing qualities to a model's prediction in respect to the corresponding protected attribute. Our approach extends the principle idea 1See also the discussion in Krieg et al. (2023) about the need to separate bias-sensitive queries from "normal" ones. of *diff pruning* (Guo et al., 2021) introduced for parameter-efficient task training to bias mitigation, by viewing the objective of erasing a protected attribute as a stand-alone *diff* module. This module replaces fine-tuning by training only a small set of parameters added to the corresponding PLM's parameters, to deliver bias mitigation of a specific protected attribute. We further propose a novel procedure to train such debiasing subnetworks separately, and to selectively add an arbitrary set of them to the core model at inference time (§3). Our approach can be applied to any debiasing and representation disentanglement method, provided that its objective has separate learning signals for the task and each protected attribute. In comparison with adapter networks (Rebuffi et al., 2017; Houlsby et al., 2019), as shown by Guo et al. (2021) and also evidenced in our experiments, even more parameter-efficiency can be provided. Additionally, since our approach extends the base model with *diff* subnetworks, the resulting model is expected to perform (at least) as good as the fine-tuning variation, avoiding possible performance degradations. The modularity of our approach supports separating the process of developing debiasing solutions for a task from using them, such that stand-alone debiasing modules can be created and shared, and later be utilized in a final system on-demand. We evaluate our approach on three bias mitigation tasks: occupation prediction from biographies involving gender (De-Arteaga et al., 2019), hate speech detection with dialect-based race as sensitive attribute (Founta et al., 2018); and mention prediction in tweets with two attributes of gender and age of the authors (Pardo et al., 2016). The last dataset particularly enables the study of combining independently trained debiasing modules (details in §4). The evaluation results show that our approach, due to learning the subnetworks specialized on the narrow functionality of debiasing an attribute, provides on par or better debiasing performance in comparison with strong baselines. Additionally, we observe that on the mention detection task, learning the debiasing subnetworks post-hoc to model training provides effective results when combining the two (independently trained) subnetworks at inference time. Remarkably, these results are achieved with debiasing subnetworks of maximum 1% size of the core model (BERT-Base), in some cases (e.g., for gender attribute) even only 0.01% (details in §5). 
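To make the on-demand aspect concrete, the following is a minimal PyTorch sketch of how a stored, mostly-zero diff subnetwork (one per protected attribute) could be added to a frozen core model at inference time. This is our own illustration of the idea rather than the authors' released implementation; `diff_modules` is assumed to be a list of name-to-tensor dictionaries matching the core model's `state_dict` keys.

```python
import copy

import torch


def apply_debias_modules(core_model, diff_modules):
    """Return a copy of the core model with the selected sparse diff
    subnetworks added to its weights, i.e. theta + delta_1 + ... + delta_m."""
    model = copy.deepcopy(core_model)           # keep the original model untouched
    state = model.state_dict()                  # references to the copy's tensors
    with torch.no_grad():
        for diff in diff_modules:               # e.g. [delta_gender, delta_age]
            for name, delta in diff.items():    # delta is mostly zeros (sparse)
                state[name].add_(delta)         # in-place: theta <- theta + delta
    return model
```

Switching back to the original model then simply means serving `core_model` itself, and debiasing with respect to a subset of attributes means passing only the corresponding diff dictionaries.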
## 2 Related Work 2.1 **Parameter-Efficient And Modular Training** The concept of subnetworks is grounded in the lottery ticket hypothesis (Frankle and Carbin, 2019), stating that in deep neural networks, one can find many sparse subnetworks with a capacity comparable to that of the base network. Zhou et al. (2019) show that spotting such subnetworks through binary masks can in fact be seen as a form of model training. Zhao et al. (2020) further consider the magnitude of the parameters for pruning and filter out the ones lower than a specific threshold. Guo et al. (2021) use L0-regularization (Louizos et al., 2018) to reduce the number of active neurons. Hu et al. (2021) propose low-rank adaptation via rank decomposition matrices. These methods are extended by structural pruning approaches with additional architectural constraints, commonly leading to higher parameter efficiency (Lagunas et al., 2021; Jaszczur et al., 2021). Recent studies exploit subnetworks in the context of multi-task learning. Wortsman et al. (2020) isolate the learning signal of each task in a separate masking subnetwork, while Ben-Zaken et al. (2021) only finetune the bias weights. Xu et al. (2021) learn a subset of parameters via masking out the gradients of other parameters during the backward pass. Guo et al. (2021) suggest learning a sparse diff subnetwork for each task, whose parameter values are added to the corresponding parameters of the base network. An alternative architecture is the adapter network, first introduced in the context of multi-task learning (Rebuffi et al., 2017; Houlsby et al., 2019; Stickland and Murray, 2019), and then extended with respect to parameter efficiency (Rücklé et al., 2021; Han et al., 2021a), architectural variations (Mahabadi et al., 2021), and transfer learning capacity (Pfeiffer et al., 2021). As stated by Sung et al. (2021), adapters (in their original form) slightly increase the inference cost of a model in comparison with the original form or pruning-based variations. Our proposed approach contributes to this line of research by extending this concept to on-demand bias mitigation. ## 2.2 Fairness & Bias Mitigation In NLP Several studies explore methods for debiasing PLMs, such as linearly projecting embeddings into the space with minimum correlations to protected features (Ravfogel et al., 2020; Kaneko and Bollegala, 2021; Bolukbasi et al., 2016), utilizing a distribution alignment loss (Guo et al., 2022), or penalizing bias by utilizing the encoded information in models (Schick et al., 2021). Adversarial training, originally introduced in domain adaptation (Ganin et al., 2016; Ganin and Lempitsky, 2015), is utilized in the context of fair representation learning (Xie et al., 2017; Madras et al., 2018), and later to erase demographic data from text classifiers (Elazar and Goldberg, 2018; Barrett et al., 2019; Han et al., 2021b; Wang et al., 2021), information retrieval models (Rekabsaz et al., 2021), and recommendation systems (Ganhör et al., 2022). Mutual information removal is an alternative approach, which minimizes the approximate upper bound of the common information between the task and protected attributes (Cheng et al., 2020; Colombo et al., 2021). Our work utilizes these optimizations to learn a novel modular on-demand bias mitigation approach. Few recent studies explore parameter-efficient training for bias mitigation. Lauscher et al. (2021) approach debiasing PLMs using a stack of adapters.
While shown effective in practice, the adapters in the higher levels inherently depend on the ones in the lower levels and cannot be learned nor utilized stand-alone. Zhang et al. (2021) approach bias mitigation with binary masks applied to the base network. More recently and in the context of removing spurious shortcuts in natural language understanding datasets, Meissner et al. (2022) train sparse binary masks on a finetuned model. Our work extends this line of research by encapsulating concept erasure modules in separate *diff* subnetworks for each protected attribute, and selectively applying them to the base model at inference time. ## 3 Modular Debiasing With Diff **Subnets** We start with defining the general approach to model bias mitigation. We consider an arbitrary PLM denoted by fθ with the set of parameters θ. The model learns the task τ using the loss function Lτ . The predictions of fθ might be sensitive to the variations in any of the k protected attributes of the set P = {ρ1*, ..., ρ*k}. The bias mitigation objective is to make the model invariant to these variations while maintaining the effectiveness on the task, approached by defining the debiasing loss Lρi for the protected attribute ρi. We discuss two realizations of this loss function in Section 3.2. A debiased model in respect to attributes P is achieved by training on the following loss function: $${\mathcal{L}}_{t o t a l}={\mathcal{L}}_{\tau}+{\mathcal{L}}_{\rho_{1}}+\ldots+{\mathcal{L}}_{\rho_{k}}$$ Using L*total*, one can finetune all parameters of the model (Elazar and Goldberg, 2018; Rekabsaz et al., 2021), or utilize any parameter-efficient training such as adapters (Lauscher et al., 2021), *diff* pruning (Guo et al., 2021), or binary masks (Zhang et al., 2021). We use some of these methods as baselines, explained in the following sections. In the remainder of this section, we introduce our *Modular Debiasing with Diff Subnetworks* (MODDIFFY), explain its training and inference procedure, and describe two debiasing optimization methods. ## 3.1 Modd**Iffy** We aim to encapsulate the debiasing functionality of the protected attribute ρi, provided by the signal of the corresponding loss Lρi , into the sparse *diff* subnetwork characterized by the set of parameters δρi . Each parameter in δρi corresponds to a parameter in θ, such that adding δρi to the corresponding parameters in θ results in debiasing the model f. To learn the model, let us consider a training data item in form of ⟨x, yτ , yρ1 , ..., , yρk⟩, where x is the input, yτ the task label, and yρi denotes the label of the corresponding protected attribute ρi. Figure 1 depicts the MODDIFFY approach with adversarial bias mitigation optimization. We first optimize for the task by encoding x into the vector zτ using f(.; θ). An arbitrary decoder network denoted as gτ uses zτ to predict the task output, and task loss is calculated using cross entropy (CE), as formulated below: $$z_{\tau}=f(x;\theta),\quad{\cal L}_{t o t a l}^{(0)}=\mathrm{CE}(g_{\tau}(z_{\tau}),y_{\tau})\quad(2)$$ This loss updates θ as well as the parameters of gτ . While in our experiments we opt for fully finetuning fθ, in practice the model can be trained using any parameter-efficient method. Next, we iterate over the protected attributes, and in each step learn the corresponding debiasing diff subnetwork. 
Specifically, in step i, dedicated to the protected attribute ρi, we learn the set of sparse additional parameters δρi such that by being added to θ, the resulting f(.; θ + δρi ) model is debiased. Learning δρi is characterized by three loss functions. The first loss, $\mathcal{L}_{\rho_i}^{\text{task}}$, maintains the task performance when the task output is predicted from the altered encoder, formulated below: $$\mathbf{z}_{\rho_{i}}=f(x;\mathbf{\theta}+\mathbf{\delta}_{\rho_{i}}),\quad\mathcal{L}_{\rho_{i}}^{\text{task}}=\text{CE}(g_{\tau}(\mathbf{z}_{\rho_{i}}),y_{\tau})\tag{3}$$ The second is the debiasing loss $\mathcal{L}_{\rho_{i}}^{\text{debias}}$, defined based on zρi and the label of the corresponding protected attribute yρi. We discuss adversarial bias removal and mutual information reduction as two possible realizations of this representation disentanglement loss in Section 3.2. The third loss imposes the sparsity constraint, defined as the L0 regularization of δρi. The L0 loss aims to reduce the number of non-zero parameters, namely the term $\sum_{j=1}^{|\delta_{\rho_i}|}\mathbb{1}\{\delta_{\rho_i,j}\neq 0\}$, and is realized with the differentiable approximation proposed by Louizos et al. (2018). Following Guo et al. (2021), δρi is decomposed into the element-wise multiplication of two sets of parameters: δρi = mρi ⊙ wρi. The parameter set wρi stores the magnitude changes (*diff* values), and mρi learns to mask out the parameters. mρi is characterized by the hard concrete distribution (Guo et al., 2021) with (log αρi , 1) parameters, and γ < 0 and ζ > 1 hyperparameters. This results in the following formulation: $$\mathcal{L}_{\rho_{i}}^{L_{0}}=\sum_{j=1}^{|\delta_{\rho_{i}}|}\sigma\left(\log\alpha_{\rho_{i},j}-\beta\log\left(-\frac{\gamma}{\zeta}\right)\right)\tag{4}$$ where σ denotes the sigmoid function, and as in Guo et al. (2021) β is set to 1. The masking network can be simply reduced by assigning a mask to a group of parameters (such as a weight matrix, or a layer) instead of each individual one. Putting all together, the objective of MODDIFFY at step i > 0 is defined as: $$\mathcal{L}_{\text{total}}^{(i)}=\mathcal{L}_{\rho_{i}}^{\mathrm{task}}+\mathcal{L}_{\rho_{i}}^{\mathrm{debias}}+\mathcal{L}_{\rho_{i}}^{L_{0}}\tag{5}$$ We should note that while each subnetwork is trained independently from the others, the output embeddings zρi are passed to the same decoder network gτ. This design choice forces the learned embeddings to remain in the same embedding space, making it possible to add multiple (independently trained) subnetworks together to the core model. Another aspect is that the sparsity rate of each resulting subnetwork is not fixed and may vary due to factors such as the hyperparameter setting and the extent of the encoded information content. To achieve a fixed sparsity rate, we further apply magnitude pruning by only keeping a fixed portion of the parameters with the largest absolute values, and finetuning the resulting parameters with the task and debiasing loss terms. We conduct optimization with L*total* under two training regimes. In the first one, referred to as MODDIFFY-PAR, we repeat the mentioned training steps for each training batch. The second approach, referred to as MODDIFFY-POST, trains debiasing subnetworks post-hoc to training the core model.
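As an illustration of the δρi = mρi ⊙ wρi decomposition and the relaxed L0 objective of Eq. (4), the following is a minimal PyTorch sketch of a hard-concrete gated diff parameter; the class name and the default values for γ and ζ are our own choices and not taken from the released code.

```python
import math

import torch
import torch.nn as nn


class DiffGate(nn.Module):
    """delta = m * w, with m a hard-concrete gate (Louizos et al., 2018)."""

    def __init__(self, shape, gamma=-0.1, zeta=1.1, beta=1.0):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(shape))          # magnitude changes (diff values)
        self.log_alpha = nn.Parameter(torch.zeros(shape))  # gate location parameters
        self.gamma, self.zeta, self.beta = gamma, zeta, beta

    def gate(self):
        if self.training:   # stochastic stretched-and-rectified concrete sample
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        else:               # deterministic gate at inference time
            s = torch.sigmoid(self.log_alpha)
        return (s * (self.zeta - self.gamma) + self.gamma).clamp(0.0, 1.0)

    def delta(self):
        return self.gate() * self.w   # sparse parameter difference delta_rho_i

    def l0_penalty(self):
        # expected number of non-zero gates, i.e. the relaxed L0 term of Eq. (4)
        shift = self.beta * math.log(-self.gamma / self.zeta)
        return torch.sigmoid(self.log_alpha - shift).sum()
```

In training, `delta()` would be added to the corresponding frozen base parameter before the forward pass, and the sum of `l0_penalty()` over all gated parameter groups gives the sparsity term in Eq. (5).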
While MODDIFFY-PAR accommodates a higher flexibility in optimization by training the networks in parallel, MODDIFFY-POST provides the practical benefits of learning various debiasing solutions for an already trained core model. Finally, at *inference time*, one can use the core model in its original form f(x; θ), or in combination with any of the debiasing subnetworks in the form of f(x; θ + δρi ). Our approach also enables simultaneously debiasing all or any subset of the protected attributes by adding their corresponding subnetworks to the core network, for instance as in f(x; θ + δρ1 + ... + δρk ). We should note that, although these subnetworks reside in the same distributional space, they might affect each other's functionality, depending on the inherent nature of the biases as well as the possible correlations between the protected attributes. We will examine this case in the next sections on a dataset with two protected attributes of gender and age. ## 3.2 Bias Mitigation Objectives We explain two bias mitigation optimization methods used in MODDIFFY in the following. Adversarial Bias Removal This method first defines a new classification head hρi for each protected attribute ρi. This head receives zρi as input, predicts the corresponding protected attribute, and calculates the cross entropy loss function $\mathcal{L}_{\rho_i}^{\text{debias}}$. This loss needs to remove the information of ρi from f but train hρi such that it can effectively predict the protected attribute. This optimization forms the min-max game $\mathcal{L}_{\rho_i}^{\text{debias}} = \min_{f}\max_{h_{\rho_i}} \text{CE}(h_{\rho_i}(z_{\rho_i}), y_{\rho_i})$. A common approach to turn this loss into a minimization problem is by using a gradient reversal layer (GRL) (Ganin and Lempitsky, 2015) added before the debiasing heads. GRL multiplies the gradient of $\mathcal{L}_{\rho_i}^{\text{debias}}$ with a factor of −λi, and thereby simplifies the learning process to a standard gradient-based optimization, formulated below: $${\mathcal{L}}_{\rho_{i}}^{\mathrm{debias}}=\operatorname*{min}_{f,h_{\rho_{i}}}\mathrm{CE}(h_{\rho_{i}}(z_{\rho_{i}}),y_{\rho_{i}})$$ Mutual Information (MI) Reduction This approach represents a family of algorithms that aims to remove the mutual information between the encoded embeddings of the task and the protected attributes. Maximum Mean Discrepancy (MMD), first introduced in the context of domain adaptation (Gretton et al., 2012; Tzeng et al., 2014), offers a realization of MI reduction by minimizing the ability to separate the subsets belonging to the two protected attribute values. In particular, given a set of data points X split into two subsets $X^{A}_{\rho_i}$ and $X^{B}_{\rho_i}$ according to the values of the (binary) protected attribute ρi, MMD minimizes the distance between the encoded embeddings of the subgroups with the following loss formulation: $$\mathcal{L}_{\rho_{i}}^{\text{debias}}=\left(\frac{\sum_{x^{A}\in X_{\rho_{i}}^{A}}\phi(f(x^{A}))}{|X_{\rho_{i}}^{A}|}-\frac{\sum_{x^{B}\in X_{\rho_{i}}^{B}}\phi(f(x^{B}))}{|X_{\rho_{i}}^{B}|}\right)^{2}$$ where ϕ is the feature map kernel defined as a linear combination of multiple Gaussian kernels. ## 4 Experiment Design Datasets We evaluate our approach on three datasets on the tasks of occupation prediction, hate speech detection, and mention prediction, involving the protected attributes of gender, age, and race dialect. The first dataset is **BIOS** (De-Arteaga et al., 2019), which contains short biographies used to predict a person's job, where the name and any indication of the person's gender (such as pronouns) in the biography are omitted.
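The gradient reversal layer used in the adversarial objective of Section 3.2 admits a compact implementation; the following is a minimal PyTorch sketch of a generic GRL (our own rendering, not the authors' released code), which acts as the identity in the forward pass and multiplies the incoming gradient by −λ in the backward pass.

```python
import torch


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, gradient scaled by -lambda in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # no gradient w.r.t. lambda


def grad_reverse(x, lambd=1.0):
    """Insert between the encoder output z and the protected-attribute head h_rho."""
    return GradReverse.apply(x, lambd)
```

With this layer in place, minimizing the cross entropy of the attribute head over `grad_reverse(z)` trains the head to predict the attribute while pushing the encoder (here, the diff subnetwork) to erase it.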
The BIOS dataset contains around 430K data points with 28 occupations, and two protected attribute classes (female/male). The second dataset is **FDCL18** (Founta et al., 2018) for hate speech detection, containing a set of tweets each classified as hateful, abusive, spam, or none. As discussed in Xia et al. (2020), hate speech might have a strong correlation with dialect-based racial bias. Following previous studies (Sap et al., 2019; Ravfogel et al., 2020), we assign race dialect labels of *African American* and *White American* to FDCL18 using the probabilistic model developed by Blodgett et al. (2016), resulting in the dataset of approximately 62K data points. The third dataset is **PAN16** (Rangel et al., 2016) containing a set of tweets accompanied with the labels of gender and age of the authors. The task's objective is to predict mentions (whether another user is mentioned in a tweet). PAN16 provides approximately 200K data points with binary task classes (*mention*, no mention), as well as two gender labels and five age groups. Further details on the three datasets are provided in Appendix A. Models and Baselines We conduct the experiments on the following models and baselines. F**INETUNE**: finetuning all parameters of the PLM on the task without any bias mitigation objective. FINETUNE-**DEBIAS**: the same model as FINETUNE but with the bias mitigation objective. A**DAPTER**: learning the task with an adapter network while the rest of the PLM's parameters are kept frozen. ADAPTER-**DEBIAS**: the same model as ADAPTER but the adapter is trained on both task and bias mitigation objectives. DIFFPRUN: using a *diff* network to learn the task while PLM parameters remain unchanged. DIFFPRUN-**DEBIAS**: same as DIFFPRUN but the *diff* network is trained on both task and bias mitigation objectives. MODDIFFY-**POST**: our introduced post-hoc approach Model BIOS (gender) FCDL18 (race-dialect) Adversarial MI Reduction Adversarial MI Reduction Task↑ Probe↓ Task↑ Probe↓ Task↑ Probe↓ Task↑ Probe↓ FINETUNE 84.10.3 67.20.7 84.10.3 67.20.7 82.00.4 93.00.4 82.00.4 93.00.4 ADAPTER 84.40.1 65.90.1 84.40.1 65.90.1 80.90.1 82.14.1 80.90.1 82.14.1 FINETUNE-DEBIAS 84.20.1 56.90.9 84.10.7 61.70.8 81.90.7 84.53.9 81.60.4 87.30.5 ADAPTER-DEBIAS 84.60.2 60.80.4 84.40.0 65.60.2 80.50.7 65.6 ♣ 0.180.10.4 82.1 ♣ 1.3 DIFFPRUN 84.60.1 68.90.2 84.60.1 68.90.2 81.30.3 93.20.3 81.30.3 93.20.3 DIFFPRUN-DEBIAS 84.50.1 62.40.3 84.20.4 63.21.4 81.60.4 66.83.7 81.30.2 91.80.2 MODDIFFY-POST 84.50.1 61.60.7 84.30.1 64.50.4 81.30.2 66.01.2 81.20.2 91.10.9 MODDIFFY-PAR 84.20.2 53.7 ♣ 1.584.50.2 58.8 ♣ 1.281.20.6 75.43.7 81.30.7 85.50.4 Table 1: Results of the BIOS and FCDL18 datasets on **BERT-Base** with adversarial bias removal and mutual information (MI) reduction. Task performance is measured with accuracy, and bias mitigation with balanced accuracy of the probes. The protected attribute is gender for BIOS, and race-dialect for FCDL18. The results with the best bias mitigation performance (lowest values) among the models that use *diff* subnetworks (lower part of the table) are shown in **bold**, and among all models with the ♣ symbol. Subscript values indicate standard deviation. where the debiasing modules are learned after training the model. We use FINETUNE as the base model for MODDIFFY-POST, whose parameters are kept frozen during post-hoc training. MODDIFFY-PAR: our introduced parallel approach where the debiasing modules are learned together with the base model finetuned on the task. 
Additionally, to provide a comprehensive view on bias mitigation methods, we evaluate the datasets on the **INLP** (Ravfogel et al., 2020) approach using the implementation and suggested hyperparameter setting. The evaluation results of the INLP method are separately reported in Table 8 in Appendix B. As the PLM encoder for all models, we using two versions of BERT (Devlin et al., 2019) with different sizes, namely BERT-Mini (Turc et al., 2019) and BERT-Base. This provides us a more comprehensive picture regarding the effect of encoder size and number of involved parameters on the bias mitigation methods. All debiasing models are separately trained according to the adversarial bias mitigation and mutual information reduction methods. We particularly opt for a non-linear adversarial head with two fully connected layers and the tanh activation. Furthermore, to improve the capacity of adversarial learning, we initialize 5 instances of hρi and calculate the average of the loss for the backward pass. In MI reduction, in the case of protected attributes with more than two classes, we turn the multi-classes setting to multiple one-versus-rest splits. For the models with *diff* subnetworks, we conduct preliminary experiments to find proper thresholds for magnitude pruning that improves sparsity as much as possible without sacrificing performance. For BERT-Base and BERT-Mini, we set the minimum sparsity threshold to 99% and 95% (maximum size of 1% and 5%), respectively. We note that these ratios are the lower bounds, and the L0 regularization may by itself reach a higher sparsity. The complete details of our hyperparameters setting and training procedure are explained in Appendix A. Our code and trained resources are available in https://github.com/CPJKU/ ModularizedDebiasing. Evaluation Metrics We evaluate the performance of the classifiers on the core task using the accuracy metric. We evaluate bias mitigation based on the concept of *fairness through blindness*, namely by examining whether models are agnostic about the protected attributes. Concretely following previous works (Elazar and Goldberg, 2018; Barrett et al., 2019), we report the leakage of a protected attribute in terms of the performance of a strong probe (or attacker) network. To train the probe, we freeze the model's parameters, and train a new classification head (two-layer feed-forward layer with a tanh activation) to predict the protected attribute from the z encoding vector. For each evaluation, we train an ensemble of 5 probes for 40 epochs with early stopping if validation loss does not increase over 5 epochs, and report the results of the best performing probe. We report the performance of the probe in terms of balanced/macro accuracy (average of per-class accuracy scores). 
Balanced accuracy has the benefit of better reflecting the performance of the methods when considering minority groups, particularly given the unbalanced | Model | Adversarial Bias Mitigation | MI Reduction | | | | | |-------------------------|-------------------------------|----------------|------------|---------|------------|------------| | Task↑ | ProbeG↓ | ProbeA↓ | Task↑ | ProbeG↓ | ProbeA↓ | | | FINETUNE | 93.50.2 | 70.42.2 | 53.63.2 | 93.50.2 | 70.42.2 | 53.63.2 | | ADAPTER | 87.30.1 | 67.80.6 | 37.11.7 | 87.30.1 | 67.80.6 | 37.11.7 | | FINETUNE-DEBIASG | 93.70.0 | 52.30.5 | 34.81.1 | 93.70.1 | 62.20.1 | 41.91.4 | | FINETUNE-DEBIASA | 92.90.2 | 52.30.3 | 31.42.6 | 93.70.1 | 61.50.7 | 41.22.1 | | FINETUNE-DEBIASG&A | 93.00.2 | 51.71.4 | 33.91.8 | 93.50.1 | 61.40.7 | 40.81.0 | | ADAPTER-DEBIASG | 86.90.1 | 54.90.4 | 29.21.7 | 87.00.1 | 67.40.3 | 37.02.2 | | ADAPTER-DEBIASA | 86.10.2 | 58.50.2 | 25.6 ♣ 2.0 | 87.10.3 | 67.10.5 | 36.61.2 | | ADAPTER-DEBIASG&A | 92.20.2 | 51.6 ♣ 2.4 | 27.51.9 | 92.80.1 | 64.20.4 | 34.90.2 | | DIFFPRUN | 93.00.1 | 76.10.4 | 62.40.5 | 93.00.1 | 76.10.4 | 62.40.5 | | DIFFPRUN-DEBIASG | 93.40.1 | 56.30.8 | 48.51.2 | 93.30.0 | 73.00.7 | 58.20.5 | | DIFFPRUN-DEBIASA | 92.90.1 | 64.50.8 | 34.11.1 | 93.40.0 | 73.10.8 | 57.10.6 | | DIFFPRUN-DEBIASG&A | 92.90.2 | 54.72.6 | 29.72.0 | 93.60.2 | 73.30.3 | 57.20.9 | | MODDIFFY-POSTG | 93.70.1 | 57.01.0 | 45.41.3 | 93.60.1 | 66.20.2 | 46.21.4 | | MODDIFFY-POSTA | 93.70.1 | 63.70.9 | 31.72.8 | 93.60.0 | 66.60.2 | 46.52.8 | | MODDIFFY-POSTG + dittoA | 92.30.6 | 57.71.2 | 32.02.8 | 93.40.1 | 66.40.8 | 46.61.8 | | MODDIFFY-POSTG&A | 93.60.1 | 52.62.2 | 30.90.5 | 93.60.1 | 66.90.4 | 47.40.1 | | MODDIFFY-PARG | 93.50.2 | 53.01.6 | 32.21.1 | 93.60.0 | 59.30.4 | 35.70.4 | | MODDIFFY-PARA | 93.50.2 | 53.81.9 | 30.11.0 | 93.70.2 | 55.00.2 | 29.90.9 | | MODDIFFY-PARG + dittoA | 93.50.2 | 52.81.7 | 30.20.8 | 93.60.2 | 56.11.1 | 30.11.0 | | MODDIFFY-PARG&A | 93.80.2 | 52.31.4 | 28.31.6 | 93.60.2 | 52.7 ♣ 1.3 | 29.4 ♣ 0.6 | distributions over protected labels in the datasets. To account for possible variabilities, we repeat every experiment five times and report the mean and standard deviation. ## 5 Results And Analysis Single-attribute Evaluation Table 1 reports the evaluation results of the BIOS and FCDL18 datasets on BERT-Base using adversarial bias removal and mutual information (MI) reduction. The results of the same experiments on BERT-Mini are shown in Table 6 in Appendix B. Starting from task accuracy, the models using subnetworks (variations of DIFFPRUN and MODDIFFY shown at the lower part of the table) consistently perform the same as the fully finetuned models on both datasets and debiasing methods. We observe a slight decrease in performance for the adapter-based models on FCDL18. Looking at the leakage probing performance of bias attributes, MODDIFFY models show better performance (lower values) among the subnetworkbased models on both datasets, and overall on BIOS.2In particular, MODDIFFY models outperform the directly comparable baselines FINETUNE-DEBIAS and DIFFPRUN-DEBIAS on all configurations, indicating the benefits of learning separate debiasing modules on bias mitigation performance. This indeed comes with the core advantages of MODDIFFY models in proving modularized and on-demand bias mitigation. The results also show that MODDIFFY-POST, while (as expected) slightly weaker than MODDIFFY-PAR, provides competitive bias mitigation performance (i.e., on par with DIFFPRUN-DEBIAS). 
Finally, comparing between debiasing optimizations, MI reduction shows consistently worse bias 2Particularly on FCDL18, we observe high variations, which requires us to more cautiously interpret results. We assume that this is due to the small size of this dataset especially in the learning regimes with many parameters on BERT-Base, as this effect is less pronounced on BERT-Mini (Table 6 in Appendix B). mitigation performance in comparison with adversarial training, particularly on FCDL18 where the protected attribute has more than two labels. Two-attribute Evaluation The results on PAN16 using BERT-Base are reported in Table 2, and the same experiments on BERT-Mini in Table 7 in Appendix B. In this experiment, every debiasing model is trained based on either gender, age, or simultaneously on both gender and age, shown with the subscripts G, A, G&A, respectively. An additional evaluation is indicated with MODDIFFY-*G + *ditto*A, which refers to adding the two separately trained gender and age debiasing subnetworks to the core model at inference time. In fact, for the experiments of each MODDIFFY model indicated with "+*ditto*", G, and A, we train only one model with gender and age subnetworks, and then add the respective subnetwork(s) to the core model. Looking at task performance results, similar to the previous datasets the models perform on par with fully finetuning, except the adapter-based models which in this case significantly underperform in both optimization methods. Regarding bias mitigation performance, we observe similar patterns to the ones discussed on the other datasets: MODDIFFY models particularly MODDIFFY-PAR show the least attribute leakage among the subnetworkbased models consistently, and also over all models with MI reduction. This reaffirms the benefits of modularizing debiasing of the attributes separately from the task. The results also indicate the existence of a correlation between the gender and age attributes in this dataset, such that debiasing each attribute also results in a decrease in leakage of the other attribute. The overall best results are achieved on the models that simultaneously optimize on both attributes (G&A). Finally, let us have a closer look at the results of applying the two independently trained subnetworks. On both training regimes, we observe on par debiasing performance between MODDIFFY-*G + dittoA and the corresponding results with one subnetwork, namely the ones of gender and age debiasing in MODDIFFY-*G and MODDIFFY-*A, respectively. These results indicate the viability of our approach to effectively merge subnetworks at inference time. Subnetworks analysis We further investigate the achieved sparsity rate of the subnetworks, keeping PAN16 BIOS FCDL18 Gender Age Gender Dialect Overall 0.01% 1.00% 0.27% 0.18% Layer 12 0.04% 7.00% 0.26% 0.92% Layer 11 0.02% 3.59% 0.30% 0.52% Layer 10 0.01% 1.76% 0.33% 0.30% Layer 9 0.00% 0.49% 0.20% 0.18% Layer 8 0.00% 0.38% 0.22% 0.11% Layer 7 0.01% 0.21% 0.30% 0.14% Layer 6 0.00% 0.15% 0.28% 0.13% Layer 5 0.01% 0.03% 0.28% 0.07% Layer 4 0.01% 0.25% 0.43% 0.08% Layer 3 0.01% 0.25% 0.56% 0.08% Layer 2 0.01% 0.38% 0.41% 0.07% Layer 1 0.01% 0.21% 0.44% 0.06% Embeddings 0.00% 0.04% 0.05% 0.03% Table 3: The percentage of non-masked parameters in the debiasing subnetworks of MODDIFFY-PAR. in mind that the maximum capacity of the debiasing subnetworks on BERT-Base is set to 1%. 
The size of a subnetwork indicates the amount of information or in fact modifications needed to be applied, in order to debias a protected attribute. Table 3 reports the percentage of the number of non-masked parameters in every layer, and also overall, in the subnetworks of the MODDIFFY-PAR models regarding the protected attributes. The results show interesting patterns in respect to various protected attributes: the gender attribute on both PAN16 and BIOS dataset require much smaller subnetworks, such that the subnetwork on PAN16 is only 0.01% of the size of the core model. The age attribute appears to be a more complex topic in the underlying PLM, as it fully uses the 1% maximum capacity. Looking across the layers, the results show that debiasing the gender attribute is mostly handled in the lower transformer layers (particularly on BIOS), while debiasing age and dialect attributes mostly happens at the higher layers. In Appendix C, we further discuss this topic for all models on the level of individual weight matrices. Moreover, in Appendix D we investigate to what extent the subnetworks of a model across several runs affect on the same set of parameters. ## 6 Conclusion We propose MODDIFFY, a novel bias mitigation approach which enables integration of an arbitrary subset of the debiasing modules at inference time. Our method encapsulates the functionality of bias mitigation in respect to a protected attribute into a separate magnitude-difference subnetwork, which can then be applied to the core model on-demand. Our experiments on three classification tasks show that MODDIFFY improves bias mitigation achieved by separating the debiasing from the task network, and effectively mitigates the bias of two (and possibly more) attributes when their respective subnetworks are simultaneously utilized. ## 7 Limitations An important limitation of our work concerns the definition of the protected attributes in the datasets used for evaluation. In particular, gender in BIOS and PAN16 is limited to the binary female/male, lacking an inclusive and nuanced definition of gender. Similarly in FDCL18, we consider only two dialects of *African American* and *White American*, while clearly this definition is limited and noninclusive. Furthermore as in previous work (Sap et al., 2019; Ravfogel et al., 2020; Zhang et al., 2021), the labels of this protected attribute are assigned through a probabilistic model, and hence the dataset might not represent the nuances and traits of the real-world. The second limitation regards reaching strong conclusions on the generalizability of the multiattribute setting for MODDIFFY over any possible number of protected attributes or subset of them. Our multi-attribute experiments are conducted on one dataset with two attributes of gender and age, particularly due to the lack of available suitable datasets. Hence, Further studies (as well as more suitable datasets) are required for achieving a more comprehensive picture on the topic. Finally, we should also highlight two general limitations, shared with the other related studies in the area of model bias mitigation. First, we should consider that the aim of representation disentanglement optimizations is to reduce the existing *correlations* in the model with the protected attributes based on the *observed data*. These data-oriented approaches might lack effective generalization, particularly when the model is evaluated in other domains or out-of-distribution data. 
Second, our bias mitigation evaluation is grounded in the notion of fairness through blindness, and the debiasing optimization methods are designed to support this form of fairness. The effects of our method on other possible definitions of fairness are therefore left for future work. ## 8 Acknowledgment This work received financial support by the Austrian Science Fund (FWF): P33526 and DFH-23; and by the State of Upper Austria and the Federal Ministry of Education, Science, and Research, through grants LIT-2020-9-SEE-113 and LIT-2021- YOU-215. ## References Maria Barrett, Yova Kementchedjhieva, Yanai Elazar, Desmond Elliott, and Anders Søgaard. 2019. Adversarial removal of demographic attributes revisited. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6331–6336. Elad Ben-Zaken, Shauli Ravfogel, and Yoav Goldberg. 2021. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked languagemodels. *ArXiv*, abs/2106.10199. Asia J Biega, Krishna P Gummadi, and Gerhard Weikum. 2018. Equity of attention: Amortizing individual fairness in rankings. In The 41st international ACM SIGIR Conference on Research & Development in Information Retrieval, pages 405–414. Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in nlp. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 5454–5476. Su Lin Blodgett, Lisa Green, and Brendan O'Connor. 2016. Demographic dialectal variation in social media: A case study of African-American English. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pages 1119–1130, Austin, Texas. Association for Computational Linguistics. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc. Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, and Michael Carbin. 2020. The lottery ticket hypothesis for pretrained BERT networks. In *Proceedings of NeurIPS*. Pengyu Cheng, Martin Renqiang Min, Dinghan Shen, Christopher Malon, Yizhe Zhang, Yitong Li, and Lawrence Carin. 2020. Improving disentangled text representation learning with information-theoretic guidance. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7530–7541, Online. Association for Computational Linguistics. Pierre Colombo, Pablo Piantanida, and Chloé Clavel. 2021. A novel estimator of mutual information for learning to disentangle textual representations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6539– 6550. Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. page 120–128, New York, NY, USA. Association for Computing Machinery. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. 
Bert: Pre-training of deep bidirectional transformers for language understanding. In *Conference of the North American Chapter of* the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 4171– 4186. Association for Computational Linguistics. Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 11– 21. Association for Computational Linguistics. Antigoni Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. In Twelfth International AAAI Conference on Web and Social Media. Jonathan Frankle and Michael Carbin. 2019. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In *Proceedings of International Conference on Learning Representations, ICLR*. Christian Ganhör, David Penz, Navid Rekabsaz, Oleg Lesota, and Markus Schedl. 2022. Unlearning protected user attributes in recommendations with adversarial training. In *Proceedings of the 45th International ACM SIGIR Conference on Research and* Development in Information Retrieval, SIGIR '22, page 2142–2147, New York, NY, USA. Association for Computing Machinery. Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In International conference on machine learning, pages 1180–1189. Proceedings of Machine Learning Research. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. In Journal of Machine Learning Research, volume 17, pages 1–35. Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. 2012. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723–773. Demi Guo, Alexander Rush, and Yoon Kim. 2021. Parameter-efficient transfer learning with diff pruning. In *Proceedings of the 59th Annual Meeting of* the Association for Computational Linguistics, pages 4884–4896. Yue Guo, Yi Yang, and Ahmed Abbasi. 2022. Autodebias: Debiasing masked language models with automated biased prompts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1012–1023. Wenjuan Han, Bo Pang, and Ying Nian Wu. 2021a. Robust transfer learning with pretrained language models through adapters. In *Proceedings of the 59th* Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 854–861. Xudong Han, Timothy Baldwin, and Trevor Cohn. 2021b. Diverse adversaries for mitigating bias in training. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2760–2765, Online. Association for Computational Linguistics. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In *International Conference on Machine Learning*, volume 97, pages 2790–2799. Proceedings of Machine Learning Research. 
Edward Hu, Yelong Shen, Phil Wallis, Zeyuan AllenZhu, Yuanzhi Li, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. Sebastian Jaszczur, Aakanksha Chowdhery, Afroz Mohiuddin, Łukasz Kaiser, Wojciech Gajewski, Henryk Michalewski, and Jonni Kanerva. 2021. Sparse is enough in scaling transformers. Advances in Neural Information Processing Systems, 34. Masahiro Kaneko and Danushka Bollegala. 2021. Debiasing pre-trained contextualised embeddings. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1256–1266. Klara Krieg, Emilia Parada-Cabaleiro, Gertraud Medicus, Oleg Lesota, Markus Schedl, and Navid Rekabsaz. 2023. Grep-biasir: A dataset for investigating gender representation-bias in information retrieval results. In *Proceeding of the ACM SIGIR Conference* On Human Information Interaction And Retrieval (CHIIR). François Lagunas, Ella Charlaix, Victor Sanh, and Alexander M Rush. 2021. Block pruning for faster transformers. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 10619–10629. Anne Lauscher, Tobias Lueken, and Goran Glavaš. 2021. Sustainable modular debiasing of language models. In *Findings of the Association for Computational* Linguistics: EMNLP 2021, pages 4782–4797, Punta Cana, Dominican Republic. Association for Computational Linguistics. Christos Louizos, Max Welling, and Diederik P. Kingma. 2018. Learning sparse neural networks through l0 regularization. In *International Conference on Learning Representations*. David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. 2018. Learning adversarially fair and transferable representations. In *Proceedings of* the International Conference on Machine Learning, pages 3384–3393. PMLR. Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder. 2021. Compacter: Efficient low-rank hypercomplex adapter layers. In *Advances in Neural* Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 1022–1035. Johannes Mario Meissner, Saku Sugawara, and Akiko Aizawa. 2022. Debiasing masks: A new framework for shortcut mitigation in NLU. In Proceeding of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP). Francisco Manuel Rangel Pardo, Paolo Rosso, Ben Verhoeven, Walter Daelemans, Martin Potthast, and Benno Stein. 2016. Overview of the 4th author profiling task at pan 2016: Cross-genre evaluations. In CLEF. Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2021. Adapterfusion: Non-destructive task composition for transfer learning. In *Proceedings of the 16th Conference of the European Chapter of the Association* for Computational Linguistics: Main Volume, pages 487–503. Francisco Rangel, Paolo Rosso, Ben Verhoeven, Walter Daelemans, Martin Potthast, and Benno Stein. 2016. Pan16 author profiling. Zenodo. Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7237–7256. Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2017. Learning multiple visual domains with residual adapters. In *Advances in Neural Information Processing Systems (NeurIPS)*, volume 30. Navid Rekabsaz, Simone Kopeinik, and Markus Schedl. 2021. 
Societal biases in retrieved contents: Measurement framework and adversarial mitigation of BERT rankers. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 306–316. Navid Rekabsaz and Markus Schedl. 2020. Do neural ranking models intensify gender bias? In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2065–2068. Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2021. AdapterDrop: On the efficiency of adapters in transformers. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 7930–7946. Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668–1678, Florence, Italy. Association for Computational Linguistics. Timo Schick, Sahana Udupa, and Hinrich Schütze. 2021. Self-diagnosis and self-debiasing: A proposal for reducing corpus-based bias in NLP. Transactions of the Association for Computational Linguistics, 9:1408– 1424. Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3398–3403. Gabriel Stanovsky, Noah A Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 1679–1684. Asa Cooper Stickland and Iain Murray. 2019. BERT and PALs: Projected attention layers for efficient adaptation in multi-task learning. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of *Proceedings of Machine Learning* Research, pages 5986–5995. PMLR. Yi-Lin Sung, Varun Nair, and Colin Raffel. 2021. Training neural networks with fixed sparse masks. *ArXiv*, abs/2111.09839. Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: On the importance of pre-training compact models. arXiv preprint arXiv:1908.08962v2. Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. 2014. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474. Liwen Wang, Yuanmeng Yan, Keqing He, Yanan Wu, and Weiran Xu. 2021. Dynamically disentangling social bias from task-oriented representations with adversarial attack. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 3740–3750. Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, Jason Yosinski, and Ali Farhadi. 2020. Supermasks in superposition. *Advances in Neural Information Processing Systems*, 33. Mengzhou Xia, Anjalie Field, and Yulia Tsvetkov. 2020. Demoting racial bias in hate speech detection. In Proceedings of the Eighth International Workshop on Natural Language Processing for Social Media, pages 7–14, Online. Association for Computational Linguistics. Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, and Graham Neubig. 2017. Controllable invariance through adversarial feature learning. 
In *Proceedings of the 31st International Conference on Neural Information Processing Systems*, pages 585–596.

Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, and Fei Huang. 2021. Raise a child in large language model: Towards effective and generalizable fine-tuning. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 9514–9528, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

George Zerveas, Navid Rekabsaz, Daniel Cohen, and Carsten Eickhoff. 2022. Mitigating bias in search results through contextual document reranking and neutrality regularization. In *Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval*, SIGIR '22, page 2532–2538, New York, NY, USA. Association for Computing Machinery.

Xiongyi Zhang, Jan-Willem van de Meent, and Byron Wallace. 2021. Disentangling representations of text by masking transformers. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 778–791, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In *Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 629–634.

Mengjie Zhao, Tao Lin, Fei Mi, Martin Jaggi, and Hinrich Schütze. 2020. Masking as an efficient alternative to finetuning for pretrained language models. In *Empirical Methods in Natural Language Processing*, pages 2226–2241. Association for Computational Linguistics.

Hattie Zhou, Janice Lan, Rosanne Liu, and Jason Yosinski. 2019. Deconstructing lottery tickets: Zeros, signs, and the supermask. *Advances in Neural Information Processing Systems*, 32:3597–3607.

## A Experiment Settings - Additional Details

In the FDCL18 dataset, we use the TwitterAAE model (Blodgett et al., 2016) to assign racial dialect classes. The TwitterAAE model predicts four racial classes: African American, White American, *Hispanic*, and *Others*. We labeled a tweet as African American or *White American* if the prediction score was greater than 0.5. For the PAN16 dataset, following Sap et al. (2019), we balanced the task labels and sampled 200K data points. The age groups of this dataset are 18-24, 25-34, 35-49, 50-64, and 65+. We randomly split each dataset into train, validation, and test sets with the proportions 63:12:15 for BIOS, 63:12:15 for FDCL18, and 80:5:15 for PAN16. We use the validation set for hyperparameter tuning, and the best configuration on the validation set is evaluated on the test set for the final results. The validation and test sets in all datasets follow the same distribution as the whole dataset. To address the imbalance of the datasets and the potential problems it causes in adversarial learning, we apply upsampling only on the *training sets* of the BIOS and FDCL18 datasets, to balance the protected attribute labels within each task label. For instance, genders are balanced in the dentist class by repeating the data items of the minority subgroup. Adversarial heads consist of five classifiers with different initializations. The loss of the five classifiers is averaged, and their accuracy is measured via majority vote. All baseline models are trained for 20 epochs.
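As a rough illustration of the adversarial-head setup described above, the following PyTorch sketch averages the losses of five independently initialized probes and measures their accuracy via majority vote. It is a minimal sketch under our own naming assumptions, not the released code, and the gradient-reversal machinery used during adversarial training is omitted.

```python
import torch
import torch.nn as nn

class AdversarialHeads(nn.Module):
    """Ensemble of five independently initialized probes over a shared
    sentence representation. Class and variable names are illustrative;
    the gradient-reversal layer used for adversarial training is omitted."""

    def __init__(self, hidden_size: int, num_protected_classes: int, n_heads: int = 5):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_size, num_protected_classes) for _ in range(n_heads)]
        )
        self.criterion = nn.CrossEntropyLoss()

    def forward(self, sentence_repr, protected_labels):
        logits = [head(sentence_repr) for head in self.heads]           # n_heads x (B, C)
        # Average the losses of the individual heads, as described above.
        loss = torch.stack(
            [self.criterion(l, protected_labels) for l in logits]
        ).mean()
        # Accuracy of the ensemble is measured via majority vote.
        votes = torch.stack([l.argmax(dim=-1) for l in logits], dim=0)  # (n_heads, B)
        majority = torch.mode(votes, dim=0).values
        accuracy = (majority == protected_labels).float().mean()
        return loss, accuracy
```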
All DIFFPRUN and MODDIFFY model variants are trained for 30 epochs, as they need to account for the two phases of *diff* pruning, where a model requires more training to recover its performance after the magnitude pruning step. We fix the learning rate of the BERT weights to 2e−5 and the learning rate for the classifier heads to 1e−4. We set the batch size to 64 for all experiments. We keep the other *diff*-specific hyperparameters the same as suggested by Guo et al. (2021). Adapter baselines follow Pfeiffer et al. (2021) with a reduction factor of two. The rest of the hyperparameters are the same as for the subnetwork-based models. Table 4 reports the hyperparameters of our experiments.

## B Additional Results

The results of all models using BERT-Mini are shown in Table 6 for the BIOS and FDCL18 datasets, and in Table 7 for PAN16. Table 8 reports the evaluation results of the INLP method.

| training                |          |
|-------------------------|----------|
| batch_size              | 64       |
| structured_diff_pruning | True     |
| alpha_init              | 5        |
| concrete_samples        | 1        |
| concrete_lower          | -1.5     |
| concrete_upper          | 1.5      |
| num_epochs              | 20       |
| num_epochs_finetune     | 15       |
| num_epochs_fixmask      | 15       |
| learning_rate           | 2e-05    |
| learning_rate_task_head | 0.0001   |
| learning_rate_adv_head  | 0.0001   |
| learning_rate_alpha     | 0.1      |
| task_dropout            | 0.3      |
| task_n_hidden           | 0        |
| adv_dropout             | 0.3      |
| adv_n_hidden            | 1        |
| adv_count               | 5        |
| adv_lambda              | 1.0      |
| sparsity_pen            | 1.25e-07 |
| max_grad_norm           | 1.0      |
| adv attack              |          |
| batch_size              | 64       |
| num_epochs              | 40       |
| learning_rate           | 0.0001   |
| adv_n_hidden            | 1        |
| adv_count               | 5        |
| adv_dropout             | 0.3      |

Table 4: Hyperparameters used for training

| Parameter Name         | Size        |
|------------------------|-------------|
| word_embeddings        | 117,204,480 |
| intermediate.dense     | 11,811,840  |
| output.dense           | 11,800,320  |
| attention.self.query   | 2,952,960   |
| attention.self.key     | 2,952,960   |
| attention.self.value   | 2,952,960   |
| attention.output.dense | 2,952,960   |
| position_embeddings    | 1,966,080   |
| output.adapter         | 590,976     |
| others                 | 7680        |

Table 5: BERT-Base number of parameters

## C Sparsity Rates Of Subnetworks

We visualize the percentage of non-masked parameters of the subnetworks in BERT-Base for each parameter matrix in Figures 2, 3 and 4 for MODDIFFY-PAR, MODDIFFY-POST, and DIFFPRUN-DEBIAS, respectively. In addition to the discussion in Section 5, we observe in these detailed figures that the LayerNorm module of the last Transformer block generally has a high density.
We assume | BIOS | FCDL18 | | | | | | | | |-----------------|--------------|-------------|--------------|---------|----------|------------|----------|------------| | Adversarial | MI Reduction | Adversarial | MI Reduction | | | | | | | Task↑ | Probe↓ | Task↑ | Probe↓ | Task↑ | Probe↓ | Task↑ | Probe↓ | | | FINETUNE | 82.90.1 | 65.50.4 | 82.90.1 | 65.50.4 | 82.10.2 | 90.30.9 | 82.10.2 | 90.30.9 | | ADAPTER | 81.60.2 | 65.70.2 | 81.60.2 | 65.70.2 | 81.90.0 | 79.20.5 | 81.90.0 | 79.20.5 | | FINETUNE-DEBIAS | 81.60.2 | 56.41.7 | 81.60.1 | 59.90.9 | 80.00.5 | 73.72.7 | 79.70.5 | 87.32.6 | | ADAPTER-DEBIAS | 81.50.1 | 63.40.1 | 81.70.1 | 65.40.1 | 81.10.2 | 64.6 ♣ 1.3 | 81.80.1 | 78.5 ♣ 0.5 | | DIFFPRUN | 83.50.1 | 65.80.3 | 83.50.1 | 65.80.3 | 82.80.1 | 92.60.9 | 82.80.1 | 92.60.9 | | DIFFPRUN-DEBIAS | 83.30.1 | 59.10.7 | 82.30.1 | 65.50.9 | 82.30.4 | 65.82.8 | 82.50.3 | 91.90.9 | | MODDIFFY-POST | 83.10.0 | 57.30.7 | 83.10.0 | 63.41.4 | 81.60.4 | 69.7 ♣ 1.9 | 82.3.0.1 | 89.30.4 | | 0.6 | 81.40.2 | 58.8♣ 0.8 | 80.20.3 | 73.86.7 | 79.980.6 | 85.41.6 | | | Table 6: Results of the BIOS and FCDL18 datasets on **BERT-Mini** with adversarial bias removal and mutual information (MI) reduction. Task performance is measured with accuracy, and bias mitigation with balanced accuracy of the probes. The protected attribute is gender for BIOS, and race-dialect for FCDL18. The results with the best bias mitigation performance (lowest values) among the models that use *diff* subnetworks (lower part of the table) are shown in **bold**, and among all models with the ♣ symbol. Model Adversarial Bias Mitigation MI Reduction Task↑ ProbeG ↓ ProbeA↓ Task↑ ProbeG↓ ProbeA↓ FINETUNE 91.50.2 64.80.6 46.80.3 91.50.2 64.80.3 88.41.1 ADAPTER 78.40.2 65.80.2 35.30.8 78.40.2 65.80.2 35.30.8 FINETUNE-DEBIASG 91.70.2 54.60.6 42.20.6 91.60.2 61.30.6 42.40.5 FINETUNE-DEBIASA 91.10.2 61.90.6 39.10.8 91.40.6 61.90.1 43.20.0 FINETUNE-DEBIASG&A 91.20.1 57.01.0 38.50.6 91.90.0 62.50.1 42.10.9 ADAPTER-DEBIASG 78.10.1 59.60.4 32.01.3 78.50.2 65.90.2 34.80.0 ADAPTER-DEBIASA 77.30.1 60.50.9 27.30.9 78.30.1 65.50.2 34.1 ♣ 0.3 ADAPTER-DEBIASG&A 80.90.6 55.60.4 25.5 ♣ 1.282.00.1 64.20.4 34.90.2 DIFFPRUN 90.00.1 67.20.3 49.40.8 90.00.1 67.20.3 49.40.8 DIFFPRUN-DEBIASG 90.10.1 54.1 ♣ 1.144.10.7 78.50.2 65.90.2 34.80.0 DIFFPRUN-DEBIASA 89.20.3 64.80.6 39.41.9 78.30.1 65.50.2 34.1 ♣ 0.3 DIFFPRUN-DEBIASG&A 89.10.1 57.91.7 35.82.2 86.30.0 68.80.1 52.30.9 MODDIFFY-POSTG 91.70.2 55.71.2 42.70.6 91.50.0 62.60.0 43.71.3 MODDIFFY-POSTA 91.40.2 62.30.5 34.80.8 91.50.0 62.80.3 44.00.0 MODDIFFY-POSTG + *ditto*A 91.00.4 59.30.6 37.31.2 !!! 91.50.5 62.80.4 43.90.8 MODDIFFY-POSTG&A 91.50.2 56.71.3 35.12.1 91.50.0 63.00.3 43.90.4 MODDIFFY-PARG 91.60.2 55.90.8 41.70.7 91.60.1 60.90.4 40.30.8 MODDIFFY-PARA 91.30.2 61.30.5 37.61.2 91.50.2 60.80.6 41.60.7 MODDIFFY-PARG + *ditto*A 91.40.4 60.71.0 39.61.6 91.70.1 60.2 ♣ 0.539.70.1 MODDIFFY-PARG&A 91.30.3 55.51.1 33.71.6 91.50.3 60.71.3 41.31.1 Table 7: Results of the PAN16 dataset on **BERT-Mini**. The subscripts G and A refer to the protected attributes gender and age, respectively. The sign G&A denotes that the bias mitigation loss of the model consists of the debiasing loss terms of both gender and age. The MODDIFFY models in the form of "+*ditto*" refer to the case, where first two debiasing subnetworks with the same core model are trained separately, and then they are added to the base model at inference time. 
The results with the best bias mitigation performance (lowest values) among the models that use *diff* subnetworks (lower part of the table) are shown in **bold**, and among all models with the ♣ symbol. | BIOS (gender) | FDCL18 (race) | | | | |---------------------|-----------------|----------|----------|---------| | Model | Task↑ | Probe↓ | Task↑ | Probe↓ | | BERT-Mini | 71.10.2 | 59.80.9 | 73.61.6 | 57.90.4 | | BERT-Base | 66.20.4 | 50.50.3 | 71.62.2 | 50.20.1 | | (a) BIOS and FDCL18 | | | | | | Model | Task↑ | ProbeG ↓ | ProbeA ↓ | | | BERT-Mini | G | 69.30.1 | 60.10.3 | 29.20.8 | | A | 66.71.8 | 60.60.1 | 25.60.2 | | | BERT-Base | G | 69.40.1 | 54.40.1 | 25.50.2 | | A | 49.80.8 | 54.60.6 | 27.50.2 | | | (b) PAN16 | | | | | that this is due to the additive nature of these *diff*based methods, as changing the weight magnitude through adding a subnetwork requires rescaling the final output. ## D Consistency In Finding Subnetworks Figures 5, 6 and 7 show the percentage of common non-masked parameters in subnetworks on a particular weight matrix/layer across 5 runs for MODDIFFY-PAR, MODDIFFY-POST, and DIFFPRUN-DEBIAS, respectively. We report the percentage of overlap between the subnetworks of two, three, four, and five runs, separated with the "/". The results show that the equivalent subnetworks across various runs (with different initialization seeds) seem to be largely separated. This results are consistent with the observations on the lottery ticket hypothesis on large neural networks (Chen et al., 2020). ![15_image_0.png](15_image_0.png) ![16_image_0.png](16_image_0.png) ![17_image_0.png](17_image_0.png) ![18_image_0.png](18_image_0.png) ![19_image_0.png](19_image_0.png) ![20_image_0.png](20_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Left blank. ✓ A2. Did you discuss any potential risks of your work? Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Left blank. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 4, 7 And Appendix A ✓ B1. Did you cite the creators of artifacts you used? Section 4 ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 4, 7 and Appendix A ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section 4 and Appendix A ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4 and Appendix A ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. 
For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4 and Appendix A ## C ✓ **Did You Run Computational Experiments?** Section 4, 5, And Appendix A ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix A The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 and Appendix A ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5, and Appendix B C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? No response. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
vladika-matthes-2023-scientific
Scientific Fact-Checking: A Survey of Resources and Approaches
https://aclanthology.org/2023.findings-acl.387
The task of fact-checking deals with assessing the veracity of factual claims based on credible evidence and background knowledge. In particular, scientific fact-checking is the variation of the task concerned with verifying claims rooted in scientific knowledge. This task has received significant attention due to the growing importance of scientific and health discussions on online platforms. Automated scientific fact-checking methods based on NLP can help combat the spread of misinformation, assist researchers in knowledge discovery, and help individuals understand new scientific breakthroughs. In this paper, we present a comprehensive survey of existing research in this emerging field and its related tasks. We provide a task description, discuss the construction process of existing datasets, and analyze proposed models and approaches. Based on our findings, we identify intriguing challenges and outline potential future directions to advance the field.
# Scientific Fact-Checking: A Survey Of Resources And Approaches

**Juraj Vladika** and **Florian Matthes**

Department of Computer Science
Technical University of Munich
Garching, Germany
{juraj.vladika, matthes}@tum.de

## Abstract

The task of fact-checking deals with assessing the veracity of factual claims based on credible evidence and background knowledge. In particular, scientific fact-checking is the variation of the task concerned with verifying claims rooted in scientific knowledge. This task has received significant attention due to the growing importance of scientific and health discussions on online platforms. Automated scientific fact-checking methods based on NLP can help combat the spread of misinformation, assist researchers in knowledge discovery, and help individuals understand new scientific breakthroughs. In this paper, we present a comprehensive survey of existing research in this emerging field and its related tasks. We provide a task description, discuss the construction process of existing datasets, and analyze proposed models and approaches. Based on our findings, we identify intriguing challenges and outline potential future directions to advance the field.

## 1 Introduction

In today's digital age, vast amounts of data are generated and new scientific breakthroughs are achieved at a rapid pace. With millions of scientific articles being published annually, it has become increasingly challenging for researchers and the general public to stay informed about the latest developments and discoveries across various fields. On top of that, an especially challenging task for researchers is finding appropriate evidence for the scientific claims and research hypotheses they are currently investigating. Exploring large academic databases and thoroughly examining the scientific publications they contain in order to verify specific facts is a time-consuming process. Automating the process of fact-checking scientific claims using methods based on Natural Language Processing (NLP) for knowledge exploration and evidence mining can greatly aid researchers in these efforts.

One way in which the Internet has benefited society is by making scientific knowledge easily accessible, transferable, and searchable in a matter of seconds. Inevitably, this has introduced new risks and challenges: it has become difficult to discern reliable sources from dubious content. Many scientific claims found in online articles, social media posts, or news reports are not always trustworthy or backed by reliable evidence. Furthermore, not only are humans prone to creating inaccurate information; modern generative language models can also produce misleading text that sounds convincing. All of these factors, combined with the quick pace at which content is proliferated online, contribute to the spread of misinformation, which has negative societal consequences (West and Bergstrom, 2021).

Fact-checking is the task of assessing the veracity of factual claims appearing in written or spoken sources. It is traditionally performed manually by experts in journalism and dedicated applied fields. Automated fact-checking appeared as an approach where methods of Natural Language Processing (NLP) and Machine Learning (ML) are used to assist experts in making these decisions or to completely automate the whole process (Nakov et al., 2021). Fact-checking becomes especially relevant during major political events like elections or referendums because of a sharp increase in deceptive and propagandist content.
Most recently, the COVID-19 pandemic has brought the scientific discourse and the misinformation that comes with it into the spotlight. Medical misinformation is especially dangerous because it has influenced people to try unproven cures and treatments and to make harmful health-related decisions (Roozenbeek et al., 2020; Pennycook et al., 2020).

We define *scientific fact-checking* as a subset of the fact-checking task concerned with verifying the veracity of claims related to scientific knowledge. While the primary role of general fact-checking is to help detect misinformation and curb its spread, scientific fact-checking additionally aids scientists in testing their hypotheses and helps wider audiences contextualize new scientific findings. The most popular domain in scientific NLP research is the biomedical domain (Rajpurkar et al., 2022), but insights learned from it can be generalized to other scientific domains. Scientific fact-checking can be performed both over the highly structured and complex language of science found in research publications and over the more easily understandable language found in news articles and online postings meant for lay audiences. Many scientists have decried the misinterpretation of their work when presented in the news press (Yavchitz et al., 2012), which makes scientific fact-checking even more relevant in bridging the gap between these two registers by performing an evidence-based assessment of scientific discoveries.

Considering the constantly increasing amount of misinformation in the digital era and the expanding number of scientific publications, the interest in developing automated fact-checking solutions and efficient resources for it is on the rise. We present this survey to systematize the existing work in this area. To the best of our knowledge, this is the first survey on fact-checking with a specific focus on the scientific domain. Our three main contributions are:

1. We describe existing datasets for scientific fact-checking, including their construction process and main characteristics.
2. We analyze the developed approaches and models for solving the task of scientific fact-checking, focusing on their components and design choices.
3. We outline general findings, identify challenges, and highlight promising future directions for this emergent task.

## 2 Task Definition

## 2.1 General Fact-Checking

In general, *fact-checking* can be defined as the task of assessing whether a factual claim is valid based on evidence. It is a time-consuming task that is still usually performed manually by journalists. Automated approaches based on NLP have emerged to assist humans in parts of the fact-checking process. Popular datasets used for benchmarking this task in NLP contain rewritten Wikipedia sentences as claims and annotated articles as evidence (Thorne et al., 2018; Jiang et al., 2020). For real-world settings, datasets were constructed by collecting claims and expert-written verdicts from dedicated fact-checking websites, such as PolitiFact (Vlachos and Riedel, 2014), Snopes (Hanselowski et al., 2019), or MultiFC (Augenstein et al., 2019), which draws from 26 fact-checking portals. Datasets of this type usually contain claims currently trending in society, related to topics from world news, politics, media, or online rumors and hoaxes.

## 2.2 Scientific Fact-Checking

We define *scientific fact-checking* as a variation of the fact-checking task that deals with assessing claims rooted in scientific knowledge.
The dominant purpose of general fact-checking is to combat the spread of misinformation, while scientific fact-checking has the additional motive of helping scientists verify their research hypotheses, discover evidence, and facilitate scientific work. Scientific fact-checking comes with specific challenges not always present in general fact-checking, such as:

- **Claims:** Facts to be checked can be research hypotheses that scientists want to verify, claims made by everyday social media users, or queries posed to search engines dealing with scientific concepts (e.g., health-related concerns).
- **Evidence:** Scientific knowledge is constantly evolving as new research is conducted, which can make previous evidence obsolete and invalid. Moreover, different studies can come to diverging conclusions, which complicates the final assessment of a claim. In clinical settings, this obstacle is mitigated by systematic reviews, which provide levels of evidence and strengths of recommendations for any decision.
- **Domain:** The scientific language used in research publications is highly complex and contains domain-specific terminology, which presents a challenge for a general-purpose language model. This requires adapting NLP systems to the scientific domain. On top of that, scientific text often contains relations between concepts spanning multiple sentences, which makes representation of the full context and long-text modeling an essential aspect.
- **Structure:** The highly structured nature of scientific knowledge makes it convenient to model it with structured representations like knowledge graphs, which can aid the fact-checking process. On the other hand, scientific publications commonly include different visualization techniques like tables, charts, and figures, all of which introduce additional multimodal challenges to verification.

These characteristics and other challenges of scientific fact-checking will be discussed in more detail in the following sections, especially in the Discussion section.

## 3 Related Tasks

In this section, we present tasks related to scientific fact-checking. We group them into three categories: (1) tasks related to misinformation detection; (2) retrieval of claims, arguments, and evidence from text; and (3) NLP tasks in the scientific domain.

## 3.1 Misinformation Detection

Since the principal function of fact-checking is to curb the spread of misinformation, it naturally belongs to a group of NLP tasks concerned with misinformation detection. Related tasks in this domain include fake news detection (Zhou and Zafarani, 2020), propaganda detection (Da San Martino et al., 2021), rumor detection (Bian et al., 2020), and stance detection (Hardalov et al., 2022). While most of these tasks deal with misinformation related to politics and society, there has recently been an increase in scientific and health-related misinformation detection, especially pertaining to content related to the COVID-19 pandemic (Shahi and Nandini, 2020; Hossain et al., 2020; Antypas et al., 2021).

## 3.2 Claim Detection And Evidence Mining

A crucial prerequisite for automated fact-checking is devising methods that detect claims in the open domain. To achieve this, Yuan and Yu (2019) used a rule-based system to identify health claims in news headlines, while Wührl and Klinger (2021) developed a BERT-based model to detect biomedical claims in social media posts.
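As a hedged illustration of what such claim detection can look like in practice, the sketch below uses an off-the-shelf NLI model as a zero-shot classifier to flag sentences that make a checkable biomedical claim. It is not the rule-based or BERT-based system cited above; the checkpoint, the toy post, and the label phrasing are our own illustrative choices.

```python
from transformers import pipeline

# A zero-shot classifier (backed by an NLI model) decides, per sentence,
# whether a post makes a checkable biomedical claim. Checkpoint and label
# wording are illustrative, not taken from the cited systems.
detector = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

post = ("Just read that vitamin D supplements prevent severe COVID-19. "
        "Hope everyone is staying safe this winter!")
candidate_labels = ["a factual biomedical claim", "personal chatter"]

for sentence in post.split(". "):
    result = detector(sentence, candidate_labels)
    print(f"{result['labels'][0]:>26}  <-  {sentence}")
```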
After the claims are detected, an important next step is determining whether a claim is check-worthy, since not all claims are deemed relevant or interesting enough to be fact-checked. Check-worthiness for scientific claims was studied in the shared task CLEF-CheckThat! (Nakov et al., 2022) and by Zuo et al. (2022), where annotators helped construct a dataset of health-related claims from news articles.

Automatic gathering of evidence for scientific claims constitutes another line of research. There is work in this area focusing on the humanities and social sciences (Stahlhut, 2021), although the majority of the work we found is once again in the life sciences. Numerous tools have been developed for searching PubMed, the largest database of biomedical publications (Lu, 2011), such as PubTator (Wei et al., 2019), Textpresso (Müller et al., 2018), LitSense (Allot et al., 2019), and EvidenceMiner (Wang et al., 2020). These methods usually look at the posed query (claim) and detect named entities, keywords, or metadata patterns to retrieve relevant results from the database. The end goal of this process is to help scientists gather evidence for their research, while in fact-checking, evidence retrieval is just one component of the whole process.

## 3.3 Scientific NLP Tasks

Scientific fact-checking belongs to a group of NLP tasks dealing with scientific text understanding. These tasks share a common challenge: working with highly complex scientific language and specific terminology. This has become even more apparent with the underwhelming performance of large language models, pre-trained on vast amounts of news data and web content, on NLP tasks in the scientific domain. Domain adaptation is therefore an essential cornerstone of modern NLP models working with specialized domains.

The task of Natural Language Inference (NLI), commonly equated with Recognizing Textual Entailment (RTE), is the task of inferring whether a premise entails or contradicts a given hypothesis. This task is a crucial component of automated fact-checking, since predicting the final veracity of a claim is modeled as entailment recognition between the claim and the found evidence. For the scientific domain, available datasets include MedNLI, which features medical claims rooted in the medical history of patients (Romanov and Shivade, 2018); SciNLI, which has claims from the domain of computational linguistics (Sadat and Caragea, 2022); and NLI4CT, with claims and evidence that originate from clinical trial reports of breast cancer patients (Vladika and Matthes, 2023).
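To make the NLI formulation concrete, the sketch below runs an off-the-shelf NLI model on a claim paired with an evidence sentence. The checkpoint, the example pair, and the mapping from entailment/contradiction/neutral to fact-checking verdicts are our own illustrative assumptions, not a method proposed in the surveyed work.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Example NLI checkpoint; any model with entailment/neutral/contradiction
# labels would work. The claim, evidence, and verdict mapping are illustrative.
name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

evidence = ("In a randomized trial, daily zinc supplementation did not "
            "shorten the duration of the common cold.")
claim = "Zinc supplements cure the common cold."

# The evidence acts as the premise and the claim as the hypothesis.
inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()

nli_label = model.config.id2label[int(probs.argmax())].upper()
verdict = {"ENTAILMENT": "SUPPORTED", "CONTRADICTION": "REFUTED"}.get(
    nli_label, "NOT ENOUGH INFORMATION")
print(nli_label, "->", verdict)
```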
Another knowledge-intensive NLP task related to fact-checking is question answering. In particular, open-domain question answering aims to find answers to given questions in unstructured textual corpora (Karpukhin et al., 2020), reminiscent of the process of finding relevant evidence for given claims in fact-checking. Popular datasets for biomedical QA are BioASQ (Tsatsaronis et al., 2015) and PubMedQA (Jin et al., 2019). Another important benchmark is BLURB (Biomedical Language Understanding and Reasoning Benchmark), introduced by Gu et al. (2022) to measure the performance of models on six different natural language understanding tasks over biomedical text. Finally, automated evidence synthesis is a task that aims to automate the process of creating systematic reviews for clinical trials (Brassey et al., 2021).

## 3.4 Related Surveys

There are already existing surveys that cover general automated fact-checking (Thorne and Vlachos, 2018; Zeng et al., 2021; Guo et al., 2022) by formalizing the task, outlining the most important datasets and proposed solutions, and discussing challenges. The survey by Kotonya and Toni (2020a) focuses on explainability methods in existing fact-checking approaches and presents the most important explainability aspects these systems should satisfy. The survey by Bekoulis et al. (2021) focuses on approaches for tackling FEVER, the most popular dataset for fact verification (Thorne et al., 2018).

| Dataset | # Claims | Claim Origin | Evidence Source | Domain |
|---|---|---|---|---|
| SCIFACT (Wadden et al., 2020) | 1,409 | Researchers | Research papers | Biomedical |
| PUBHEALTH (Kotonya and Toni, 2020b) | 11,832 | Fact-checkers | Fact-checking sites | Public health |
| CLIMATE-FEVER (Diggelmann et al., 2020) | 1,535 | News articles | Wikipedia articles | Climate change |
| HEALTHVER (Sarrouti et al., 2021) | 1,855 | Search queries | Research papers | Health |
| COVID-FACT (Saakyan et al., 2021) | 4,086 | Reddit posts | Research, news | COVID-19 |
| COVERT (Mohr et al., 2022) | 300 | Twitter posts | Research, news | Biomedical |

Table 1: Datasets for the task of scientific fact-checking and claim verification

## 4 Datasets

In this section, we outline the existing datasets for scientific fact-checking that we found in the literature. The discovery process started with querying the well-known databases ACL Anthology (https://aclanthology.org/), IEEE Xplore, and ACM Digital Library with the search string ("scientific" OR "biomedical") AND ("fact checking" OR "fact verification" OR "claim verification"). Retrieved articles were collected, and the list was further expanded with any cited or citing paper from the initial batch of articles, according to Semantic Scholar. In order for a dataset to be considered a fact-checking dataset, we stipulate that it needs to provide claims, evidence (either documents or sentences), and final veracity labels. Such a dataset enables both the task of evidence retrieval and verdict prediction. This is important because the end goal of many automated fact-checking systems is to emulate the work of experts, where both seeking the evidence and making conclusions based on it constitute the process. This requirement narrowed the final list down to the datasets summarized in Table 1. In the remainder of the section, we describe the process and challenges related to constructing these datasets.

## 4.1 Claim Creation

The starting point in the dataset construction process is collecting the claims that will later be fact-checked.
Claims in fact-checking are usually divided into synthetic ones, referring to claims written by annotators (e.g., by modifying sentences from Wikipedia), and natural ones, which are claims crawled from real-world sources like fact-checking sites or social media posts. The first type of claim ends up being fluent, atomic, and decontextualized, which is very appropriate for processing by NLP models (Wright et al., 2022b). Other authors focus on more organic and noisy claims found in online posts, since such claims are usually relevant and interesting enough to be fact-checked automatically (Mohr et al., 2022).

One common approach is to take original sentences from an appropriate source and have annotators reformulate them into a cleaner form. The dataset SCIFACT features biomedical claims that originate from human-written citation sentences in research articles, but with the final form rewritten by annotators to make them more atomic and easily processed. Similarly, claims in HEALTHVER originate from Bing snippets of the most-searched user queries related to health and COVID-19, eventually reformulated by annotators. In the same vein, CLIMATE-FEVER contains sentences related to climate change extracted from online blogs and news websites, rewritten by annotators.

The remaining datasets from Table 1 relied on fully automatic claim collection. PUBHEALTH used news titles from fact-checking articles related to public health as its claims. This assumption works in many cases where titles are indeed factual claims, but some examples in the final dataset are generic titles with no relevance for fact-checking. COVID-FACT scraped claims from posts of the highly moderated subreddit *r/COVID19*, where users were already required to make atomic claims in their post titles. Its creators also automatically constructed all of the negative (refuted) claims with word in-filling from masked language models, which resulted in some unusable examples. Finally, COVERT is the only dataset in the list that features completely organic claims found in Twitter posts. Its authors used the biomedical claim detection model of Wührl and Klinger (2021) to extract claims that feature a causative relation and also include mentions of biomedical entities.

## 4.2 Evidence Set Construction

Once the claims are collected, the next step is pairing them with appropriate evidence that addresses their veracity. The evidence sources are often scientific publications, featuring highly complex and structured scientific language, or more easily understandable sources like news articles and Wikipedia articles. While working with text from scientific publications is more challenging for humans and NLP models alike, such publications provide more rigorous scientific evidence. On the other hand, general-purpose text provides evidence in a more explainable and intuitive form for a wider audience.

The SCIFACT dataset pairs the claims with the abstracts of the scientific publications they originated from, adding distractor abstracts to make detecting appropriate evidence more challenging. Likewise, claims in HEALTHVER are also mapped to appropriate scientific publications found by the annotators. The datasets COVID-FACT and COVERT feature a combination of both scientific publications and news articles as their evidence source, while PUBHEALTH uses solely the web articles from the fact-checking websites where its claims originated. In the same way as the original FEVER dataset, CLIMATE-FEVER uses Wikipedia articles as its evidence source.
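Putting these two construction steps together, a single example in such a dataset can be thought of as a small record that pairs a claim with evidence and a veracity label (discussed next). The sketch below is purely illustrative: the field names, identifiers, and values are our own and do not reproduce the exact schema of any dataset in Table 1.

```python
from dataclasses import dataclass, field

@dataclass
class FactCheckingInstance:
    """One example in a scientific fact-checking dataset: a claim, the
    evidence addressing it, and a veracity label. Field names and values
    are illustrative, not the exact schema of any dataset in Table 1."""
    claim: str
    evidence_doc_id: str                        # e.g., an abstract or article identifier
    rationale_sentences: list = field(default_factory=list)
    label: str = "NOT ENOUGH INFORMATION"       # or SUPPORTED / REFUTED

example = FactCheckingInstance(
    claim="Vitamin D supplementation reduces the risk of respiratory infections.",
    evidence_doc_id="PMID:0000000",             # placeholder identifier
    rationale_sentences=[
        "A meta-analysis of randomized trials reported a modest protective "
        "effect of vitamin D supplementation against acute respiratory infections."
    ],
    label="SUPPORTED",
)
print(example.label, "-", example.claim)
```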
## 4.3 Class Labels

Another integral component of dataset construction is labeling the claims with appropriate veracity labels. Following the tradition set by the FEVER dataset (Thorne et al., 2018), most of the datasets include three labels: SUPPORTED, REFUTED, and NOT ENOUGH INFORMATION (NEI). The NEI label has a different meaning in different datasets. In SCIFACT, this label refers to those claims for which none of the candidate abstracts contain suitable evidence to make a decision. In other datasets, it refers to the case where the relevant evidence itself implies or states that there is currently not enough information to make a reliable and informed conclusion about the claim's veracity. Additionally, the dataset PUBHEALTH is the only one to feature a MIXED label, denoting a claim that consists of multiple factual statements with opposite veracity labels.

## 5 Approaches

In this section, we describe different modeling approaches devised for the task of scientific fact-checking. The standard framework usually consists of three major components that can all be modeled as well-established NLP tasks: document retrieval, evidence (rationale) selection, and verdict prediction (Zeng et al., 2021). This framework is visualized in Figure 1. Table 2 summarizes the models we found in the literature, developed for the scientific fact-checking datasets from the previous section, with the three framework components of each model highlighted. While the most common approach is building separate models for each component and applying them in a pipeline, the best-performing systems
Searching for evidence in a small corpus of documents (5k in SCIFACT ) is useful for experimental settings but not realistic for real-world settings where large databases with millions of scientific publications have to potentially be queried to find appropriate evidence. When expanding document retrieval for SCIFACT to 500k documents in (Wadden et al., 2022a) and using the same BM25 + T5 re-ranking approach, the authors noticed performance drops of at least 15 points in the final F1 score of veracity prediction. This shows the need for a more precise semantic search of evidence documents. The authors of COVID-FACT tackle this by using snippets of the top 10 results returned by Google Search API for a given claim. This mimics how humans would approach fact-checking, but usually, additional verification of source quality and trustworthiness is needed in such an approach. ## 5.2 Evidence Selection Evidence selection is the task of selecting relevant rationale sentences from the previously retrieved documents to be used as evidence for claim veracity prediction in the next step. Even though this step can be modeled as a span detection task, evidence is usually modeled at a sentence level. It can then be taken as a binary classification task of predicting whether a sentence is relevant or irrelevant. Most commonly, top k sentences are selected, similarly to the document retrieval step. A common approach to evidence selection is to deploy models for sentence similarity and take those sentences that are the most similar to the claim being checked. The baselines for PUBHEALTH and COVID-FACT both use the SentenceBERT model (Reimers and Gurevych, 2019) to retrieve the top 5 most similar sentences. SentenceBERT is a model based on siamese networks and provides semantically rich sentence embeddings that can easily be compared using cosine-similarity. VerT5erini uses a T5 model fine-tuned on MS MARCO (same as in the previous step) for this task. 
| Dataset | Model | Document Retrieval | Rationale Selection | Verdict Prediction | Result (F1) |
|---|---|---|---|---|---|
| SCIFACT | VeriSci (Wadden et al., 2020) | TF-IDF | BERT | BERT | 0.395 |
| SCIFACT | ParagraphJoint (Li et al., 2021) | BioSentVec | BERT + MLP | BERT + MLP / BERT + KGAT | 0.609 |
| SCIFACT | VerT5erini (Pradeep et al., 2021) | BM25 + T5 re-ranker (tuned on MS MARCO) | T5 (tuned on MS MARCO) | T5 (no fine-tuning) | 0.634 |
| SCIFACT | ARSJoint (Zhang et al., 2021) | BioSentVec | BioBERT, MLP | BioBERT, MLP | 0.655 |
| SCIFACT | MultiVerS (Wadden et al., 2022b) | BM25 + T5 re-ranker | Longformer (binary head) | Longformer (ternary head) | 0.672 |
| COVERT | Zero-shot MultiVerS (Wührl and Klinger, 2022) | BM25 + T5 re-ranker | Longformer (binary head) | Longformer (ternary head) | 0.620 |
| PUBHEALTH | Baseline (Kotonya and Toni, 2020b) | provided | Sentence-BERT | SciBERT | 0.705 |
| CLIMATE-FEVER | ClimateBERT (Webersinke et al., 2021) | provided | provided | ClimateBERT | 0.757 |
| HEALTHVER | Baseline (Sarrouti et al., 2021) | provided | provided | T5-base | 0.796 |
| COVID-FACT | Baseline (Saakyan et al., 2021) | Google Search | Sentence-BERT | RoBERTa (fine-tuned on GLUE) | 0.820 |

Table 2: Models developed for scientific fact-checking, with the three pipeline components and the verdict prediction performance on their respective dataset

While using sentence similarity for evidence selection is a straightforward and intuitive approach, it can fall short because evidence sentences could be paraphrased or use rather different wording from the original claim. Consequently, Wright et al. (2022a) improve the performance of evidence selection on the COVERT and COVID-FACT datasets by fine-tuning sentence similarity models on pairs of sentences about scientific findings from scientific articles matched with paraphrased sentences from news and social media reporting on these findings.

In all of the mentioned approaches, evidence selection and verdict prediction are performed with two separate models, which means that the final claim veracity predictor might not have knowledge of the full context of the evidence. ParagraphJoint, ARSJoint, and MultiVerS are so-called joint models because they all use multi-task learning to jointly learn the tasks of rationale selection and verdict prediction. For this purpose, they use a shared representation of the claim and the abstract, obtained by concatenating the claim with the full abstract of a candidate document and converting it to a dense representation. This alleviates the problem of missing context during final label prediction. ParagraphJoint uses BERT (Devlin et al., 2019) as the encoder model, while ARSJoint uses the domain-specific BioBERT model (Lee et al., 2020), pre-trained on the text of biomedical research publications. Evidence selection is performed by passing the representation of each candidate sentence (extracted from the full abstract representation) to a multi-layer perceptron (MLP) classifier. Likewise, MultiVerS obtains the joint claim-abstract representations and performs rationale selection with the Longformer model (Beltagy et al., 2020), a transformer model for long documents that takes up to 4096 tokens.
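A minimal sketch of what such a joint model can look like is given below. It follows the general recipe described above (a shared claim+abstract encoding, a binary rationale head per sentence, and a ternary label head), but the pooling choices, head shapes, and checkpoint are illustrative assumptions rather than the exact architecture of ParagraphJoint, ARSJoint, or MultiVerS.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class JointFactChecker(nn.Module):
    """Sketch of joint rationale selection and verdict prediction over a
    shared claim+abstract encoding. Details are illustrative assumptions."""

    def __init__(self, encoder_name: str = "allenai/longformer-base-4096", num_labels: int = 3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.rationale_head = nn.Linear(hidden, 2)        # evidence sentence: yes / no
        self.label_head = nn.Linear(hidden, num_labels)   # SUPPORTED / REFUTED / NEI

    def forward(self, input_ids, attention_mask, sentence_positions):
        # input_ids encode the concatenated claim and abstract; sentence_positions
        # holds the index of the first token of each abstract sentence, shape (B, S).
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        pooled = hidden[:, 0]                              # summary of claim + abstract
        batch_idx = torch.arange(hidden.size(0)).unsqueeze(1)
        sent_vecs = hidden[batch_idx, sentence_positions]  # (B, S, hidden)
        return self.rationale_head(sent_vecs), self.label_head(pooled)
```

During training, the cross-entropy losses of the two heads would typically be summed, which is the multi-task objective referred to above.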
## 5.3 Verdict Prediction

The final step of the fact-checking pipeline is for a model to produce the verdict on a given claim's veracity. As mentioned in the datasets section, the most common setting is to have three labels (SUPPORTED, REFUTED, NOT ENOUGH INFORMATION), although models developed for one set of labels can be adapted to a dataset with a different set of labels. This component can easily be modeled as a classification task where the classifier learns to predict one of the three classes. All the baselines from Table 2 perform this task by fine-tuning large language models for label prediction on their respective datasets. The base models used include the general-purpose BERT or T5 and the domain-specific BioBERT and SciBERT (Beltagy et al., 2019) models. These models receive as input pairs of claims and the accompanying rationale sentences selected in the previous step and then produce the final verdict as output.

As described previously, the joint models developed for solving SCIFACT use multi-task learning to learn both the evidence selection and verdict prediction steps with a shared claim+abstract representation. Both ParagraphJoint and ARSJoint again use a dedicated MLP that takes the previous step's representation. At the same time, ParagraphJoint also experimented with the Kernel Graph Attention Network (KGAT), which performed well for general fact-checking datasets by learning relations between evidence sentences using a graph structure (Liu et al., 2020). MultiVerS once again uses the Longformer model, this time with a three-way classification head over the encoding of the entire claim and rationale sentences.

The MultiVerS model was also used in a zero-shot setting by Wührl and Klinger (2022) to fact-check the COVERT dataset. Since this dataset consists of tweets and is considerably noisier than the expert-written claims found in SCIFACT, the authors transformed the tweets into atomic claims consisting of triples (entity, cause, entity). Such a representation significantly improved the performance on this dataset and showed that models developed for one scientific fact-checking dataset can provide promising results for other datasets when the claims are represented in an appropriate form.

## 6 Discussion

In this section, we discuss the current challenges in scientific fact-checking and provide directions for future work and trends.

Evidence quality. A common challenge in fact-checking is ensuring that the evidence used for making veracity decisions is appropriate and of high quality. Especially in scientific fact-checking, the nature of scientific knowledge is such that it is updated and readjusted as new discoveries appear, so a claim that was once refuted by evidence could become supported by more substantial, more recent evidence. Time-aware scoring for evidence ranking was explored for general fact-checking (Allein et al., 2021). Additionally, scientific sources can contradict one another and give differing results for the same research hypotheses, which is related to the ML concept of learning with label disagreement (Uma et al., 2021). In the medical field, systematic reviews provide evidence-based clinical recommendations with a level of evidence (how much testing was performed) and a strength of recommendation (is it just a hint or a strict medical recommendation) (Cro et al., 2020). So far, none of the datasets take into account evidence that changes over time, disagreeing evidence, or differing levels and strengths of evidence.
A promising research direction is constructing resources and benchmarks that would consider these intricacies of scientific fact-checking.

Reasoning and Explainability. Fact-checking is one of the NLP tasks where making the models and their decision process transparent and explainable to humans is of high importance for their wide-scale adoption (Augenstein, 2021). Modern deep neural models for NLP tasks are generally described as black-box models, and their inner workings are still hard to grasp completely. While there have been explainable approaches for general fact-checking, the only explainable method in this survey was proposed by Kotonya and Toni (2020b). It uses a combination of extractive and abstractive text summarization of evidence source documents to provide end users with a concise explanation of why a certain verdict was produced. Considering that scientists often present their thoughts with argumentative structures (Lauscher et al., 2018), a promising research approach is learning the conceptual relations between multiple pieces of evidence to come up with a conclusion. This was used by Krishna et al. (2022) to develop a neuro-symbolic model that learns logical relations between evidence sentences for FEVER. Another promising research avenue is using counterfactual explanations, which have proven useful in many NLP tasks (Keane et al., 2021).

Dataset size. A common obstacle in fact-checking for all domains and related misinformation detection tasks is the small size of existing datasets. One way to overcome this performance hindrance is combining multiple scientific fact-checking datasets or datasets for related NLP tasks that deal with seeking rationales in text. The MultiVerS model described in the previous section utilized this approach by combining the HEALTHVER, COVID-FACT, and SCIFACT datasets with the FEVER, PubMedQA, and EvidenceInference datasets to improve the final performance on the fact-checking task. Other than combining datasets for training purposes, another emerging approach to mitigate the lack of training data is generating new scientific claims to augment the existing data. Wright et al. (2022b) apply this approach by using the generative model BART and external biomedical knowledge sources to construct claims, while showing promising zero-shot performance.

External knowledge. Scientific knowledge is complex and contains many interconnected concepts. This makes it suitable for representation with structures like Knowledge Graphs (KGs) that model world knowledge in the form of entities and relations between them. KGs have been constructed for various scientific disciplines, while the most well-known one for biomedical knowledge is the Unified Medical Language System (UMLS) (Bodenreider, 2004), which models various interactions between proteins, drugs, diseases, genes, and other concepts. KGs have proven useful in enhancing a wide array of NLP tasks (Schneider et al., 2022). Enhancing BERT with infused disease knowledge from MeSH (He et al., 2020b) and structured medical knowledge from UMLS (He et al., 2020a) showed improved performance on knowledge-intensive biomedical NLP tasks, as well as for open-domain question answering (Yu et al., 2021). Recent work has shown that reasoning over knowledge graphs can improve encyclopedic fact verification (Kim et al., 2023).

Multimodality and multilinguality.
Misinformation is increasingly being spread in forms other than text, including misleading images, artificially constructed videos, or incorrect figures (Nielsen and McConville, 2022). Visuals were an especially popular tool for spreading misinformation about the COVID-19 pandemic (Brennen et al., 2020). Particularly in scientific publications, authors present their data in the form of figures, tables, and other visualizations. The FEVEROUS shared task (Aly et al., 2021) made progress in this direction by requiring participants to develop systems that verify claims over evidence in structured formats (tables and lists). Beyond multiple modalities, online claims are made in a multitude of world languages, which calls for the development of efficient multilingual models for scientific fact-checking.

Human-centered fact-checking. Most of the developed fact-checking systems are still limited in practical use because their system design often does not take into account how fact-checking is done in the real world (Glockner et al., 2022) and ignores the insights and needs of the various stakeholder groups core to the fact-checking process (Juneja and Mitra, 2022). Several works have started to investigate human evaluation in fact-checking systems. Examples include effectively delivering misinformation detection results to users (Seo et al., 2019) or guiding the user toward fact-checked news (Lo et al., 2022). Making the process of NLP-based fact-checking more human-centered is a promising future direction that will make it more reliable, trustworthy, and easier to adopt at a wide scale.

## 7 Conclusion

In this survey, we reviewed and systematized existing datasets and solutions for the task of scientific fact-checking. We introduced the task and compared it to related NLP endeavors, described the existing datasets and their construction process, and explained the models used for scientific fact-checking along with their pipeline components. Finally, we provided a critical discussion of current challenges and highlighted promising future directions for the task of scientific fact-checking.

## 8 Limitations

Even though we performed a rigorous literature search to try to cover all existing work on scientific fact-checking, there is possibly work that was left uncovered due to different keywords or naming conventions (e.g., fact-checking vs. claim verification). Whenever possible, we tried covering all related work and all relevant cited papers.

All approaches for automated scientific fact-checking described in this work are still not safe for widespread adoption in practice due to constraints on their performance. Deploying automated fact-checking systems that produce incorrect verdicts could lead to mistrust in their usefulness and in the process of fact-checking itself, including the work of dedicated manual fact-checkers.

## Acknowledgements

This research has been supported by the German Federal Ministry of Education and Research (BMBF) grant 01IS17049 Software Campus 2.0 (TU München). We would like to thank the anonymous reviewers for their helpful feedback.

## References

Liesbeth Allein, Isabelle Augenstein, and Marie-Francine Moens. 2021. Time-aware evidence ranking for fact-checking. *Journal of Web Semantics*, 71:100663.

Alexis Allot, Qingyu Chen, Sun Kim, Roberto Vera Alvarez, Donald C Comeau, W John Wilbur, and Zhiyong Lu. 2019. LitSense: making sense of biomedical literature at sentence level. *Nucleic Acids Research*, 47(W1):W594–W599.
Rami Aly, Zhijiang Guo, Michael Sejr Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021. The fact extraction and VERification over unstructured and structured information (FEVEROUS) shared task. In *Proceedings of the* Fourth Workshop on Fact Extraction and VERification (FEVER), pages 1–13, Dominican Republic. Association for Computational Linguistics. Dimosthenis Antypas, Jose Camacho-Collados, Alun Preece, and David Rogers. 2021. COVID-19 and misinformation: A large-scale lexical analysis on Twitter. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop, pages 119–126, Online. Association for Computational Linguistics. Isabelle Augenstein. 2021. Towards Explainable Fact Checking. Ph.D. thesis. Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Christian Hansen, and Jakob Grue Simonsen. 2019. Multifc: A real-world multi-domain dataset for evidence-based fact checking of claims. In *Conference on Empirical* Methods in Natural Language Processing. Giannis Bekoulis, Christina Papagiannopoulou, and Nikos Deligiannis. 2021. A review on fact extraction and verification. *ACM Comput. Surv.*, 55(1). Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615– 3620, Hong Kong, China. Association for Computational Linguistics. Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150. Tian Bian, Xi Xiao, Tingyang Xu, Peilin Zhao, Wen bing Huang, Yu Rong, and Junzhou Huang. 2020. Rumor detection on social media with bi-directional graph convolutional networks. In AAAI Conference on Artificial Intelligence. Olivier Bodenreider. 2004. The Unified Medical Language System (UMLS): integrating biomedical terminology. *Nucleic Acids Research*, 32(suppl1):D267– D270. Jon Brassey, Christopher Price, Jonny Edwards, Markus Zlabinger, Alexandros Bampoulidis, and Allan Hanbury. 2021. Developing a fully automated evidence synthesis tool for identifying, assessing and collating the evidence. *BMJ Evidence-Based Medicine*, 26(1):24–27. J Brennen, Felix Simon, and Rasmus Nielsen. 2020. Beyond (mis)representation: Visuals in covid-19 misinformation. The International Journal of Press/Politics, 26. Qingyu Chen, Yifan Peng, and Zhiyong Lu. 2019. BioSentVec: creating sentence embeddings for biomedical texts. In *2019 IEEE International Conference on Healthcare Informatics (ICHI)*. IEEE. Suzie Cro, Tim P Morris, Michael G Kenward, and James R Carpenter. 2020. Sensitivity analysis for clinical trials with missing continuous outcome data using controlled multiple imputation: a practical guide. *Statistics in medicine*, 39(21):2815–2842. Giovanni Da San Martino, Stefano Cresci, Alberto Barrón-Cedeño, Seunghak Yu, Roberto Di Pietro, and Preslav Nakov. 2021. A survey on computational propaganda detection. In *Proceedings of the* Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI'20. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Thomas Diggelmann, Jordan L. Boyd-Graber, Jannis Bulian, Massimiliano Ciaramita, and Markus Leippold. 2020. Climate-fever: A dataset for verification of real-world climate claims. *ArXiv*, abs/2012.00614. Max Glockner, Yufang Hou, and Iryna Gurevych. 2022. Missing counter-evidence renders NLP fact-checking unrealistic for misinformation. In *Proceedings of* the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5916–5936, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2022. Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare, 3(1):1–23. Zhijiang Guo, Michael Schlichtkrull, and Andreas Vlachos. 2022. A survey on automated fact-checking. Transactions of the Association for Computational Linguistics, 10:178–206. Andreas Hanselowski, Christian Stab, Claudia Schulz, Zile Li, and Iryna Gurevych. 2019. A richly annotated corpus for different tasks in automated factchecking. In *Proceedings of the 23rd Conference on* Computational Natural Language Learning (CoNLL), pages 493–503, Hong Kong, China. Association for Computational Linguistics. Momchil Hardalov, Arnav Arora, Preslav Nakov, and Isabelle Augenstein. 2022. A survey on stance detection for mis- and disinformation identification. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1259–1277, Seattle, United States. Association for Computational Linguistics. Bin He, Di Zhou, Jinghui Xiao, Xin Jiang, Qun Liu, Nicholas Jing Yuan, and Tong Xu. 2020a. Bert-mk: Integrating graph contextualized knowledge into pretrained language models. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 2281–2290. Yun He, Ziwei Zhu, Yin Zhang, Qin Chen, and James Caverlee. 2020b. Infusing Disease Knowledge into BERT for Health Question Answering, Medical Inference and Disease Name Recognition. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 4604–4614, Online. Association for Computational Linguistics. Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, and Sameer Singh. 2020. COVIDLies: Detecting COVID-19 misinformation on social media. In *Proceedings of* the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020, Online. Association for Computational Linguistics. Yichen Jiang, Shikha Bordia, Zheng Zhong, Charles Dognin, Maneesh Singh, and Mohit Bansal. 2020. HoVer: A dataset for many-hop fact extraction and claim verification. In *Findings of the Association* for Computational Linguistics: EMNLP 2020, pages 3441–3460, Online. Association for Computational Linguistics. Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019. Pubmedqa: A dataset for biomedical research question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2567–2577. Prerna Juneja and Tanushree Mitra. 2022. Human and technological infrastructures of fact-checking. 
*Proc.* ACM Hum.-Comput. Interact., 6(CSCW2). Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics. Mark T. Keane, Eoin M. Kenny, Eoin Delaney, and Barry Smyth. 2021. If only we had better counterfactual explanations: Five key deficits to rectify in the evaluation of counterfactual xai techniques. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pages 4466–4474. International Joint Conferences on Artificial Intelligence Organization. Survey Track. Jiho Kim, Sungjin Park, Yeonsu Kwon, Yohan Jo, James Thorne, and Edward Choi. 2023. Factkg: Fact verification via reasoning on knowledge graphs. In *Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics*, Toronto, Canada. Association for Computational Linguistics. Neema Kotonya and Francesca Toni. 2020a. Explainable automated fact-checking: A survey. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 5430–5443, Barcelona, Spain (Online). International Committee on Computational Linguistics. Neema Kotonya and Francesca Toni. 2020b. Explainable automated fact-checking for public health claims. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7740–7754, Online. Association for Computational Linguistics. Amrith Krishna, Sebastian Riedel, and Andreas Vlachos. 2022. Proofver: Natural logic theorem proving for fact verification. *Transactions of the Association for* Computational Linguistics, 10:1013–1030. Anne Lauscher, Goran Glavaš, and Simone Paolo Ponzetto. 2018. An argument-annotated corpus of scientific publications. In Proceedings of the 5th Workshop on Argument Mining, pages 40–46, Brussels, Belgium. Association for Computational Linguistics. Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234–1240. Xiangci Li, Gully A Burns, and Nanyun Peng. 2021. A paragraph-level multi-task learning model for scientific fact-verification. In *SDU@ AAAI*. Zhenghao Liu, Chenyan Xiong, Maosong Sun, and Zhiyuan Liu. 2020. Fine-grained fact verification with kernel graph attention network. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7342–7351, Online. Association for Computational Linguistics. Kuan-Chieh Lo, Shih-Chieh Dai, Aiping Xiong, Jing Jiang, and Lun-Wei Ku. 2022. Victor: An implicit approach to mitigate misinformation via continuous verification reading. In *Proceedings of the ACM* Web Conference 2022, WWW '22, page 3511–3519, New York, NY, USA. Association for Computing Machinery. Zhiyong Lu. 2011. PubMed and beyond: a survey of web tools for searching biomedical literature. Database, 2011. Baq036. Isabelle Mohr, Amelie Wührl, and Roman Klinger. 2022. Covert: A corpus of fact-checked biomedical covid19 tweets. In Proceedings of the Language Resources and Evaluation Conference, pages 244–257, Marseille, France. European Language Resources Association. H.-M Müller, Kimberly Auken, Y. Li, and P. Sternberg. 2018. 
Textpresso central: A customizable platform for searching, text mining, viewing, and curating biomedical literature. *BMC Bioinformatics*, 19. Preslav Nakov, Alberto Barrón-Cedeño, Giovanni da San Martino, Firoj Alam, Julia Maria Struß, Thomas Mandl, Rubén Míguez, Tommaso Caselli, Mucahid Kutlu, Wajdi Zaghouani, Chengkai Li, Shaden Shaar, Gautam Kishore Shahi, Hamdy Mubarak, Alex Nikolov, Nikolay Babulkov, Yavuz Selim Kartal, Michael Wiegand, Melanie Siegel, and Juliane Köhler. 2022. Overview of the clef–2022 checkthat! lab on fighting the covid-19 infodemic and fake news detection. In *Experimental* IR Meets Multilinguality, Multimodality, and Interaction: 13th International Conference of the CLEF Association, CLEF 2022, Bologna, Italy, September 5–8, 2022, Proceedings, page 495–520, Berlin, Heidelberg. Springer-Verlag. Preslav Nakov, David Corney, Maram Hasanain, Firoj Alam, Tamer Elsayed, Alberto Barr'on-Cedeno, Paolo Papotti, Shaden Shaar, and Giovanni Da San Martino. 2021. Automated fact-checking for assisting human fact-checkers. In *IJCAI*. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. In *CoCo@ NIPs*. Dan S. Nielsen and Ryan McConville. 2022. Mumin: A large-scale multilingual multimodal fact-checked misinformation social network dataset. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '22, page 3141–3153, New York, NY, USA. Association for Computing Machinery. Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 708–718, Online. Association for Computational Linguistics. Gordon Pennycook, Jonathon McPhetres, Yunhao Zhang, Jackson G. Lu, and David G. Rand. 2020. Fighting covid-19 misinformation on social media: Experimental evidence for a scalable accuracy-nudge intervention. *Psychological Science*, 31(7):770–780. PMID: 32603243. Ronak Pradeep, Xueguang Ma, Rodrigo Nogueira, and Jimmy Lin. 2021. Scientific claim verification with VerT5erini. In Proceedings of the 12th International Workshop on Health Text Mining and Information Analysis, pages 94–103, online. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(1). Pranav Rajpurkar, Emma Chen, Oishi Banerjee, and Eric J Topol. 2022. Ai in health and medicine. *Nature medicine*, 28(1):31–38. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992. Alexey Romanov and Chaitanya Shivade. 2018. Lessons from natural language inference in the clinical domain. In *Proceedings of the 2018 Conference* on Empirical Methods in Natural Language Processing, pages 1586–1596. Jon Roozenbeek, Claudia R. Schneider, Sarah Dryhurst, John Kerr, Alexandra L. J. Freeman, Gabriel Recchia, Anne Marthe van der Bles, and Sander van der Linden. 2020. Susceptibility to misinformation about covid-19 around the world. 
*Royal Society Open Science*, 7(10):201199. Arkadiy Saakyan, Tuhin Chakrabarty, and Smaranda Muresan. 2021. COVID-fact: Fact extraction and verification of real-world claims on COVID-19 pandemic. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2116–2129, Online. Association for Computational Linguistics. Mobashir Sadat and Cornelia Caragea. 2022. Scinli: A corpus for natural language inference on scientific text. In *Proceedings of the 60th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 7399–7409. Mourad Sarrouti, Asma Ben Abacha, Yassine Mrabet, and Dina Demner-Fushman. 2021. Evidence-based fact-checking of health-related claims. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3499–3512, Punta Cana, Dominican Republic. Association for Computational Linguistics. Phillip Schneider, Tim Schopf, Juraj Vladika, Mikhail Galkin, Elena Simperl, and Florian Matthes. 2022. A decade of knowledge graphs in natural language processing: A survey. In *Proceedings of the 2nd* Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 601–614, Online only. Association for Computational Linguistics. Haeseung Seo, Aiping Xiong, and Dongwon Lee. 2019. Trust it or not: Effects of machine-learning warnings in helping individuals mitigate misinformation. In Proceedings of the 10th ACM Conference on Web Science, WebSci '19, page 265–274, New York, NY, USA. Association for Computing Machinery. Gautam Kishore Shahi and Durgesh Nandini. 2020. FakeCovid- A Multilingual Cross-domain Fact Check News Dataset for COVID-19. ICWSM. Chris Stahlhut. 2021. *Interactive Evidence Detection*. Ph.D. thesis, Technische Universität, Darmstadt. James Thorne and Andreas Vlachos. 2018. Automated fact checking: Task formulations, methods and future directions. In *Proceedings of the 27th International Conference on Computational Linguistics*, pages 3346–3359, Santa Fe, New Mexico, USA. Association for Computational Linguistics. James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, and Arpit Mittal. 2018. The fact extraction and VERification (FEVER) shared task. In *Proceedings of the First Workshop on* Fact Extraction and VERification (FEVER), pages 1– 9, Brussels, Belgium. Association for Computational Linguistics. George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, et al. 2015. An overview of the bioasq large-scale biomedical semantic indexing and question answering competition. *BMC bioinformatics*, 16(1):1–28. Alexandra N Uma, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank, and Massimo Poesio. 2021. Learning from disagreement: A survey. *Journal of* Artificial Intelligence Research, 72:1385–1470. Andreas Vlachos and Sebastian Riedel. 2014. Fact checking: Task definition and dataset construction. In *Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science*, pages 18–22, Baltimore, MD, USA. Association for Computational Linguistics. Juraj Vladika and Florian Matthes. 2023. 
Sebis at semeval-2023 task 7: A joint system for natural language inference and evidence retrieval from clinical trial reports. In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), Toronto, Canada. Association for Computational Linguistics. David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. 2020. Fact or fiction: Verifying scientific claims. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 7534–7550, Online. Association for Computational Linguistics. David Wadden and Kyle Lo. 2021. Overview and insights from the SCIVER shared task on scientific claim verification. In Proceedings of the Second Workshop on Scholarly Document Processing, pages 124–129, Online. Association for Computational Linguistics. David Wadden, Kyle Lo, Bailey Kuehl, Arman Cohan, Iz Beltagy, Lucy Lu Wang, and Hannaneh Hajishirzi. 2022a. SciFact-open: Towards open-domain scientific claim verification. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4719–4734, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. David Wadden, Kyle Lo, Lucy Wang, Arman Cohan, Iz Beltagy, and Hannaneh Hajishirzi. 2022b. MultiVerS: Improving scientific claim verification with weak supervision and full-document context. In *Findings of the Association for Computational Linguistics:* NAACL 2022, pages 61–76, Seattle, United States. Association for Computational Linguistics. Xuan Wang, Yingjun Guan, Weili Liu, Aabhas Chauhan, Enyi Jiang, Qi Li, David Liem, Dibakar Sigdel, John Caufield, Peipei Ping, and Jiawei Han. 2020. EVIDENCEMINER: Textual evidence discovery for life sciences. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 56–62, Online. Association for Computational Linguistics. Nicolas Webersinke, Mathias Kraus, Julia Anna Bingler, and Markus Leippold. 2021. Climatebert: A pretrained language model for climate-related text. arXiv preprint arXiv:2110.12010. Chih-Hsuan Wei, Alexis Allot, Robert Leaman, and Zhiyong Lu. 2019. PubTator central: automated concept annotation for biomedical full text articles. Nucleic Acids Research, 47(W1):W587–W593. Jevin D. West and Carl T. Bergstrom. 2021. Misinformation in and about science. *Proceedings of the* National Academy of Sciences, 118. Dustin Wright, Jiaxin Pei, David Jurgens, and Isabelle Augenstein. 2022a. Modeling information change in science communication with semantically matched paraphrases. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1783–1807, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Dustin Wright, David Wadden, Kyle Lo, Bailey Kuehl, Arman Cohan, Isabelle Augenstein, and Lucy Wang. 2022b. Generating scientific claims for zero-shot scientific fact checking. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2448– 2460, Dublin, Ireland. Association for Computational Linguistics. Amelie Wührl and Roman Klinger. 2021. Claim detection in biomedical Twitter posts. In Proceedings of the 20th Workshop on Biomedical Language Processing, pages 131–142, Online. Association for Computational Linguistics. Amelie Wührl and Roman Klinger. 2022. Entity-based claim representation improves fact-checking of medical content in tweets. 
In *Proceedings of the 9th Workshop on Argument Mining*, pages 187–198, Online and in Gyeongju, Republic of Korea. International Conference on Computational Linguistics. Amélie Yavchitz, Isabelle Boutron, Aida Bafeta, Ibrahim Marroun, Pierre Charles, Jean Mantz, and Philippe Ravaud. 2012. Misrepresentation of randomized controlled trials in press releases and news coverage: A cohort study. *PLOS Medicine*, 9(9):1– 11. Donghan Yu, Chenguang Zhu, Yuwei Fang, Wenhao Yu, Shuohang Wang, Yichong Xu, Xiang Ren, Yiming Yang, and Michael Zeng. 2021. Kg-fid: Infusing knowledge graph in fusion-in-decoder for open-domain question answering. arXiv preprint arXiv:2110.04330. Shi Yuan and Bei Yu. 2019. Hclaime: A tool for identifying health claims in health news headlines. Inf. Process. Manage., 56(4):1220–1233. Xia Zeng, Amani S. Abumansour, and Arkaitz Zubiaga. 2021. Automated fact-checking: A survey. *Language and Linguistics Compass*, 15(10):e12438. Zhiwei Zhang, Jiyi Li, Fumiyo Fukumoto, and Yanming Ye. 2021. Abstract, rationale, stance: A joint model for scientific claim verification. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 3580–3586. Xinyi Zhou and Reza Zafarani. 2020. A survey of fake news: Fundamental theories, detection methods, and opportunities. *ACM Comput. Surv.*, 53(5). Chaoyuan Zuo, Kritik Mathur, Dhruv Kela, Noushin Faramarzi, and Ritwik Banerjee. 2022. Beyond belief: a cross-genre study on perception and validation of health information online. *International Journal* of Data Science and Analytics, 13:1–16. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 8 ✓ A2. Did you discuss any potential risks of your work? 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✗ **Did You Run Computational Experiments?** Left blank. C1. 
Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No response. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No response. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No response. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? No response. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
song-etal-2023-uni
Uni-Encoder: A Fast and Accurate Response Selection Paradigm for Generation-Based Dialogue Systems
https://aclanthology.org/2023.findings-acl.388
Sample-and-rank is a key decoding strategy for modern generation-based dialogue systems. It helps achieve diverse and high-quality responses by selecting an answer from a small pool of generated candidates. The current state-of-the-art ranking methods mainly use an encoding paradigm called Cross-Encoder, which separately encodes each context-candidate pair and ranks the candidates according to their fitness scores. However, Cross-Encoder repeatedly encodes the same lengthy context for each candidate, resulting in high computational costs. Poly-Encoder addresses the above problems by reducing the interaction between context and candidates, but with a price of performance drop. In this work, we develop a new paradigm called Uni-Encoder, that keeps the full attention over each pair as in Cross-Encoder while only encoding the context once, as in Poly-Encoder. Uni-Encoder encodes all the candidates with the context in one forward pass. We use the same positional embedding for all candidates to ensure they are treated equally and design a new attention mechanism to avoid confusion. Our Uni-Encoder can simulate other ranking paradigms using different attention and response concatenation methods. Extensive experiments show that our proposed paradigm achieves new state-of-the-art results on four benchmark datasets with high computational efficiency. For instance, it improves R10@1 by 2.9{\%} with an approximately 4X faster inference speed on the Ubuntu V2 dataset.
## Uni-Encoder: A Fast And Accurate Response Selection Paradigm For Generation-Based Dialogue Systems

Chiyu Song∗ 1,2, Hongliang He∗ 1,2, Haofei Yu3, Pengfei Fang4,2, Leyang Cui5**, Zhenzhong Lan**† 2 1Zhejiang University, 2School of Engineering, Westlake University 3Language Technologies Institute, Carnegie Mellon University, 4Southeast University, 5Tencent AI Lab {songchiyu, hehongliang, lanzhenzhong}@westlake.edu.cn [email protected], [email protected], [email protected]

## Abstract

Sample-and-rank is a key decoding strategy for modern generation-based dialogue systems. It helps achieve diverse and high-quality responses by selecting an answer from a small pool of generated candidates. The current state-of-the-art ranking methods mainly use an encoding paradigm called *Cross-Encoder*, which separately encodes each context-candidate pair and ranks the candidates according to their fitness scores. However, *Cross-Encoder* repeatedly encodes the same lengthy context for each candidate, resulting in high computational costs. *Poly-Encoder* addresses the above problems by reducing the interaction between context and candidates, but at the price of a performance drop. In this work, we develop a new paradigm called *Uni-Encoder*1, which keeps the full attention over each pair as in *Cross-Encoder* while only encoding the context once, as in *Poly-Encoder*. *Uni-Encoder* encodes all the candidates with the context in one forward pass. We use the same positional embedding for all candidates to ensure they are treated equally and design a new attention mechanism to avoid confusion. Our *Uni-Encoder* can simulate other ranking paradigms using different attention and response concatenation methods. Extensive experiments show that our proposed paradigm achieves new state-of-the-art results on four benchmark datasets with high computational efficiency. For instance, it improves R10@1 by 2.9% with an approximately 4× faster inference speed on the Ubuntu V2 dataset.

## 1 Introduction

One of the major milestones of artificial intelligence is the ability to converse freely in natural language. Researchers in this field are working on building open-domain dialogue systems capable of handling a variety of topics.

| Paradigm | Context-Response Full Attention | Avoidance of Context Recomputation | Performance |
|--------------------|---------------------------------|------------------------------------|-------------|
| Bi-Encoder | ✗ | ✓ | 80.6% |
| Cross-Encoder | ✓ | ✗ | 82.8% |
| Poly-Encoder | ✗ | ✓ | 80.9% |
| Uni-Encoder (Ours) | ✓ | ✓ | 85.9% |

Table 1: *Uni-Encoder* maintains the full attention between context and candidates while only encoding the lengthy context once. It is both fast and accurate compared with existing paradigms. Performance is the R@1 value evaluated on the Ubuntu Dialogue Corpus V2, and we refer to Humeau et al. (2019) for the results of Bi-, *Cross-*, and *Poly-Encoder*. The pre-trained BERT weights are all from Devlin et al. (2019).

Depending on the implementation, these works can be categorized as retrieval-based (Lowe et al., 2015; Tao et al., 2019; Yuan et al., 2019) or **generation-based** (Vinyals and Le, 2015; Serban et al., 2016). Retrieval-based systems carry out conversations by selecting an optimal response from a **large** candidate pool, which gives them an advantage in producing fluent and relevant responses. However, retrieval-based systems may be limited by the capacity of the pre-defined candidate pool.
Generation-based systems generate reasonable responses with a sequence-to-sequence model. Previous work shows that generation-based systems tend to give repetitive or contradictory responses (Nie et al., 2021; Cui et al., 2022). To combine the advantages of both methods, Adiwardana et al. (2020) proposed a "sample-and-rank" method, which first samples a **small** pool of candidate responses from the generator and then reranks the candidates with a ranker to get the best response. Because a ranking model can view the whole responses while a pure generation method can only generate answers based on partial information, the sample-and-rank method often performs better than pure sampling. Under the sample-and-rank framework, researchers have greater freedom to explore different ranking methods (Zhang et al., 2020; Roller et al., 2021; Bao et al., 2021; Thoppilan et al., 2022). They can encode candidates on-the-fly and encode them with the context. Cross-Encoder (Urbanek et al., 2019) is one such paradigm. It jointly encodes the historical context with every candidate using full attention and ranks them according to the context-candidate matching scores. Despite its superior performance, *Cross-Encoder* repeatedly encodes the context for each candidate. Since contexts are often much longer than responses, the computation is slow for practical use. *Poly-Encoder* (Humeau et al., 2019; Roller et al., 2021) mitigates the above problem by reducing the full attention at every layer of Transformer (Vaswani et al., 2017) to global attention at the last layer. However, later work (Gu et al., 2020, 2021; Han et al., 2021) confirms the importance of full attention and still uses *Cross-Encoder* as the base building block for response selection.

One interesting research question is whether there is a way to realize full attention between each context-response pair without repeatedly encoding the same long context. To answer the above question, we propose a new paradigm called *Uni-Encoder*, as presented in Table 1. In this new paradigm, all the candidates are concatenated with the context and jointly input to the same encoder in one forward pass. In the end, a softmax classifier is used to decide which candidate should be selected. Concatenating candidates and context, however, raises two problems. First, it is challenging to learn a good set of representations for candidates as they have different positional embeddings. Second, the averaging effect of the attention mechanism makes it difficult to distinguish the various candidates. To address the above two problems, we propose two modifications to the traditional encoder networks. First, we use the same set of positional embeddings for all candidates so that they are all treated equally, because each is a possible continuation of the given context. Second, we design a novel attention mechanism for our new paradigm that only allows context-candidate attention and forbids the candidates to attend to each other directly. By changing these two designs, *Uni-Encoder* can simulate the effects of any other paradigm (Cross-, Bi-, or *Poly-Encoder*) by changing how context and candidates attend to each other and how many candidates are processed in a single forward pass.

We evaluate our new paradigm on four benchmark datasets: PersonaChat (Zhang et al., 2018), Ubuntu Dialogue Corpus V1 (Lowe et al., 2015), Ubuntu Dialogue Corpus V2 (Lowe et al., 2017), and Douban Conversation Corpus (Wu et al., 2017).
Empirical results show that our method achieves state-of-the-art performance together with high computational efficiency. For instance, our Uni-Encoder has an absolute 2.9% R@1 improvement over the state-of-the-art *Cross-Encoder* on the widely used Ubuntu Dialogue Corpus V2 dataset. It also has a lower computational cost than *Cross-Encoder* and is approximately four times faster at inference time. Our source code and model checkpoints will be released for reproducibility and future research2.

2https://github.com/dll-wu/Uni-Encoder

## 2 Related Work

Neural approaches for open-domain dialogue have seen significant recent progress. Due to this progress, generation-based dialogue systems have started outperforming retrieval-based methods (Roller et al., 2021) as they can handle a wider variety of topics. Adiwardana et al. (2020) show that sample-and-rank provides much more diverse and content-rich responses than beam search. An additional ranking step allows responses to have full attention/view over themselves and the context, while pure generation methods only have left attention/view. This different view is why an additional ranking process is needed. In this study, we particularly focus on improving this ranking process.

Because scoring candidates given a context is a classical problem in machine learning, numerous methods (Urbanek et al., 2019; Reimers and Gurevych, 2019; Adiwardana et al., 2020) have been developed over the years. We will only discuss a few closely related works. Please refer to Humeau et al. (2019) for a more detailed discussion.

Bi-Encoder (Reimers and Gurevych, 2019) encodes the context and the candidate separately, then scores the relatedness between their representations. Due to its simplicity and efficiency, *Bi-Encoder* often serves as a baseline method when a new dataset is introduced (Lowe et al., 2015; Dinan et al., 2019). One significant advantage of the Bi-Encoder is that its response representations can be pre-computed as they are context-independent. However, in modern generation-based dialogue systems, this advantage becomes a weakness. It is not necessary to pre-encode responses that are generated on-the-fly. And without context-response interaction, the ranking performance is severely weakened. *Poly-Encoder* (Humeau et al., 2019) improves the accuracy of the *Bi-Encoder* by adding a learned self-attention layer on top of the context and candidate features extracted from both encoders.

Nevertheless, *Cross-Encoder* is preferable for generation-based dialogue systems in practice due to its high effectiveness (Urbanek et al., 2019; Humeau et al., 2019). Instead of encoding each context and response pair separately, it encodes them jointly using a full attention mechanism. Recent improvements in response selection mostly build on *Cross-Encoder*. For example, Li et al. (2021) adapt contrastive learning to *Cross-Encoder* with a specially designed strategy and obtain a significant performance gain. Lu et al. (2020) and Gu et al. (2020) add speaker change information to the inputs, showing a large improvement in the response selection task. Whang et al. (2020) and Han et al. (2021) further post-train the encoder on domain-specific data and see additional improvements. To further utilize target data, Xu et al. (2021) and Whang et al. (2021) investigate additional self-supervised learning tasks. These tasks served as additional objectives jointly trained with the response selection task.
Unlike all the above improvements, our improvement is on the encoder itself and can incorporate these additional tricks.

## 3 Methods

This section elaborates on the problem formulation of dialogue response selection, compares different paradigms for modeling this task, and describes our implementation of *Uni-Encoder*.

## 3.1 Problem Formulation

Re-ranking methods formulate multi-turn response selection as a set of binary classification tasks. In practice, given a dialogue context C = {u1, u2, ..., uN}, where uk, k = 1, ..., N denotes a single utterance from either speaker, the response selection task is required to choose an optimal response from a candidate pool, denoted by P = {r1, r2, ..., rM}. Every candidate ri is respectively paired with the context C, denoted as f(C, ri). The encoding function f yields a representation that later undergoes non-linear transformations to predict a value of 1 for a proper match and 0 otherwise. However, this binary classification view is not an efficient way of training the encoder because we need to encode the context C once for each pair of context-response comparisons. Instead, Humeau et al. (2019) leveraged in-batch negative training and viewed this task as a multi-choice selection problem. This formulation optimizes, e.g., softmax(f(C)·f(r1), ..., f(C)·f(rM)) against a ground-truth label that is one-hot on the index of the sole positive candidate.

## 3.2 Task Modeling Paradigms

In the following, we reuse the same set of notations as in Section 3.1. Accordingly, Bi-, Poly-, *Cross-*, and Uni-Encoder model the response selection task as follows.

For *Bi-Encoder*, selecting the proper response r is picking the candidate that has the highest dot product with the context:

$$f(C)\cdot f(r_{1}),...,f(C)\cdot f(r_{M})\quad\quad(1)$$

where the response encoding is independent of the context encoding. Humeau et al. (2019) show that, under the multi-choice view, the larger M is, the better the results are.

Poly-Encoder is a variant of *Bi-Encoder*. The only difference is that it adds a light-weight attention layer:

$$g(f(C),f(r_{1})),...,g(f(C),f(r_{M}))\quad\quad(2)$$

where g is the light-weight attention component over the context and response representations generated by encoder f.

Cross-Encoder has full attention between the context and responses. However, it has difficulty in taking the multi-choice view because it needs to recompute the context for each candidate, which can result in a memory explosion. That is, for *Cross-Encoder*, each context and response pair needs to go through the network f together:

$$f(C,r_{1}),...,f(C,r_{M})\quad\quad(3)$$

In this way, for a batch containing K context-response pairs, the heavy encoder f needs to encode K^2 times, which is both computationally and memory intensive.

![3_image_0.png](3_image_0.png)
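To make the computational asymmetry behind formulas (1)–(3) concrete, the following toy sketch (our illustration rather than the authors' code; the `encode` function, the scoring head, and all tensor sizes are placeholders) scores M candidates under the Bi-Encoder and Cross-Encoder recipes. The structural point is that the dot-product formulation touches the long context once, whereas the joint formulation re-encodes it once per candidate.

```python
# Toy contrast of Eq. (1) vs. Eq. (3); `encode` stands in for any encoder f.
import torch

torch.manual_seed(0)
VOCAB, DIM = 1000, 64
embed = torch.nn.Embedding(VOCAB, DIM)
score_head = torch.nn.Linear(DIM, 1)   # stand-in for the non-linear matching head

def encode(token_ids: torch.Tensor) -> torch.Tensor:
    """Placeholder encoder f: mean-pool token embeddings into one vector."""
    return embed(token_ids).mean(dim=0)

context = torch.randint(VOCAB, (120,))                         # one long context C
candidates = [torch.randint(VOCAB, (15,)) for _ in range(8)]   # M = 8 responses

# Eq. (1), Bi-Encoder: C is encoded once; each r_i is encoded independently
# and ranked by its dot product with the context vector.
c_vec = encode(context)
bi_scores = torch.stack([c_vec @ encode(r) for r in candidates])

# Eq. (3), Cross-Encoder: every candidate is concatenated with C and the joint
# sequence is re-encoded, so the long context is processed M separate times.
cross_scores = torch.stack([
    score_head(encode(torch.cat([context, r]))).squeeze() for r in candidates
])

# Multi-choice training view: softmax over candidate scores, trained against a
# one-hot label marking the single positive response.
print(torch.softmax(bi_scores, dim=0))
print(torch.softmax(cross_scores, dim=0))
```

With a batch of K contexts, the first recipe encodes K contexts in total, while the second re-encodes the context for every candidate (K^2 joint passes under in-batch negatives); this is the overhead the paradigm introduced next is designed to remove.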
Then the representation of each response is aggregated, and the most confident candidate is selected after feeding them into a softmax function: $$s o f t m a x(f(C,r_{1},...,r_{M}))$$ Comparing formulas 1 to 4, we can see that *BiEncoder* has no interaction between context and responses in the encoding process; *Poly-Encoder* allows partial interaction through a light-weight attention component; both *Cross-* and *Uni-Encoder* allow full interaction. Meanwhile, *Uni-Encoder* avoids the drawback of *Cross-Encoder* that repeatedly encodes the same lengthy context. Additionally, it establishes an exchange of information between candidates during the encoding process. ## 3.3 Inputs To The Ranking Models: Same Positional Embedding For All Responses We take the pre-trained BERT (Devlin et al., 2019) as our encoder. As illustrated in Fig. 1, the inputs to the BERT encoder consist of three components: the token embeddings, the segment embeddings help to distinguish between context and candidates, and the positional embeddings. In our setting, the positional embeddings for all the responses (E6 to E8 in Fig.1) are repeated, treating each candidate as a coequal because they are all possible continuations of the context. We also have a separate speaker token for each utterance in the context to tell the model who is speaking. A [CLS] and a [SEP] token are placed before and after each candidate separately. ## 3.4 Attention Mechanisms: An Unified Ranking Framework As Shown in Fig. 2, we design a new attention mechanism called Arrow Attention for *UniEncoder*. Arrow Attention allows full attention between context and candidates while forbidding candidates from directly attending to each other. It realizes parallel processing of multiple candidates while only needing to process the context once. Fig. 2 also shows that *Uni-Encoder* can simulate other popular ranking frameworks by using different attention mechanisms. Specifically, (a) our work is equivalent to *Bi-Encoder* if the Diagonal Attention is used instead, where the context and the candidates do not attend to each other. (b) The Light-Arrow Attention corresponds to *PolyEncoder*, where the context and candidates interact only at the last encoder layer through some additional light-weight attention. And the response representations are only available at the global feature level, e.g., the [CLS] head or average token embedding. (c) The Arrow attention is tailored for Uni-Encoder, where the context and the candidates have full attention, but the candidates do not attend to each other. (d) To test the extreme, we also have Square Attention, where all the context and responses attend to each other. However, it brings confusion among candidates as they share the same set of positional embeddings. The position confusion problem is addressed if it only processes one candidate at a time, which is equivalent to *CrossEncoder* by doing so. ## 4 Experiments 4.1 Experimental Setup We initialize our implementation with the BERT (Devlin et al., 2019) checkpoint provided by the ![4_image_0.png](4_image_0.png) Huggingface package3. We also test post-training (Whang et al., 2021; Han et al., 2021) on top of pre-trained BERT when the checkpoints are available. The post-trained checkpoints are provided by Han et al. (2021). As introduced in Section 2, the post-training strategy is a common technique to adapt the general pre-trained knowledge to the target domain. 
In practice, post-training continues the model's pre-training on domain-specific texts before fine-tuning it on downstream tasks to attain better performance. All the experiments are run on six NVIDIA A100-SXM4-40GB GPUs with CUDA 11.1. We use the Noam scheduler and the Adam optimizer with β1 = 0.9, β2 = 0.98, and weight decay = 0.01. For experiments on the Ubuntu Corpus V2, we use a peak lr of 2e-4. As we want each dataset to reach the maximum batch size in training, their learning rates are also adjusted accordingly in Section 4.4. As for the loss function, we add a masked language modeling (MLM) loss on top of the classification loss with the same weight coefficients. We use the average token embedding from each candidate as the input to the softmax function. Models are all run until they converge, as measured on a validation set.

3https://huggingface.co/models

## 4.2 Dataset And Evaluation Metrics

In this section, we evaluate the proposed *Uni-Encoder* across four standard datasets, i.e., PersonaChat (Zhang et al., 2018), Ubuntu Dialogue Corpus V1 (Lowe et al., 2015), Ubuntu Dialogue Corpus V2 (Lowe et al., 2017), and Douban Conversation Corpus (Wu et al., 2017).

PersonaChat (Zhang et al., 2018) is a crowdsourced dataset with two-speaker talks conditioned on their given persona, containing short descriptions of characters they will imitate in the dialogue.

Ubuntu Dialogue Corpus V1 (Lowe et al., 2015) contains 1 million conversations about technical support for the Ubuntu system. We use the clean version proposed by Xu et al. (2017), which has numbers, URLs, and system paths replaced by special placeholders.

Ubuntu Dialogue Corpus V2 (Lowe et al., 2017) has several updates and bug fixes compared to V1. The major one is that the training, validation, and test sets are split into different periods. We choose this dataset to conduct a detailed study of *Uni-Encoder* as it is the only dataset that *Poly-Encoder* (Humeau et al., 2019) uses and has complete train/dev/test sets published.

Douban Conversation Corpus (Wu et al., 2017) consists of web-crawled dyadic dialogs from a Chinese social networking website called Douban. Topics in this dataset are open-domain, and all the conversations are longer than two turns. Unlike other datasets where each context only has one proper response, the test set of Douban provides multiple proper responses.

The statistics of the four benchmark datasets are shown in Table 2. They vary greatly in volume, language, and topic.

| Dataset | | Train | Valid | Test |
|-------------|-------------------|---------|---------|---------|
| PersonaChat | Turns | 65,719 | 7,801 | 7,512 |
| | Positive:Negative | 1:19 | 1:19 | 1:19 |
| Ubuntu V1 | Pairs | 1M | 0.5M | 0.5M |
| | Positive:Negative | 1:1 | 1:9 | 1:9 |
| Ubuntu V2 | Pairs | 1M | 195.6k | 189.2k |
| | Positive:Negative | 1:1 | 1:9 | 1:9 |
| Douban | Pairs | 1M | 50k | 6,670 |
| | Positive:Negative | 1:1 | 1:1 | 1.2:8.8 |

Table 2: Statistics of four benchmark datasets.

During training, we recycle the other labels in the same batch as negative samples instead of using the pre-defined negative candidates in each dataset. Several metrics are used to evaluate our model following previous works. We use Rc@k to evaluate the model performance across the four datasets. The mean reciprocal rank (MRR) metric is additionally calculated for the PersonaChat and Douban Conversation Corpus datasets.
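These ranking metrics are straightforward to compute from one score vector per context. The snippet below is a small illustrative implementation of ours (the inputs are hypothetical) for the single-positive case of Rn@k and MRR; the P@1 and MAP values used for Douban, discussed next, follow the same pattern with multiple positive labels.

```python
# Small illustrative implementation (not the authors' evaluation script) of the
# ranking metrics reported here. `scores` and `labels` are hypothetical inputs:
# one score per candidate and a label marking each correct response.
from typing import List

def recall_at_k(scores: List[float], labels: List[int], k: int) -> float:
    """Rn@k for one context: 1.0 if any correct candidate is ranked in the top k."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return float(any(labels[i] == 1 for i in order[:k]))

def mrr(scores: List[float], labels: List[int]) -> float:
    """Reciprocal rank of the highest-ranked correct candidate."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            return 1.0 / rank
    return 0.0

# Toy example: 10 candidates, the one at index 2 is the ground-truth response.
scores = [0.1, 0.4, 0.9, 0.2, 0.05, 0.3, 0.15, 0.6, 0.25, 0.0]
labels = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
print(recall_at_k(scores, labels, k=1))  # 1.0 -> counts toward R10@1
print(mrr(scores, labels))               # 1.0 (correct candidate ranked first)
```

Corpus-level figures such as R10@1 are these per-context values averaged over all test contexts.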
In the Douban Conversation Corpus, we also report the P@1 and mean average precision (MAP) values because it contains multiple positive candidates for a given context. It is also noted that the proportion of positive and negative samples in the validation set is significantly different from that of the test set in the Douban Conversation Corpus. To alleviate this discrepancy, we also utilize the in-batch negative labels in the validation stage to determine an appropriate checkpoint for inference.

## 4.3 Validating Our Design Choices

In this section, we validate our two design choices through a set of controlled experiments. As described in Sections 3.3 and 3.4, we are able to simulate different paradigms by replacing the attention mechanism in *Uni-Encoder* with some minor modifications. We thus conduct experiments in this unified framework to control all other variables and make the fairest comparisons. Note that the *Cross-Encoder* (iv) has to repeatedly encode the same lengthy context with every candidate, resulting in high memory usage and a smaller batch size (5 in our experiments). The experimental results are shown in Table 3.

## Why Repeating Position IDs for Responses?

Let us first compare the results in Row (i) vs. Row (ii), where the only difference is that Row (i) uses the same set of position IDs for all responses while Row (ii) has unique position IDs. Uni-Encoder with repeated position IDs has significantly better results. This observation confirms our hypothesis that the responses should be treated equally.

## Why Using Full Attention Between Context and Responses?

If we compare the results of Row (i) with Row (v) and Row (vi), where the main differences lie in how much attention we have between context and responses, we can see that full attention significantly boosts performance. In fact, the more interaction (attention) they have, the better the results. Specifically, Poly-Encoder in Row (vi) has more interaction than Bi-Encoder in Row (v), and Uni-Encoder in Row (i) has more interaction than Poly-Encoder. These comparisons validate our design choice of full attention between context and responses.

## Why Avoiding Attention Among Responses?

Comparing results in Row (i) and Row (iii), we can see that if we allow attention among responses, the performance drops significantly. This is easy to understand: if we allow attention among responses, it becomes difficult for the ranker to distinguish them.

## Why Avoiding Recomputing the Context?

It is easy to see that if we recompute the lengthy context, the computation time increases dramatically, which we measure quantitatively in Section 4.5. Here we show another dimension of the consequence of recomputing the context. As shown in Row (iv), the repetitive computation of the context stops the *Cross-Encoder* from having a large batch size because of the memory constraint. However, a large enough batch size, and hence enough negative samples, is important for a multi-choice setting, as examined in Humeau et al. (2019). As a result, the performance of *Cross-Encoder* (iv) is only on par with *Poly-Encoder* (vi).

## 4.4 Comparison With State-Of-The-Art Methods

We compare *Uni-Encoder* with the existing state-of-the-art methods in Table 4. Note that, different from the comparison in Table 3, the methods in Table 4 are not entirely comparable as they rely on different additional training tricks, and these tricks often have a high impact on the performance of these methods.
The only message we want to deliver here is that *Uni-Encoder* can achieve state-of-the-art performance even without some of these complex training tricks. For Ubuntu Corpus V1 and the Douban Conversation Corpus, we also employ the advanced post-training model from Han et al. (2021) and list the results separately with ♣, as it significantly affects the results and not all the methods use it.

| | Paradigm | Setup | Bs per GPU | R10@1 | R10@2 | R10@5 | MRR |
|-------|--------------------------------------|--------------------------------------|------------|-------|-------|-------|-------|
| (i) | Uni-Encoder | Arrow Attn w/ Res Concat | 8 | 0.859 | 0.938 | 0.990 | 0.915 |
| (ii) | Uni-Encoder w/o Repeated Position ID | Arrow Attn w/ Res Concat | 8 | 0.837 | 0.933 | 0.992 | 0.903 |
| (iii) | Concat-Cross-Encoder | Square Attn w/ Res Concat | 8 | 0.826 | 0.916 | 0.980 | 0.892 |
| (iv) | Cross-Encoder | Square Attn w/o Res Concat | 5 | 0.844 | 0.930 | 0.987 | 0.905 |
| (v) | Bi-Encoder | Diagonal Attn w/ Res Concat | 8 | 0.835 | 0.925 | 0.987 | 0.899 |
| (vi) | Poly-Encoder | Light-Arrow Attn (360) w/ Res Concat | 8 | 0.844 | 0.929 | 0.989 | 0.906 |

Table 3: Controlled comparison of the ranking paradigms on the Ubuntu Corpus V2.

As shown in Table 4, *Uni-Encoder* achieves the best overall performance across all four benchmarks. For example, it improves the R@1 value on the PersonaChat, Ubuntu V1, and Ubuntu V2 datasets by 2.6%, 0.5%, and 2.9%, respectively. However, *Uni-Encoder* only achieves the best results on the Douban Corpus on four of the six metrics. We conjecture that the discrepancy in the number of positive examples between the training set and test set is the reason for its poorer performance. In *Uni-Encoder*, we have chosen the multi-choice setting, assuming there is only one positive response. This setting allows us to leverage response concatenation and in-batch negative training to separate the positive sample from the negative examples. However, the multiple positive candidates in the Douban Corpus at inference time (but not in training) break this assumption and may confuse the network. Our future study will quantify the impact of this assumption.

Uni-Encoder also outperforms some of the more complex methods that rely on expensive training tricks, such as Liu et al. (2021), who adapted a BiGRU to capture conversation-level representations, and Su et al. (2021), who leveraged hierarchical curriculum learning. These approaches typically yield better outcomes, but at the expense of increased training budgets. In contrast, *Uni-Encoder* only retains the MLM loss from pre-training and adds two extra tokens to distinguish between different speakers.

![6_image_0.png](6_image_0.png)

## 4.5 Lower Computational Cost

In addition to the accuracy gain, we also see that Uni-Encoder is computationally efficient compared to other paradigms. We test it on the Ubuntu V2 test set (189,200 contexts). The implementation of *Cross-* and *Poly-Encoder* follows the method proposed in Humeau et al. (2019). Despite the fact that candidate pools in generation-based dialogue systems are typically small, we are interested in understanding the performance of *Uni-Encoder* with enlarged pools. To this end, we vary the pool size from 10 and 20
To this end, we vary the pool size from 10 and 20 | Models | Ubuntu Corpus V2 | PersonaChat | | | | | | | | |----------------------------------------|----------------------------|---------------|--------|--------|--------|-------|-------|-------|-------| | R10@1 | R10@2 | R10@5 | R20@1 | MRR | | | | | | | BERT (Devlin et al., 2019) | 0.781 | 0.890 | 0.980 | 0.707 | 0.808 | | | | | | Poly-Encoder 360 (Humeau et al., 2019) | 0.809 | - | 0.981 | - | - | | | | | | SA-BERT (Gu et al., 2020) | 0.830 | 0.919 | 0.985 | - | - | | | | | | BERT-CRA (Gu et al., 2021) | - | - | - | 0.843 | 0.903 | | | | | | Uni-Encoder (Ours) | 0.859⋆ | 0.938⋆ | 0.990⋆ | 0.869⋆ | 0.922⋆ | | | | | | Ubuntu Corpus V1 | Douban Conversation Corpus | | | | | | | | | | R10@1 | R10@2 | R10@5 | MAP | MRR | P@1 | R10@1 | R10@2 | R10@5 | | | BERT (Devlin et al., 2019) | 0.808 | 0.897 | 0.975 | 0.591 | 0.633 | 0.454 | 0.280 | 0.470 | 0.828 | | SA-BERT (Gu et al., 2020) | 0.855 | 0.928 | 0.983 | 0.619 | 0.659 | 0.496 | 0.313 | 0.481 | 0.847 | | BERT-SL (Xu et al., 2021) | 0.884 | 0.946 | 0.990 | - | - | - | - | - | - | | BERT+FGC (Li et al., 2021) | 0.829 | 0.910 | 0.980 | 0.614 | 0.653 | 0.495 | 0.312 | 0.495 | 0.850 | | UMSBERT (Whang et al., 2021) | 0.843 | 0.920 | 0.982 | 0.597 | 0.639 | 0.466 | 0.285 | 0.471 | 0.829 | | MDFN (Liu et al., 2021) | 0.866 | 0.932 | 0.984 | 0.624 | 0.663 | 0.498 | 0.325 | 0.511 | 0.855 | | SA-BERT+HCL (Su et al., 2021) | 0.867 | 0.940 | 0.992 | 0.639 | 0.681 | 0.514 | 0.330 | 0.531 | 0.858 | | ♣UMSBERT + (Whang et al., 2021) | 0.875 | 0.942 | 0.988 | 0.625 | 0.664 | 0.499 | 0.318 | 0.482 | 0.858 | | ♣BERT-UMS+FGC (Li et al., 2021) | 0.886 | 0.948 | 0.990 | 0.627 | 0.670 | 0.500 | 0.326 | 0.512 | 0.869 | | ♣BERT-FP (Han et al., 2021) | 0.911 | 0.962 | 0.994 | 0.644 | 0.680 | 0.512 | 0.324 | 0.542 | 0.870 | | Uni-Encoder (Ours) | 0.886 | 0.946 | 0.989 | 0.622 | 0.662 | 0.481 | 0.303 | 0.514 | 0.852 | | ♣Uni-Enc+BERT-FP (Ours) | 0.916⋆ | 0.965⋆ | 0.994 | 0.648⋆ | 0.688⋆ | 0.518 | 0.327 | 0.557 | 0.865 | to 50 and 100 for each context by randomly selecting additional candidates from the corpus. We then conducted all speed tests on a single NVIDIA A100-SXM4-40GB with CUDA 11.1. The batch size for each paradigm was maximized as much as possible. The results are presented in Figure 2. *UniEncoder* demonstrates 4× faster inference speed compared to *Cross-Encoder* when the pool size is appropriate. As the pool size increases, the advantages of *Uni-Encoder* become more pronounced. Compared with Poly-Encoder, *Uni-Encoder* exhibits a similar trend, with slightly better overall efficiency. Furthermore, we have also deployed Uni-Encoder in a commercial psychotherapy chatbot to rank the responses generated by large language models (LLMs). It has shown to be even more advantageous in this real-world dialogue application, as it returns results with only one forward pass, thus reducing the latency caused by other factors such as data transfer. ## 4.6 Qualitative Analysis To further understand the performance gap between different paradigms, we take the model checkpoints from Section 4.3 to go through examples that these methods predict differently. Some of the studied cases are shown in Table 5 in Appendix. *UniEncoder* is found to have the most specific and diverse selections. In contrast, even though some results of the other paradigms are not logically problematic, they sometimes prefer more generic responses. 
We conjecture this difference results from the fact that *Uni-Encoder* compares and scores all the responses simultaneously. Candidates can still interact adequately with each other through their common attention to the context. With such an advantage, it would be easier to distinguish hard negatives from true positives. ## 5 Discussion This paper presents a new paradigm for the generation-based dialogue response selection task. Our proposed *Uni-Encoder* avoids re-computing the lengthy context in the current state-of-the-art Cross-Encoder method while maintaining the full context to candidate attention. Experimental results on four benchmark datasets show that our approach is both fast and accurate. As *Uni-Encoder* holds the potential to build a more effective and efficient ranking paradigm, our future research will explore its usage in broader applications, such as improving the reward model in the reinforcement learning from human feedback (RLHF) framework (Stiennon et al., 2020; Nakano et al., 2021; Ouyang et al., 2022). ## 6 Limitations One major limitation of *Uni-Encoder* is its suitability only for generation-based dialogue systems in which the number of responses is small. A twostage approach is necessary for retrieval-based systems: Context-independent encoding methods like Poly-Encoder first filter out a small set of candidates from the large pool, then *Uni-Encoder* can pick out the best response from the pre-filtered collection. Moreover, as discussed in Section 5, Uni-Encoder could be a good component of the RLHF approach. However, the increasing research of pure generation methods with alignments bakedin (Arora et al., 2022; Liu et al., 2023) may gradually replace the SFT+RL method. Consequently, Uni-Encoder will have a smaller and smaller impact in terms of application. Nevertheless, because Uni-Encoder unified all other ranking paradigms, we believe it remains helpful even as a theoretical framework. ## References Daniel Adiwardana, Minh-Thang Luong, David R So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. 2020. Towards a human-like open-domain chatbot. *ArXiv preprint*, abs/2001.09977. Kushal Arora, Kurt Shuster, Sainbayar Sukhbaatar, and Jason Weston. 2022. Director: Generator-classifiers for supervised language modeling. *ArXiv preprint*, abs/2206.07694. Siqi Bao, Huang He, Fan Wang, Hua Wu, Haifeng Wang, Wenquan Wu, Zhihua Wu, Zhen Guo, Hua Lu, Xinxian Huang, et al. 2021. Plato-xl: Exploring the large-scale pre-training of dialogue generation. ArXiv preprint, abs/2109.09519. Leyang Cui, Fandong Meng, Yijin Liu, Jie Zhou, and Yue Zhang. 2022. Towards robust online dialogue response generation. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, and Xiaodan Zhu. 2020. 
Speaker-aware BERT for multi-turn response selection in retrieval-based chatbots. In *CIKM '20: The* 29th ACM International Conference on Information and Knowledge Management, Virtual Event, Ireland, October 19-23, 2020, pages 2041–2044. ACM. Jia-Chen Gu, Hui Liu, Zhen-Hua Ling, Quan Liu, Zhigang Chen, and Xiaodan Zhu. 2021. Partner matters! an empirical study on fusing personas for personalized response selection in retrieval-based chatbots. ArXiv preprint, abs/2105.09050. Janghoon Han, Taesuk Hong, Byoungjae Kim, Youngjoong Ko, and Jungyun Seo. 2021. Finegrained post-training for improving retrieval-based dialogue systems. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 1549–1558, Online. Association for Computational Linguistics. Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring. *ArXiv* preprint, abs/1905.01969. Yuntao Li, Can Xu, Huang Hu, Lei Sha, Yan Zhang, and Daxin Jiang. 2021. Small changes make big differences: Improving multi-turn response selection\\in dialogue systems via fine-grained contrastive learning. *ArXiv preprint*, abs/2111.10154. H Liu, C Sferrazza, and P Abbeel. 2023. Chain of hindsight aligns language models with feedback. ArXiv preprint, abs/2302.02676. Longxiang Liu, Zhuosheng Zhang, Hai Zhao, Xi Zhou, and Xiang Zhou. 2021. Filling the gap of utteranceaware and speaker-aware representation for multiturn dialogue. In *Thirty-Fifth AAAI Conference* on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 13406–13414. AAAI Press. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285–294, Prague, Czech Republic. Association for Computational Linguistics. Ryan Lowe, Nissan Pow, Iulian Vlad Serban, Laurent Charlin, Chia-Wei Liu, and Joelle Pineau. 2017. Training end-to-end dialogue systems with the ubuntu dialogue corpus. *Dialogue & Discourse*, 8(1):31–65. Junyu Lu, Xiancong Ren, Yazhou Ren, Ao Liu, and Zenglin Xu. 2020. Improving contextual language models for response retrieval in multi-turn conversation. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 1805–1808. ACM. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021. Webgpt: Browser-assisted questionanswering with human feedback. *ArXiv preprint*, abs/2112.09332. Yixin Nie, Mary Williamson, Mohit Bansal, Douwe Kiela, and Jason Weston. 2021. I like fish, especially dolphins: Addressing contradictions in dialogue modeling. In *Proceedings of the 59th Annual Meeting of* the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Online. Association for Computational Linguistics. 
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. *arXiv preprint* arXiv:2203.02155. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Eric Michael Smith, Y-Lan Boureau, and Jason Weston. 2021. Recipes for building an open-domain chatbot. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 300–325, Online. Association for Computational Linguistics. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In *Proceedings of the Thirtieth AAAI Conference on Artificial* Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 3776–3784. AAAI Press. Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F. Christiano. 2020. Learning to summarize with human feedback. In *Advances* in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Yixuan Su, Deng Cai, Qingyu Zhou, Zibo Lin, Simon Baker, Yunbo Cao, Shuming Shi, Nigel Collier, and Yan Wang. 2021. Dialogue response selection with hierarchical curriculum learning. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1740–1751, Online. Association for Computational Linguistics. Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, and Rui Yan. 2019. Multirepresentation fusion network for multi-turn response selection in retrieval-based chatbots. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM 2019, Melbourne, VIC, Australia, February 11-15, 2019, pages 267– 275. ACM. Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language models for dialog applications. *ArXiv preprint*, abs/2201.08239. Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, and Jason Weston. 2019. Learning to speak and act in a fantasy text adventure game. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 673–683, Hong Kong, China. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. 
Oriol Vinyals and Quoc Le. 2015. A neural conversational model. *ArXiv preprint*, abs/1506.05869. Taesun Whang, Dongyub Lee, Chanhee Lee, Kisu Yang, Dongsuk Oh, and Heuiseok Lim. 2020. An effective domain adaptive post-training method for BERT in response selection. In *Interspeech 2020, 21st Annual* Conference of the International Speech Communication Association, Virtual Event, Shanghai, China, 25-29 October 2020, pages 1585–1589. ISCA. Taesun Whang, Dongyub Lee, Dongsuk Oh, Chanhee Lee, Kijong Han, Dong-hun Lee, and Saebyeok Lee. 2021. Do response selection models really know what's next? utterance manipulation strategies for multi-turn response selection. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 14041–14049. AAAI Press. Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrievalbased chatbots. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 496–505, Vancouver, Canada. Association for Computational Linguistics. Ruijian Xu, Chongyang Tao, Daxin Jiang, Xueliang Zhao, Dongyan Zhao, and Rui Yan. 2021. Learning an effective context-response matching model with self-supervised tasks for retrieval-based dialogues. In *Thirty-Fifth AAAI Conference on Artificial* Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 14158–14166. AAAI Press. Zhen Xu, Bingquan Liu, Baoxun Wang, Chengjie Sun, and Xiaolong Wang. 2017. Incorporating loosestructured knowledge into conversation modeling via recall-gate lstm. In *2017 international joint conference on neural networks (IJCNN)*, pages 3506–3513. IEEE. Chunyuan Yuan, Wei Zhou, Mingming Li, Shangwen Lv, Fuqing Zhu, Jizhong Han, and Songlin Hu. 2019. Multi-hop selector network for multi-turn response selection in retrieval-based chatbots. In *Proceedings* of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 111–120, Hong Kong, China. Association for Computational Linguistics. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204–2213, Melbourne, Australia. Association for Computational Linguistics. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DIALOGPT : Large-scale generative pre-training for conversational response generation. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics:* System Demonstrations, pages 270–278, Online. Association for Computational Linguistics. 
## A Qualitative Analysis

| # | Context | Model selections |
|---|---------|------------------|
| 1 | A: have you looked in system settings >brightness and lock ? not power options B: yes, of course. I'm here because the standard ways are failing on two my precise installations | ⋆Uni: care to post a screenshot? / Cross: I was just wondering / Bi: sry / Poly: Ah, ok. |
| 2 | A: Is there a way to force apt-get to install a package even if apt is locked by another running apt? B: you don't want to do that wait till the updates are done then A: It will take to long. Its a do-release-upgrade | ⋆Uni/Cross: that will break things if you interupt it / Bi: Yes. I've done it several times / Poly: ok |
| 3 | A: Does anyone know if there is a crossfeed plugin for Rhythmbox in the repositories? B: why do want to feed rhythmbox? A: crossfeed is a type of signal processing that removes the separation inherent in stereo recordings it's for headphone listening | ⋆Uni/Cross/Poly: it's called crossfade ;) / Bi: could you explain more about what you want? |

Table 5: Cases studied from Ubuntu V2 for comparing selections of different paradigms where ⋆ denotes the correct choice.

![11_image_0.png](11_image_0.png)

## ACL 2023 Responsible NLP Checklist

## A For Every Submission:

✓ A1. Did you describe the limitations of your work? Please refer to Section 6

✓ A2. Did you discuss any potential risks of your work? Please refer to Section 6

✓ A3. Do the abstract and introduction summarize the paper's main claims? Please refer to Section 1 and the abstract

✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B ✓ **Did you use or create scientific artifacts?** Please refer to Section 4.2

✓ B1. Did you cite the creators of artifacts you used? Please refer to Section 4.2

✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Please refer to Section 4.2 The datasets we used are publicly available for research.

✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Please refer to Section 4.2 The datasets we used are publicly available for research.

✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Please refer to Section 4.2

✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Please refer to Section 4.2

✓ B6.
Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Please refer to Section 4.2 ## C ✓ **Did You Run Computational Experiments?** Please Refer To Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Please refer to Section 4.1 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Please refer to Section 4.1 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Please refer to Section 4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Please refer to Section 4.1 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
keleg-magdy-2023-dlama
DLAMA: A Framework for Curating Culturally Diverse Facts for Probing the Knowledge of Pretrained Language Models
https://aclanthology.org/2023.findings-acl.389
A few benchmarking datasets have been released to evaluate the factual knowledge of pretrained language models. These benchmarks (e.g., LAMA, and ParaRel) are mainly developed in English and later are translated to form new multilingual versions (e.g., mLAMA, and mParaRel). Results on these multilingual benchmarks suggest that using English prompts to recall the facts from multilingual models usually yields significantly better and more consistent performance than using non-English prompts. Our analysis shows that mLAMA is biased toward facts from Western countries, which might affect the fairness of probing models. We propose a new framework for curating factual triples from Wikidata that are culturally diverse. A new benchmark DLAMA-v1 is built of factual triples from three pairs of contrasting cultures having a total of 78,259 triples from 20 relation predicates. The three pairs comprise facts representing the (Arab and Western), (Asian and Western), and (South American and Western) countries respectively. Having a more balanced benchmark (DLAMA-v1) supports that mBERT performs better on Western facts than non-Western ones, while monolingual Arabic, English, and Korean models tend to perform better on their culturally proximate facts. Moreover, both monolingual and multilingual models tend to make a prediction that is culturally or geographically relevant to the correct label, even if the prediction is wrong.
# Dlama: A Framework For Curating Culturally Diverse Facts For Probing The Knowledge Of Pretrained Language Models Amr Keleg and Walid Magdy Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh [email protected], [email protected] ## Abstract A few benchmarking datasets have been released to evaluate the factual knowledge of pretrained language models. These benchmarks (e.g., LAMA, and ParaRel) are mainly developed in English and later are translated to form new multilingual versions (e.g., mLAMA, and mParaRel). Results on these multilingual benchmarks suggest that using English prompts to recall the facts from multilingual models usually yields significantly better and more consistent performance than using non-English prompts. Our analysis shows that mLAMA is biased toward facts from Western countries, which might affect the fairness of probing models. We propose a new framework for curating factual triples from Wikidata that are culturally diverse. A new benchmark **DLAMA-v1** is built of factual triples from three pairs of contrasting cultures having a total of 78,259 triples from 20 relation predicates. The three pairs comprise facts representing the (Arab and Western), (Asian and Western), and (South American and Western) countries respectively. Having a more balanced benchmark (DLAMA-v1) supports that mBERT performs better on Western facts than non-Western ones, while monolingual Arabic, English, and Korean models tend to perform better on their culturally proximate facts. Moreover, both monolingual and multilingual models tend to make a prediction that is culturally or geographically relevant to the correct label, even if the prediction is wrong. ## 1 Introduction Transfer learning paradigms such as fine-tuning, few-shot learning, and zero-shot learning rely on pretrained language models (PLMs), that require having large compilations of raw data (Devlin et al. 2019; Brown et al. 2020; Chowdhery et al. 2022; Scao et al. 2022). These PLMs showed some ability to model different linguistic phenomena (Goldberg 2019; Jawahar et al. 2019) in addition to memorizing facts related to real-world knowledge. While there is a drive to have multilingual models, English is still the language that is better supported due to the abundance of large English raw corpora, diverse datasets, and benchmarks. Moreover, monolingual non-English PLMs are still being pretrained for other high-resource languages. As a way to probe the non-English and multilingual PLMs, researchers tend to translate English benchmarks into other languages, which might degrade the quality of the samples especially if the translation is performed automatically. While translating English benchmarks saves the time and money needed to build new language-specific benchmarks, it might introduce unintended biases or artifacts into the benchmarks. LAMA (Petroni et al., 2019) and ParaRel (Elazar et al., 2021) are two benchmarks developed to quantify the factual knowledge of the English PLMs. They used a setup in which a language model is said to know a specific fact if it can predict the right object for a prompt in a fill-the-gap setup (e.g., For the prompt **"The capital of England is** [MASK]", the model needs to fill the masked gap with **"London"**). 
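As a minimal illustration of this fill-the-gap probing setup (not the evaluation code of LAMA or DLAMA), a masked language model can be queried as follows; the choice of `bert-base-cased` is an arbitrary assumption made for the example.

```python
from transformers import pipeline

# LAMA-style fill-the-gap probing: ask a masked LM to complete a factual prompt.
fill = pipeline("fill-mask", model="bert-base-cased")

prompt = f"The capital of England is {fill.tokenizer.mask_token}."
for prediction in fill(prompt, top_k=5):
    print(f"{prediction['token_str']:>10s}  {prediction['score']:.3f}")

# The model is credited with knowing the fact only if the top-ranked
# token matches the gold object ("London").
```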
Multilingual versions of these benchmarks, namely mLAMA (Kassner et al., 2021) and mParaRel (Fierro and Søgaard, 2022), were released to evaluate the performance of multilingual PLMs by translating LAMA and ParaRel into 53 and 46 languages respectively. The subjects and objects of the triples within these benchmarks were translated using their multilingual labels on Wikidata, while the templates were automatically translated from the English ones used in the original benchmarks. These templates transform triples into textual natural language prompts for probing the models. X-FACTR is another benchmark sharing the same setup, and is built for 23 different languages (Jiang et al., 2020). All three benchmarks sample factual triples in the form of (subject, relation predicate, object) from T-REx, a dump of Wikidata triples aligned to abstracts extracted from the English Wikipedia (Elsahar et al., 2018). The way T-REx is constructed might make it more representative of the facts related to Western cultures, which might introduce an unnoticed bias to the benchmarks based on it. We hypothesize that having a fair representation of the different cultures within a benchmark is vital for fairly probing models pretrained for multiple languages. The main contributions of our paper can be summarized as follows:

1. Investigating the impact of sampling mLAMA triples from T-REx on the distribution of the objects within the relation predicates.
2. Proposing DiverseLAMA (DLAMA), a methodology for curating culturally diverse facts for probing the factual knowledge of PLMs, and building 3 sets of facts from pairs of contrasting cultures representing the (Arab-West), (Asia-West), and (South America-West) cultures, to form DLAMA-v1 (the DLAMA-v1 benchmark and the codebase can be reached through: https://github.com/AMR-KELEG/DLAMA).
3. Showing the impact of having a less skewed benchmark DLAMA-v1 on the performance of mBERT and monolingual Arabic, English, Korean, and Spanish BERT models.
4. Demonstrating the importance of having contrasting sets of facts in diagnosing the behavior of the PLMs for different prompts.

## 2 Related Work

Petroni et al. (2019) investigated the possibility of using PLMs as potential sources of knowledge, which can later substitute manually curated knowledge graphs. To this end, they created LAMA (LAnguage Model Analysis), a dataset of 34,000 relation triples representing facts from 41 different Wikidata relation predicates. These facts are extracted from a larger dataset called T-REx that contains 11 million relation triples, acquired from a large Wikidata dump of triples, that were automatically aligned to English Wikipedia abstracts (Elsahar et al., 2018). Manual English templates were written to transform the triples into prompts to probe the model's factual knowledge. The triples were limited to the ones whose objects are tokenized into a single subtoken. Kassner et al. (2021) constructed a multilingual version of LAMA (mLAMA) covering 53 different languages. They handled the limitation of using single-subtoken objects by computing the probability of a multi-subtoken object as the geometric mean of the subtokens' probabilities. They concluded that the performance of mBERT when probed with prompts written in 32 languages is significantly lower than mBERT's performance when probed with English prompts. Moreover, they observed insignificant performance improvement for German, Hindi, and Japanese when their corresponding templates were manually corrected. Similarly, Jiang et al.
(2020) created X-FACTR by sampling relation triples from T-REx for 46 different Wikidata predicates. The multilingual Wikidata labels were used to translate the subjects and objects of the triples. They compared multiple decoding methods. Moreover, they employed different templates to generate prompts having the correct number/gender agreement with the subjects of the triples. English prompts still outperformed prompts written in 22 other languages. ParaRel and its multilingual version mParaRel are benchmarks created by sampling triples from T-REx for 38 relation predicates (Elazar et al. 2021; Fierro and Søgaard 2022). Their aim is to measure the consistency of the model in making the same prediction for different paraphrases of the same template. Results on both benchmarks showed that the multilingual mBERT and XLM-R models are less consistent than the monolingual English BERT model, especially when these multilingual models are prompted with non-English inputs. From a model diagnostics perspective, Cao et al. (2021) found that English PLMs might be biased to making specific predictions based on a predicate's template irrespective of the subjects used to populate this template. Thereafter, Elazar et al. (2023) designed a causal framework for modeling multiple co-occurrence statistics that might cause English PLMs to achieve high scores on some of LAMA's predicates. We focus on why a non-English PLM might fail to recall facts and hypothesize the following possible reasons: 1. The quality of the template might degrade after automatically translating it from English. 2. Non-English or multilingual PLM are generally pretrained on a lesser amount of nonEnglish data and thus might be less capable of recalling facts efficiently. 3. Translating the underlying facts of a benchmark, initially designed to probe English PLMs, might cause a representational bias. While the first two factors are studied in the literature, we believe that the third factor is a major quality issue that previous work has overlooked. Randomly sampling the triples from T-REx might introduce a representation bias toward Western cultures, since only facts aligned to English Wikipedia abstracts are considered. We investigate the presence of such bias (§3). Moreover, we empirically demonstrate how better model diagnostics can be performed when the benchmark is formed using two diverse and contrasting sets of facts (§5). ## 3 Cultural Bias In Mlama Probing PLMs using prompts is an analysis tool attempting to understand how they behave. A biased probing benchmark might be deceiving, as both a good-performing model and a model sharing the same bias found in the benchmark would achieve good performance. In this section, we investigate if the facts within mLAMA might be biased toward Western cultures, which can affect the reliability of the performance scores achieved by PLMs when probed using mLAMA. ## 3.1 Quantifying The Cultural Bias As a proxy for measuring the skewness of the triples of T-REx, LAMA, and X-FACTR toward Western cultures, 26 relation predicates are selected that have a person's name or a place as their subject or object. Moreover, 21 Western countries are identified as representative of Western cultures from Western European and South Western European countries2: Andorra, Austria, Belgium, France, Germany, Ireland, Italy, Liechtenstein, Luxembourg, Monaco, Netherlands, Portugal, San Marino, Spain, Switzerland, the United Kingdom, in addition to Canada, the United States of America, Australia, and New Zealand. 
For each relation predicate out of the 26, triples with a subject or object that either has a country of citizenship or is located in one of the 21 Western countries are counted. 63.6% of the triples within the LAMA benchmark are related to these Western countries, compared to 62.7% for X-FACTR and 57.1% for T-REx (from which LAMA and X-FACTR are sampled). This highlights the issue that aligning Wikidata triples to English Wikipedia abstracts in T-REx skews them toward Western countries, impacting both LAMA and X-FACTR.

## 3.2 Qualitative Analysis Of The Bias And Its Impact

Kassner et al. (2021) used mLAMA to probe mBERT using prompts in 41 languages. We find that all the languages in which prompts achieve the highest performance (English, Indonesian, Malay, Afrikaans, Galician, Vietnamese, Danish, Spanish, Catalan, Cebuano, and Romanian) use the Latin script, while the ones with the least performance (Russian, Azerbaijani, Hebrew, Arabic, Korean, Armenian, Georgian, Tamil, Thai, and Japanese) use other scripts. This might be attributed to the model's ability to share cross-lingual representations for common named entities for languages using the Latin script, which allows for cross-lingual knowledge sharing. Moreover, it is known that more than 78% of mBERT's vocabulary consists of Latin subwords (http://juditacs.github.io/2019/02/19/bert-tokenization-stats.html).

However, there are still some relation predicates for which a non-Latin-scripted language outperforms a Latin-scripted one. The P140 (religion or worldview) predicate is a clear example of these predicates (Wikidata predicates' identifiers have the format P[0-9]+). An example triple for the P140 predicate is: **(Edward I of England, religion or worldview [P140], Christianity)**. mBERT has higher performance for Arabic (23.1%), Azerbaijani (8.1%), Korean (30.1%), Georgian (35.1%), Thai (13.4%), Tamil (4.0%), Russian (54.6%), and Japanese (30.0%) than for English (1.5%).

Looking at the objects for the English triples within mLAMA, we find that 53.7% of the triples have Islam as their object. While the objects for the P140 predicate should be religions, we find that only seven triples have incorrect inflected forms of *Muslim*, *Christian*, and *Hindu* instead of *Islam*, *Christianity*, and *Hinduism*. Further investigation reveals that the English template used to transform the triples into prompts is (*[X] is affiliated with the [Y] religion .*), which would suit retrieving these infrequent inflected labels rather than the frequent labels. Therefore, most predictions for the English prompts are considered incorrect, justifying the low performance achieved for English. To avoid penalizing these predictions, we mapped the model's predictions and the objects' labels such that, for instance, *Christian* and *Christianity* are both considered to represent the same prediction *Christianity*, and similarly for *Hinduism* and *Islam*.

![3_image_0.png](3_image_0.png)

Figure 1 shows the distribution of mBERT's predictions for the P140 triples for prompts in 20 different languages after unifying the labels. We observe that: (1) For some languages, the predictions are skewed toward a specific wrong label that is culturally related to these languages. For example, the mode of the predictions of prompts in Armenian, Thai, Korean, and Tamil is Christianity, Buddhism, Buddhism, and Hinduism respectively. (2) Arabic and Russian prompts tend to yield high performance.
The same holds for Indonesian and Malay which achieve similar performance with less skewness in the predictions. Since the label distribution for this predicate within mLAMA is skewed toward a specific label *Islam*, one can not confidently conclude whether the model is choosing the right answer for having some knowledge of the facts or for making a biased guess that luckily coincides with the right label. While these findings signify the possibility that mLAMA is biased for the P140 predicate, it on the other hand might hint that mLAMA is also biased toward Western cultures for most of the remaining predicates. For instance, the P103 (Native Language) predicate in mLAMA has *French* as the correct label for 60.14% of the triples. ## 4 Building Dlama Our methodology aims at building a culturally diverse benchmark, which would allow for a fairer estimation of a model's capability of memorizing facts. Within DLAMA, query parameters form underlying SPARQL queries that are used to retrieve Wikidata triples as demonstrated in Figure 2. To operationalize the concept of cultures, we use countries as a proxy for the cultures of interest. For instance, countries that are members of the Arab League are considered representatives of Arab cultures. Conversely, Western countries mentioned in §3.1 represent Western cultures. Furthermore, China, Indonesia, Japan, Malaysia, Mongolia, Myanmar, North Korea, Philippines, Singapore, South Korea, Taiwan, Thailand, and Vietnam are 13 countries from East Asia, and Southeast Asia8 representing Asian cultures, while Argentina, Bolivia, Brazil, Chile, Colombia, Ecuador, Guyana, Paraguay, Peru, Suriname, Uruguay, Venezuela represent South American cultures. For predicates in which the subject is a person, we add a filter to the SPARQL query which limits the country of citizenship of the person to a specific set of countries (i.e., a specific culture). For predicates in which the subject is a place, we limit the values of the places to those located in a country within the predefined set of countries related to the target culture. We implemented a Python interface to simplify the process of querying Wikidata triples. Currently, 20 relation predicates are supported. The userfriendly interface allows the addition of new relation predicates and filters, which we hope would encourage contributions to DLAMA. 8Based on the UN stats classification: https://unstats. un.org/unsd/methodology/m49/ ![4_image_0.png](4_image_0.png) ## 4.1 Methodology Of Querying Triples For A Specific Predicate Step \#1 - Getting an exhaustive list of triples for a Wikidata predicate: A set of parameters need to be specified through the Python interface to generate an underlying SPARQL query. These parameters are (1) an entity label for the subject, and an entity label for the object9, (2) a set of countries representing specific cultures, (3) a Wikidata predicate relating the object to the subject, (4) a list of Wikipedia sites that are expected to contain facts related to each specified country, and (5) a list of languages for which the parallel labels of the subjects and the objects are acquired and later used to populate the multilingual probing templates. In addition to querying the Wikidata Unique Reference Identifiers (URIs) of the subjects and the objects, the Unique Reference Links (URLs) of the Wikipedia articles linked to the subjects are queried as optional fields. 
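To make Step #1 concrete, the sketch below shows the kind of SPARQL query those parameters translate into, issued against the public Wikidata endpoint. The choices here (predicate P103 as the relation, citizenship P27 restricted to Egypt/Q79 as the culture filter, English and Arabic labels, and the Arabic Wikipedia as the linked site) are illustrative assumptions, and for brevity the labels are fetched in the same query even though the framework retrieves them in a later step.

```python
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

# Illustrative query: native language (P103) of people who are citizens (P27)
# of Egypt (Q79), with English/Arabic labels and an optional Arabic Wikipedia article.
QUERY = """
SELECT ?person ?name_en ?name_ar ?lang ?lang_en ?lang_ar ?article WHERE {
  ?person wdt:P27 wd:Q79 ;
          wdt:P103 ?lang .
  ?person rdfs:label ?name_en . FILTER(LANG(?name_en) = "en")
  ?person rdfs:label ?name_ar . FILTER(LANG(?name_ar) = "ar")
  ?lang   rdfs:label ?lang_en . FILTER(LANG(?lang_en) = "en")
  ?lang   rdfs:label ?lang_ar . FILTER(LANG(?lang_ar) = "ar")
  OPTIONAL {
    ?article schema:about ?person ;
             schema:isPartOf <https://ar.wikipedia.org/> .
  }
}
LIMIT 100
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "dlama-sketch/0.1 (illustrative example)"},
    timeout=60,
)
for row in response.json()["results"]["bindings"]:
    print(row["name_en"]["value"], "->", row["lang_en"]["value"])
```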
Step \#2 - Sorting the list of retrieved triples by their validity: Facts on Wikidata are crowdsourced, and contributors are encouraged to add references to the facts they modify. However, lots of the facts on Wikidata still have missing references. Therefore, we use the length of the Wikipedia article corresponding to the triple's subject as a proxy for the validity of the triple. The fact that contributors and editors spent time writing a long Wikipedia article implies that a group of people finds the article important. Therefore they will be keen on making sure the information there is factually sound (Bruckman, 2022). We believe that using the size of the article rather than other metrics such as the number of visits to the Wikipedia article, allows facts related to underrepresented groups on Wikipedia to still be ranked high, thus making the top-ranked facts more diverse and inclusive. We sort the retrieved triples by the size (in bytes) of the Wikipedia article linked to their subjects. In case a subject has articles on multiple Wikipedia sites, the size of the largest article is used. DLAMA also allows sorting the triples by the total number of edits (revisions) of their subjects' respective articles. Step \#3 - Querying all possible objects for each subject: Since a subject might be linked to multiple objects for the same relation predicate, another query is executed in order to ensure that all these objects are retrieved. For instance, a person might be a citizen of an Arab country in addition to another non-Arab country. This step ensures that the non-Arab country is still considered as a valid country of citizenship for the person, even if the initial query restricted the countries to Arab ones only. While previous benchmarks limited the object for each triple to a single value, we believe it is fairer to allow multiple valid labels instead of randomly picking one label out of the valid ones. Step \#4 - Querying the labels for the triples: Till this stage, the subjects and objects are represented by their Wikidata URIs. The Wikidata labels of all the subjects and objects need to be fetched for the languages of interest. Relation triples having missing subject or object labels in any of the languages specified are discarded in order to ensure that the triples are the same for all the languages. Step \#5 (optional) - Handling overlapping objects: The degree of granularity of the objects for Wikidata's relation predicates differs even among triples of the same predicate (e.g.: The official language of Australia is set to English while that of The United States of America is set to American English which is a subclass of English). To avoid penalizing models for picking an object that is a superclass of the correct object, a graph is built, modeling the hierarchical relations between all the objects of the sampled triples of a relation predicate. The graph is later used to augment the valid objects with their superclasses as detailed in §B of the Appendix. ## 4.2 The Dlama-V1 Benchmark We used the above method to build three sets of facts as part of DLAMA-v1 to assess the performance of PLMs on recalling facts related to 21 Western countries as compared to the 22 Arab, 13 Asian countries, and 12 South American countries10. The sets provide examples of how the framework can be used to compile facts from pairs of contrasting cultures. We hope the community will use the framework to introduce new pairs representing other countries and cultures. 
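A rough sketch of the superclass handling in Step #5 is shown below; it assumes the subclass-of (P279) edges between a predicate's objects have already been queried into a list of (child, parent) URI pairs, and the helper names are ours rather than the framework's.

```python
from collections import defaultdict

def build_superclass_graph(subclass_edges):
    """Map each object URI to its direct superclasses (P279 edges given as (child, parent) pairs)."""
    parents = defaultdict(set)
    for child, parent in subclass_edges:
        parents[child].add(parent)
    return parents

def ancestors(obj, parents):
    """All superclasses reachable from `obj` by following P279 edges upward."""
    seen, stack = set(), [obj]
    while stack:
        node = stack.pop()
        for parent in parents.get(node, ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def augment_valid_objects(valid_objects, parents):
    """Add every ancestor of each valid object, so e.g. 'American English' also admits 'English'."""
    augmented = set(valid_objects)
    for obj in valid_objects:
        augmented |= ancestors(obj, parents)
    return augmented
```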
A maximum of 1000 triples from each predicate out of the 20 supported ones are independently queried for each set of countries within each pair. This ensures that the queried triples are balanced across the two sets of countries within the pair. In total, the (Arab-West) pair comprises 24535 triples with labels in Arabic and English, as compared to 27076 triples with labels in Korean, and English for the (Asia-West) pair, and 26657 triples with labels in Spanish, and English for the (South America-West) pair. Figure 3 shows an example of a triple of DLAMA-v1's (Arab-West) set. The underlying triples belonging to the Western cultures in the 3 sets are not identical. Triples in a set are discarded if their subjects or objects do not have labels in the languages. Regarding the languages of the labels, Arabic and Korean are chosen as they are two of the leastperforming languages on mLAMA. It is expected that facts related to Arab and East Asian/South East Asian countries are relevant to Arabic and Korean PLMs respectively, and would be contrasting to Western facts. Additionally, both languages have non-Latin scripts, use white spaces to sepa10Refer to §3.1 and §4 for the list of countries. - **Prompt**: Egypt is located in ... - **Subject**: {Egypt} - **Set of correct objects**: {Africa, Asia} - **Set of objects of the predicate to be ranked**: {Africa, Asia, Europe, Insular Oceania, North America} Figure 3: An example of a prompt created using a relation triple of DLAMA from the P30 (continent) relation predicate for the Arab-Western pair. rate tokens, and have an inventory of monolingual PLMs. On the other hand, the (South AmericaWest) pair is a trickier case since most South American countries use Spanish as their official language. One can argue that sharing the same language with Spain introduces commonalities between the SouthAmerican countries and the Western ones. Overlap between DLAMA-v1 and T-REx: For the three culture sets, we measured the percentage of triples found in T-REx. 17.92% of Arab-related facts are in T-REx compared to 39.85% of Westernrelated ones in the (Arab-Western) pair. Moreover, 22.64% of Asian-related facts are found in T-REx compared to 44.43% of Western-related ones in the (Asia-Western) pair. Lastly, the overlap percentages for the (South America-West) pair are 17.68% and 32.22% respectively. These values demonstrate that T-REx has less coverage of the Arab, Asian, and South American factual triples than its coverage of Western triples. Moreover, the fact that T-REx is tuned for higher precision means that its recall is affected and a lot of the Western facts expected to be found in English Wikipedia abstracts are discarded. Conversely, DLAMA-v1 is a less skewed benchmark across different cultures. ## 5 Probing Plms Via Dlama-V1 5.1 Experimental Setup We follow mLAMA's probing setup to evaluate the PLMs' factual knowledge. For each relation predicate [PREDICATE], the set {OBJECTS} of unique objects of the triples is first compiled. Then, for each relation triple within the [PREDICATE], the PLM is asked to assign a score for each object within {OBJECTS} by computing the probability of having this object replacing the masked tokens. This setup asks the model to choose the correct answer out of a set of possible choices, instead of decoding the answer as a generation task. The templates used in DLAMA to convert triples into natural language prompts are adapted from mLAMA and listed in Table F9 of the Appendix. 
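The sketch below illustrates this ranking setup: each candidate object is scored by the mean log-probability of its subtokens in the masked slot (i.e., the log of the geometric mean used by mLAMA). It is a simplified stand-in for the actual evaluation code; the `[X]`/`[Y]` template placeholders, the helper names, and the lack of batching are assumptions made for brevity.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL = "bert-base-multilingual-cased"
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL).eval()

@torch.no_grad()
def score_object(template, subject, obj):
    """Mean log-probability of the object's subtokens when they fill the masked slot."""
    obj_ids = tok(obj, add_special_tokens=False)["input_ids"]
    masks = " ".join([tok.mask_token] * len(obj_ids))
    prompt = template.replace("[X]", subject).replace("[Y]", masks)
    enc = tok(prompt, return_tensors="pt")
    log_probs = model(**enc).logits[0].log_softmax(dim=-1)
    mask_positions = (enc["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    return sum(log_probs[pos, tid].item() for pos, tid in zip(mask_positions, obj_ids)) / len(obj_ids)

def rank_objects(template, subject, candidate_objects):
    """Rank the predicate's unique objects for one triple; P@1 checks the top one against the gold set."""
    return sorted(candidate_objects, key=lambda o: score_object(template, subject, o), reverse=True)

# e.g. the Figure 3 example: rank continents for "Egypt is located in [Y] ."
# rank_objects("[X] is located in [Y] .", "Egypt", ["Africa", "Asia", "Europe", "North America"])
```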
Table 1: P@1 scores of the probed models on the three pairs of DLAMA-v1, compared to their scores on mLAMA.

(a) DLAMA-v1 (Arab-West)

| Prompt Lang. | Model name | Arab (N=10946) | West (N=13589) | DLAMA (N=24535) | mLAMA (N=17128) |
|--------------|------------|----------------|----------------|-----------------|-----------------|
| Arabic       | mBERT-base | 13.7           | 15.1*          | 14.5            | 15.2†           |
| Arabic       | arBERT     | 33.6*          | 23.0           | 27.7†           | 24.4            |
| English      | mBERT-base | 21.2           | 37.7*          | 30.3            | 33.9†           |
| English      | BERT-base  | 27.5           | 31.3*          | 29.6            | 37.9†           |

(b) DLAMA-v1 (Asia-West)

| Prompt Lang. | Model name | Asia (N=13479) | West (N=13588) | DLAMA (N=27067) | mLAMA (N=14217) |
|--------------|------------|----------------|----------------|-----------------|-----------------|
| Korean       | mBERT-base | 16.4           | 28.5*          | 22.5†           | 15.7            |
| Korean       | KyKim      | 22.1*          | 19.5           | 20.8†           | 13.4            |
| English      | mBERT-base | 33.0           | 39.9*          | 36.4†           | 35.1            |
| English      | BERT-base  | 38.3*          | 31.9           | 35.1            | 39.0†           |

(c) DLAMA-v1 (South America-West)

| Prompt Lang. | Model name | S. America (N=13071) | West (N=13586) | DLAMA (N=26657) | mLAMA (N=28168) |
|--------------|------------|----------------------|----------------|-----------------|-----------------|
| Spanish      | mBERT-base | 25.4                 | 33.8*          | 29.7            | 30.5†           |
| Spanish      | BETO       | 16.0                 | 26.5*          | 21.4            | 22.7†           |
| English      | mBERT-base | 27.0                 | 37.6*          | 32.4            | 33.9†           |
| English      | BERT-base  | 26.9                 | 31.3*          | 29.2            | 37.1†           |

Models: We evaluated the cased multilingual BERT-base and the cased English BERT-base using all the sets of facts of DLAMA-v1. Moreover, a monolingual Arabic BERT-base model **arBERT** (Abdul-Mageed et al., 2021), a monolingual Korean BERT-base model **KyKim BERT-base** (Kim, 2020), and a monolingual cased Spanish BERT-base model **BETO** (Cañete et al., 2020) are evaluated using the (Arab-West), the (Asia-West), and the (South America-West) pairs respectively. We focus on BERT models to compare our results to those previously reported on mLAMA.

## 5.2 Aggregated Results

Precision at the first rank (P@1) is the metric used to evaluate the performance of the models. P@1 is the percentage of triples for which the first prediction of the model matches one of the objects for this triple. In order to quantify the diversity of the objects of a relation predicate for each culture, an entropy score is computed. For each triple of a relation predicate, only the most frequent object among the list of valid objects is considered. The entropy score is computed as $\mathrm{Entropy}(\{objs\}) = \sum_{o \in \{objs\}} -p_o \log(p_o)$, where $p_o$ is the probability of object $o$ across the set of objects $\{objs\}$. The higher the entropy of the objects is, the more diverse the objects are, and thus the harder it would be for a model to randomly achieve high P@1 scores on the predicate.

Looking at the performance of models on DLAMA indicated in Table 1, (1) we find that the facts' relevance to the probed model's language affects the results. For instance, arBERT and KyKim perform better on non-Western facts than on Western ones. Conversely, the English BERT-base model performs better on Western facts for the (Arab-West) pair. The same observation tends to hold for individual predicates as shown in Table 2. (2) Moreover, arBERT and KyKim achieve lower performance on mLAMA than on DLAMA-v1, while the English BERT-base and BETO models achieve higher P@1 scores on mLAMA than on DLAMA-v1. This is expected given the bias mLAMA has toward facts from Western cultures.
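The entropy score defined above can be computed with the minimal sketch below (our own implementation of the formula rather than the benchmark's code); the toy inputs are purely illustrative.

```python
import math
from collections import Counter

def objects_entropy(most_frequent_objects):
    """Entropy of the object distribution of one relation predicate for one culture.

    `most_frequent_objects` holds, for each triple, the most frequent object
    among that triple's list of valid objects, as described in Section 5.2.
    """
    counts = Counter(most_frequent_objects)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# A predicate whose triples all share one object has entropy 0.0, so a
# majority-class guess would already yield a high P@1.
print(objects_entropy(["Islam"] * 10))                  # 0.0
print(objects_entropy(["Africa"] * 5 + ["Asia"] * 5))   # ~0.69 (ln 2)
```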
## 5.3 Revisiting The Language Bias Of PLMs

Kassner et al. (2021) showed that for prompts in English, German, Dutch, and Italian, mBERT is biased toward predicting the language or the country name related to the language of the prompts (e.g., filling the masked object with *Italy* if the prompt's language is Italian). This phenomenon is not a bias if most of the triples in the underlying subset of mLAMA for a language are also biased toward the same label. For DLAMA, looking at the P@1 scores in Table 2, in addition to checking the most common predictions of arBERT and the cased BERT-base models in Table 3, provides a better diagnostic tool for analyzing the models' behavior (a similar analysis for the other two sets of contrasting cultures can be found in §E of the Appendix).

For the P364 predicate, the models perform better on their culturally proximate triples. This can be attributed to the language bias phenomenon, which is indicated by arBERT predicting *Arabic* for 30.8% of Western facts, while BERT-base predicts *English* for 44.6% of Arab facts. On the other hand, both models achieve high P@1 scores for P17 and P103. Even when the models make wrong predictions for triples of these predicates, the predictions can be considered to be educated guesses, as they are still relevant to the culture to which the triples belong. Lastly, the models perform poorly on P495 for being biased toward specific objects irrespective of the culture of the triples (*Japan* for BERT-base, *Germany* and *France* for arBERT). These three patterns can be noticed thanks to having a contrastive set of facts representing two different cultures.

Table 2: Number of facts (with the entropy of their objects) and P@1 scores per relation predicate for DLAMA-v1 (Arab-West).

| Relation | # facts (entropy) Arab | # facts (entropy) West | Arabic prompts P@1 Arab | Arabic prompts P@1 West | English prompts P@1 Arab | English prompts P@1 West |
|---|---|---|---|---|---|---|
| P17 (Country) | 1000 (3.9) | 1000 (2.8) | **49.9** | 47.4 | **52.2** | 45.6 |
| P19 (Place of birth) | 1000 (3.9) | 1000 (2.6) | **33.7** | 22.3 | **10.1** | 8.8 |
| P20 (Place of death) | 1000 (3.8) | 1000 (2.7) | 21.3 | **22.7** | 14.2 | **17.2** |
| P27 (Country of citizenship) | 1000 (3.8) | 1000 (2.4) | **38.1** | 27.9 | 4.1 | **17.5** |
| P30 (Continent) | 22 (1.0) | 19 (1.0) | **45.5** | 26.3 | **86.4** | 84.2 |
| P36 (Capital) | 22 (4.5) | 19 (4.2) | **95.5** | 78.9 | 36.4 | **84.2** |
| P37 (Official language) | 22 (0.0) | 19 (2.5) | **90.9** | 84.2 | 95.5 | **100.0** |
| P47 (Shares border with) | 22 (2.5) | 19 (2.7) | **27.3** | 15.8 | 68.2 | **78.9** |
| P103 (Native language) | 1000 (1.0) | 1000 (1.7) | 61.8 | **72.8** | 67.7 | **74.4** |
| P106 (Occupation) | 1000 (2.3) | 1000 (2.0) | 3.7 | 3.3 | 4.8 | **14.3** |
| P136 (Genre) | 452 (2.7) | 1000 (2.6) | 6.6 | **24.3** | 4.0 | 7.6 |
| P190 (Sister city) | 67 (4.9) | 468 (7.3) | 0.0 | **2.6** | **6.0** | 2.8 |
| P264 (Record label) | 166 (3.0) | 1000 (5.2) | 0.0 | 0.3 | 4.2 | 7.5 |
| P364 (Original language of work) | 1000 (0.6) | 1000 (0.4) | **61.2** | 48.5 | 36.1 | **88.9** |
| P449 (Original network) | 127 (4.5) | 1000 (5.3) | 0.8 | 0.4 | 0.0 | **10.8** |
| P495 (Country of origin) | 1000 (3.1) | 1000 (1.3) | **18.6** | 8.7 | **14.7** | 5.5 |
| P530 (Diplomatic relation) | 22 (0.0) | 19 (0.0) | 22.7 | **42.1** | 31.8 | **68.4** |
| P1303 (Instrument) | 1000 (0.9) | 1000 (1.1) | 0.3 | 0.2 | 1.9 | **27.7** |
| P1376 (Capital of) | 24 (4.3) | 26 (4.0) | **91.7** | 84.6 | **79.2** | 76.9 |
| P1412 (Languages spoken or published) | 1000 (0.8) | 1000 (1.5) | **67.4** | 26.1 | 83.4 | **88.7** |
| Aggregated statistics | 10946 (2.6) | 13589 (2.7) | **33.6** | 23.0 | 27.5 | **31.3** |

## 5.4 Pilot Evaluation For A Large Language Model

Given the success of large language models (LLMs) (Brown et al., 2020; Scao et al., 2022), we evaluated the performance of the GPT3.5-turbo model on tuples from the P30, P36, P37, P47, P103, P530, and P1376 predicates of DLAMA-v1 (Arab-West).
To probe the model, the Arabic and English templates for these predicates were mapped into questions listed in Table F10. While the model is instructed to only respond with an entity, it sometimes provides a full sentence. Consequently, we consider the model's response to a question to be correct if one of the valid objects of the tuple used to populate the question is a substring of the response. GPT3.5's probing setup is harder than BERT's setup in which an answer is chosen from a set of unique objects for the predicate. Nevertheless, GPT3.5 achieves superior performance compared to the monolingual BERT models as per Table D3. However, GPT3.5 seems to be hallucinating for a lot of the tuples within the P190 (Sister City) predicate (e.g.: The twin city of Nice is Naples.). Such issues might be unnoticed unless benchmarks like DLAMA are used to systematically evaluate the LLMs. ## 6 Conclusion Previous work suggested that English prompts are more capable of recalling facts from multilingual pretrained language models. We show that the facts within the underlying probing benchmark (mLAMA) are skewed toward Western countries, which makes them more relevant to English. Hence, we propose a new framework (DLAMA) that permits the curation of culturally diverse facts directly from Wikidata. Three new sets of facts are released as part of the DLAMA-v1 benchmark containing factual triples representing 20 relation predicates comprising facts from (Arab-Western), (Asian-Western), and (South American-Western) countries, with a more balanced representation between the countries within each pair. The results of probing PLMs on the DLAMA-v1 support that mBERT has a better performance recalling Western facts than non-Western ones irrespective of the prompt's language. Monolingual Arabic and Korean models on the other hand perform better on culturally proximate facts. We believe the probing results are more trustable and fairer when the underlying benchmark is less skewed toward specific countries, languages, or cultures. Moreover, we find that even when the model's prediction does not match any of the correct labels, the model might be making an educated guess relevant to the culture of the underlying facts. This finding augments previous experiments which showed that models tend to have a language bias, by which a model tends to overgenerate a specific prediction for each prompting language irrespective of the triple's subject used to fill in the prompt. Finally, our framework is opensourced for the community to contribute new pairs to the DLAMA benchmark in the future. ![8_image_0.png](8_image_0.png) ## Limitations We acknowledge that the methodology used to build DLAMA-v1 still has limitations related to the information within its relation triples. While directly querying Wikidata as a dynamic source of facts provides the flexibility needed to acquire data that is relevant to different cultures (as opposed to using the static T-REx dump of triples), the diversity of the triples that are compiled depends on the availability of a diverse set of facts on Wikidata in the first place. For instance, the smaller number of relation triples related to Arab countries for the predicates (P136 - Genre), (P190 - Sister city), and (P449 - Original network) in DLAMA-v1 (ArabWest) demonstrates the difficulty of querying the exact number of facts for both cultures despite using exactly the same queries with the only difference being limiting the region to which the triples belong. 
Another limitation is the inability to enumerate valid and fine-grained subclasses of objects for specific subjects, if these fine-grained objects are not on Wikidata. Steps \#3 and \#5 of DLAMA explained in §4.1 ensure that a possible and more general object is still valid for a specific subject. However, inferring a more specified object from a generic one is impossible. For example, the fact that someone speaks "American English" implies that they speak English as well, but knowing that someone speaks "English" is not enough to speculate about their dialect (i.e.: "American English", "British English", etc.). While the triples within DLAMA are sampled by picking the ones whose subjects have the largest Wikipedia articles' sizes, the infeasibility of manually reviewing the large number of diverse facts within DLAMA-v1 makes it hard to claim that the facts are free of inaccuracies or missing information. More broadly, DLAMA supports relations predicates that are already part of mLAMA to fairly compare the results on DLAMA to those previously reported on mLAMA. Moreover, we make sure that the subjects and the objects of the relation triples are available in the different languages of interest. Having these constraints might imply that some culturally relevant facts might have been dropped out of DLAMA-v1 (e.g., Predicates that are not part of mLAMA, or triples having missing labels in one of the languages of interest). Lastly, we used mLAMA's probing setup in which the models rank a predefined set of objects for each prompt. Their prediction is correct if the top-ranked object is one of the valid labels for the corresponding relation triple used to populate the prompt. Therefore, a model's performance is expected to be higher than that achieved by a generative setup in which the model is asked to generate the most probable completions for the masked tokens. ## Ethics Statement We believe that using a set of countries to represent cultures is just a proxy for acquiring a more diverse set of facts that are less skewed toward a specific culture. More specifically, using the terms Arab cultures, Western cultures, and Asian cultures simplifies the differences between the cultures within the countries that we have used to represent these macro-cultures. On the other hand, we still think that the differences between Asian cultures are less subtle than between them and Western cultures. We also acknowledge that the accuracy and validity of some relation triples queried from Wikidata might be biased by the views of the people who added such information to Wikidata. This might be particularly vibrant for relation triples related to zones with political/ sectarian wars and conflicts. ## Acknowledgments This work was supported in part by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh, School of Informatics. Amr is grateful to Matthias Lindemann for recommending Wikidata, Aida Tarighat for the early discussions about the benchmark, Laurie Burchell, Bálint Gyevnár, and Shangmin Guo for reviewing the manual prompts, Coleman Haley for the multiple discussions about the figures, Anna Kapron-King and Gautier Dagan for proofreading the abstract, and lastly, Dilara Keküllüoglu ˘ and Björn Ross for their valuable reviews of the paper's final draft. 
## References Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901. Amy S Bruckman. 2022. Should You Believe Wikipedia?: Online Communities and the Construction of Knowledge. Cambridge University Press. Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun, Lingyong Yan, Meng Liao, Tong Xue, and Jin Xu. 2021. Knowledgeable or educated guess? revisiting language models as knowledge bases. In *Proceedings* of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1860–1874, Online. Association for Computational Linguistics. José Cañete, Gabriel Chaperon, Rodrigo Fuentes, JouHui Ho, Hojin Kang, and Jorge Pérez. 2020. Spanish pre-trained bert model and evaluation data. In PML4DC at ICLR 2020. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. *arxiv:2204.02311*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Yanai Elazar, Nora Kassner, Shauli Ravfogel, Amir Feder, Abhilasha Ravichander, Marius Mosbach, Yonatan Belinkov, Hinrich Schütze, and Yoav Goldberg. 2023. Measuring causal effects of data statistics on language model's 'factual' predictions. Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. 2021. Measuring and improving consistency in pretrained language models. *Transactions of the Association for Computational Linguistics*, 9:1012–1031. Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Frederique Laforest, and Elena Simperl. 2018. T-REx: A large scale alignment of natural language with knowledge base triples. In *Proceedings of the Eleventh International* Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Constanza Fierro and Anders Søgaard. 2022. Factual consistency of multilingual pretrained language models. 
In *Findings of the Association for Computational* Linguistics: ACL 2022, pages 3046–3052, Dublin, Ireland. Association for Computational Linguistics. Yoav Goldberg. 2019. Assessing bert's syntactic abilities. *ArXiv*, abs/1901.05287. Ganesh Jawahar, Benoît Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 3651–3657, Florence, Italy. Association for Computational Linguistics. Zhengbao Jiang, Antonios Anastasopoulos, Jun Araki, Haibo Ding, and Graham Neubig. 2020. X-FACTR: Multilingual factual knowledge retrieval from pretrained language models. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5943–5959, Online. Association for Computational Linguistics. Nora Kassner, Philipp Dufter, and Hinrich Schütze. 2021. Multilingual LAMA: Investigating knowledge in multilingual pretrained language models. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3250–3258, Online. Association for Computational Linguistics. Kiyoung Kim. 2020. Pretrained language models for korean. https://github.com/kiyoungkim1/LMkor. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP), pages 2463–2473, Hong Kong, China. Association for Computational Linguistics. Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman ´ Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. Bloom: A 176bparameter open-access multilingual language model. arXiv preprint arXiv:2211.05100. ## A **Detailed Bias Values Within The Factual** Knowledge Benchmarks Table A1 provides the fine-grained percentages for the distribution of the triples of T-REx, LAMA, and X-FACTR for 21 Western countries as compared to the rest of the world. For most of the relation predicates, triples related to one of the 21 Western countries represent more than 50% of the total triples. We find that this skewness is even larger for LAMA, and X-FACTR than for T-REx. Triples within LAMA are restricted to the ones whose objects are tokenized into a single subword by monolingual language models. This filtering might be responsible for the increased skewness of LAMA toward facts from Western countries. ## B Augmenting The Correct Objects Within Dlama For each relation predicate, a graph is used to model all the subclass-superclass relations between the objects of the queried triples. The edges within the graph are built using Wikidata's **P279 (subclass** of) predicate. All the possible subclass/superclass relations between the list of objects for each relation predicate are queried and then used to form the edges of the graph. Afterward, the list of objects for each subject is augmented by the list of all the possible ancestors (superclasses) of these objects (e.g., The official languages of The United States of America are now set to American English and English instead of just American English). Similarly, we noticed that the level of specificity of places of birth (objects of P19) and places of death (objects of P20) varies between different tuples. 
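The superclass-based augmentation described above can be illustrated with a short sketch. This is not the authors' released implementation; it assumes the P279 (subclass of) pairs and the per-subject object sets have already been queried from Wikidata, and all names and values are illustrative.

```python
from collections import defaultdict, deque

def build_superclass_graph(p279_pairs):
    """Map each class to its direct superclasses, given (child, parent) P279 pairs."""
    parents = defaultdict(set)
    for child, parent in p279_pairs:
        parents[child].add(parent)
    return parents

def ancestors(obj, parents):
    """Collect all transitive superclasses of `obj` via breadth-first search."""
    seen, queue = set(), deque([obj])
    while queue:
        node = queue.popleft()
        for parent in parents.get(node, ()):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

def augment_objects(valid_objects, parents):
    """Add every ancestor of every valid object to the set of valid objects."""
    augmented = set(valid_objects)
    for obj in valid_objects:
        augmented |= ancestors(obj, parents)
    return augmented

# Example mirroring the official-language case mentioned above (illustrative P279 edge).
pairs = [("American English", "English")]
graph = build_superclass_graph(pairs)
print(augment_objects({"American English"}, graph))  # prints the augmented object set
```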
Thus, we queried all the territorial entities in which the places of birth and death are located. For instance, Paris Hilton had the place of birth set to {New York City} while Donald Trump had the place of birth set to {Jamaica Hospital Medical Center}. After querying the higher administrative-territorial entities, the set of valid objects for both entities became {New York City, New York, United States of America} and {Jamaica Hospital Medical Center, Queens, New York City, New York, United States of America} respectively. ## C Results On Raw Triples Before The Last Optional Step To demonstrate the impact of the last optional step within DLAMA, we evaluate the PLMs on the triples before augmenting their objects with valid overlapping ones (i.e.: before applying the optional Step \#5 of the framework). It is clear that the performance of the models shown in Table C2 is worse than their performance on the augmented benchmark previously listed in Table 1. ## D Gpt3.5 Performance On A Subset Of Dlama (Arab-West) As mentioned in §5.4, we used OpenAI's API to evaluate the performance of the GPT3.5-turbo model on six predicates of DLAMA-v1 (ArabWest). The accuracy scores of the model for these predicates are reported in Table D3. We plan to extend our evaluation to cover more predicates and include other LLMs. ## E **Model Diagnostics Using The (Asia-West)** And (South America-West) Sets Contrasting KyKim BERT to English BERTbase: We replicate the analysis process done in §5.3 to investigate the behavior of KyKim BERTbase and the English BERT-base models using Tables E4 and E6. We find that the English BERTbase has the same patterns detailed before for P17, P103, P364, and P495. Moreover, since English BERT-base overgenerates *Japan* for the P495 predicate, its performance on the Asian part of DLAMAv1 (Asia-West) is high. This once again shows the importance of having two contrasting sets of facts from the same predicates. Despite the fact that the majority of triples of P495 within the Asian part of DLAMA-v1 (Asia-West) has *Japan* as one of the correct labels, a biased model toward predicting Japan has a significantly low performance on the opposing set of facts. Consequently, the bias can still be detected. Regarding the KyKim BERT-base model, language bias toward overpredicting *Korean* is clear for the P103 and the P364 relation predications. The model also shows a bias toward the *Javanese* label for P1412. This bias can be seen in the model's poor performance on the Western part of the benchmark. P19 is a relation predicate on which the model is generally performing well. The most frequent predictions indicate that the model leans toward selecting *Japan* and *United States of America*. However, the model's predictions change according to the underlying culture of the triples and hence demonstrate an ability to memorize facts from both cultures. 
| Benchmark (totals) | Western countries | Rest of the world |
|---|---|---|
| T-REx | 1,555,601 (57.1%) | 1,168,235 (42.9%) |
| LAMA | 13,597 (63.6%) | 7,794 (36.4%) |
| X-FACTR | 16,314 (62.7%) | 9,686 (37.3%) |

Table A1: The number and percentage of triples belonging to one of the 21 Western countries or to other countries in the T-REx, LAMA, and X-FACTR benchmarks.

| Language of prompt | Model name | P@1 Arab (N=10946) | P@1 West (N=13589) | P@1 All (N=24535) |
|---|---|---|---|---|
| Arabic | mBERT-base | 11.4 | 12.8 | 12.2 |
| Arabic | arBERT | 26.6 | 19.3 | 22.6 |
| English | mBERT-base | 19.1 | 34.2 | 27.5 |
| English | BERT-base | 24.5 | 29.9 | 27.5 |

(a) DLAMA-v1 (Arab-West)

| Language of prompt | Model name | P@1 Asia (N=13479) | P@1 West (N=13588) | P@1 All (N=27067) |
|---|---|---|---|---|
| Korean | mBERT-base | 15.0 | 22.6 | 18.8 |
| Korean | KyKim | 16.0 | 11.8 | 13.9 |
| English | mBERT-base | 27.1 | 36.2 | 31.7 |
| English | BERT-base | 36.4 | 30.4 | 33.4 |

(b) DLAMA-v1 (Asia-West)

| Language of prompt | Model name | P@1 S.America (N=13071) | P@1 West (N=13586) | P@1 All (N=26657) |
|---|---|---|---|---|
| Spanish | mBERT-base | 22.3 | 30.4 | 26.4 |
| Spanish | BETO | 15.5 | 25.5 | 20.6 |
| English | mBERT-base | 24.1 | 34.7 | 29.5 |
| English | BERT-base | 24.4 | 29.9 | 27.2 |

(c) DLAMA-v1 (South America-West)

Contrasting Spanish BETO to English BERT-base: While similar patterns can be found in Tables E5 and E7, a new subtle bias is that BERT-base predicts Madrid for more than 50% of the South American triples of the P19 (Place of birth) and P20 (Place of death) predicates. This might be attributed to the fact that South American names are hard to distinguish from Spanish ones.

## F Details Of DLAMA

Wikipedia sites: For the Arab, Asian, South American, and Western cultures, representative countries from each region are used as a proxy. Table F8 enumerates the countries representing these cultures and their respective Wikipedia sites.

| Relation | # facts (entropy) | # facts (entropy) | Accuracy | Accuracy | Accuracy | Accuracy |
|---|---|---|---|---|---|---|
| P30 (Continent) | 22 (1.0) | 19 (1.0) | 63.6 | 89.5* | 100.0* | 89.5 |
| P36 (Capital) | 22 (4.5) | 19 (4.2) | 81.8* | 63.2 | 95.5* | 94.7 |
| P37 (Official language) | 22 (0.0) | 19 (2.5) | 100.0* | 89.5 | 100.0* | 100.0* |
| P47 (Shares border with) | 22 (2.5) | 19 (2.7) | 100.0* | 100.0* | 95.5* | 89.5 |
| P190 (Sister city) | 67 (4.9) | 468 (7.3) | 6.0* | 5.6 | 3.0 | 33.1* |
| P530 (Diplomatic relation) | 22 (0.0) | 19 (0.0) | 63.6 | 68.4* | 50.0 | 84.2* |
| P1376 (Capital of) | 24 (4.3) | 26 (4.0) | 87.5 | 88.5* | 100.0* | 92.3 |

Table D3: The accuracy of the GPT3.5-turbo model for some predicates of the DLAMA-v1 (Arab-West) set.

Probing templates: To probe the models' factual knowledge, natural language templates are used to transform the triples into prompts. The template has two fields for the subject [X] and the object [Y] of the triples.
For each triple, the subject fills the subject field while the object field is masked. Models are then fed the prompts and asked to fill in the masked token (i.e., the object). While the templates can affect the predictions of the models, we used the same ones of mLAMA listed in Table F9 to control for the impact that changing the templates might have on the results. In addition to that, we mapped the templates into questions as shown in Table F10 to evaluate the performance of the GPT3.5 model on a subset of DLAMA-v1 (Arab-West). | Relation | Korean prompts | English prompts | | | | | |---------------------------------------|------------------|-------------------|------|------|-------|-------| | # facts (entropy) | P@1 | P@1 | | | | | | Asia | West | Asia | West | Asia | West | | | P17 (Country) | 1000 (2.2) | 1000 (2.8) | 37.8 | 42.1 | 67.1 | 45.3 | | P19 (Place of birth) | 1000 (1.7) | 1000 (2.7) | 63.1 | 55.8 | 24.3 | 11.9 | | P20 (Place of death) | 1000 (2.6) | 1000 (2.8) | 23.0 | 45.8 | 40.4 | 20.7 | | P27 (Country of citizenship) | 1000 (1.5) | 1000 (2.4) | 74.0 | 53.5 | 71.8 | 19.5 | | P30 (Continent) | 13 (0.0) | 19 (1.0) | 76.9 | 31.6 | 100.0 | 84.2 | | P36 (Capital) | 13 (3.7) | 19 (4.2) | 30.8 | 21.1 | 69.2 | 84.2 | | P37 (Official language) | 13 (2.7) | 19 (2.5) | 30.8 | 26.3 | 84.6 | 100.0 | | P47 (Shares border with) | 13 (1.7) | 19 (2.7) | 0.0 | 0.0 | 76.9 | 78.9 | | P103 (Native language) | 1000 (1.6) | 1000 (1.7) | 33.3 | 2.3 | 84.7 | 75.6 | | P106 (Occupation) | 1000 (0.9) | 1000 (1.0) | 17.0 | 9.4 | 1.4 | 15.9 | | P136 (Genre) | 1000 (1.0) | 1000 (2.5) | 0.2 | 0.5 | 0.8 | 6.3 | | P190 (Sister city) | 387 (7.4) | 467 (7.3) | 0.0 | 1.9 | 0.3 | 2.8 | | P264 (Record label) | 1000 (5.3) | 1000 (4.8) | 0.3 | 0.1 | 3.3 | 6.6 | | P364 (Original language of work) | 1000 (0.7) | 1000 (0.3) | 10.5 | 18.5 | 37.7 | 89.1 | | P449 (Original network) | 1000 (4.6) | 1000 (5.0) | 5.1 | 0.2 | 1.1 | 10.7 | | P495 (Country of origin) | 1000 (0.5) | 1000 (1.3) | 29.1 | 19.2 | 79.7 | 4.3 | | P530 (Diplomatic relation) | 13 (0.0) | 19 (0.0) | 7.7 | 5.3 | 46.2 | 68.4 | | P1303 (Instrument) | 1000 (0.5) | 1000 (1.1) | 0.4 | 1.1 | 9.0 | 29.5 | | P1376 (Capital of) | 27 (3.0) | 26 (4.0) | 51.9 | 26.9 | 88.9 | 76.9 | | P1412 (Languages spoken or published) | 1000 (1.3) | 1000 (1.4) | 1.0 | 13.4 | 87.4 | 86.8 | | Aggregated statistics | 13479 (2.1) | 13588 (2.6) | 22.1 | 19.5 | 38.3 | 31.9 | | Relation | Spanish prompts | English prompts | | | | | |---------------------------------------|-------------------|-------------------|------|-----------|-------|-------| | # facts (entropy) | P@1 | P@1 | | | | | | S.America | West | S.America | West | S.America | West | | | P17 (Country) | 1000 (2.8) | 1000 (2.9) | 57.5 | 47.7 | 63.0 | 49.9 | | P19 (Place of birth) | 1000 (2.6) | 1000 (2.5) | 2.0 | 0.9 | 14.6 | 8.3 | | P20 (Place of death) | 1000 (2.8) | 1000 (2.4) | 0.1 | 0.6 | 0.5 | 10.3 | | P27 (Country of citizenship) | 1000 (2.5) | 1000 (2.4) | 19.5 | 4.2 | 28.9 | 14.5 | | P30 (Continent) | 12 (0.0) | 19 (1.0) | 91.7 | 73.7 | 100.0 | 73.7 | | P36 (Capital) | 12 (3.6) | 19 (4.2) | 83.3 | 68.4 | 66.7 | 84.2 | | P37 (Official language) | 12 (1.2) | 19 (2.5) | 75.0 | 84.2 | 75.0 | 100.0 | | P47 (Shares border with) | 12 (1.0) | 19 (2.7) | 83.3 | 68.4 | 91.7 | 78.9 | | P103 (Native language) | 1000 (1.1) | 1000 (1.8) | 34.4 | 78.6 | 58.5 | 74.5 | | P106 (Occupation) | 1000 (2.1) | 1000 (2.5) | 6.8 | 7.8 | 8.3 | 12.0 | | P136 (Genre) | 1000 (2.6) | 1000 (2.4) | 0.3 | 1.7 | 2.4 | 5.5 | | P190 (Sister city) | 
144 (6.1) | 465 (7.4) | 4.9 | 1.7 | 3.5 | 3.0 | | P264 (Record label) | 854 (6.1) | 1000 (6.0) | 0.0 | 0.1 | 1.5 | 5.6 | | P364 (Original language of work) | 1000 (1.1) | 1000 (0.6) | 48.5 | 85.1 | 60.5 | 89.5 | | P449 (Original network) | 1000 (4.6) | 1000 (4.7) | 0.3 | 0.7 | 0.4 | 18.7 | | P495 (Country of origin) | 1000 (2.4) | 1000 (1.8) | 6.3 | 60.0 | 27.3 | 10.3 | | P530 (Diplomatic relation) | 12 (0.0) | 19 (0.0) | 66.7 | 68.4 | 58.3 | 68.4 | | P1303 (Instrument) | 1000 (1.2) | 1000 (1.3) | 6.7 | 11.7 | 17.0 | 26.4 | | P1376 (Capital of) | 13 (3.4) | 26 (4.0) | 84.6 | 73.1 | 84.6 | 76.9 | | P1412 (Languages spoken or published) | 1000 (1.2) | 1000 (1.7) | 20.2 | 51.6 | 62.9 | 89.2 | | Aggregated statistics | 13071 (2.4) | 13586 (2.7) | 16.0 | 26.5 | 26.9 | 31.3 | ![15_image_0.png](15_image_0.png) ![16_image_0.png](16_image_0.png) | Cultures | Country | Wikipedia sites used for articles | |-------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------|----------------------------------------| | Arab Cultures | 22 countries of the Arab League | Arabic (ar), English (en), French (fr) | | Australia | English (en) | | | Canada | English (en), French (fr) | | | New Zealand | English (en), Mori (mi) | | | USA | English (en) | | | Andorra | Catalan (ca), English (en) | | | Italy | Italian (it), English (en) | | | Liechtenstein | German (de), English (en) | | | Monaco | French (fr), English (en) | | | Portugal | Portuguese (pt), English (en) | | | San Marino | Italian (it), English (en) | | | Spain | Spanish (es), English (en) | | | Austria | German (de), English (en) | | | Belgium | German (de), French (fr), Dutch (nl), English (en) | | | France | French (fr), English (en) | | | Germany | German (de), English (en) | | | Ireland | Irish (ga), English (en) | | | Luxembourg | Luxembourgish (lb), French (fr), German (de), English (en) | | | Netherlands | Dutch (nl), English (en) | | | Switzerland | German (de), French (fr), Italian (it), Romansh (rm), English (en) | | | UK | English (en) | | | Western Cultures | China | English (en), Chinese (zh) | | Indonesia | English (en), Indonesian (id) | | | Japan | English (en), Japanese (ja) | | | Malaysia | English (en), Malay (ms) | | | Mongolia | English (en), Chinese (zh) | | | Myanmar | English (en), Burmese (my) | | | North Korea | English (en), Korean (ko) | | | Philippines | English (en) | | | Singapore | English (en), Malay (ms) | | | South Korea | English (en), Korean (ko) | | | Taiwan | English (en), Chinese (zh) | | | Thailand | English (en), Thai (th) | | | Vietnam | English (en), Vietnamese (vi) | | | Asian Cultures | Argentina | English (en), Spanish (es) | | Bolivia | English (en), Spanish (es) | | | Brazil | English (en), Portugese (pt) | | | Chile | English (en), Spanish (es) | | | Colombia | English (en), Spanish (es) | | | Ecuador | English (en), Spanish (es) | | | Guyana | English (en) | | | Paraguay | English (en), Spanish (es) | | | Peru | English (en), Spanish (es) | | | Suriname | Dutch (nl), English (en) | | | Uruguay | English (en), Spanish (es) | | | Venezuela | English (en), Spanish (es) | | | South American Cultures Table F8: The list of Countries and their respective Wikipedia sites used for representing the four different cultures. | | | Table F8: The list of Countries and their respective Wikipedia sites used for representing the four different cultures. 
The English Wikipedia is used for all the countries.

| Predicate | English template |
|---|---|
| P17 (Country) | [X] is located in [Y] . |
| P19 (Place of birth) | [X] was born in [Y] . |
| P20 (Place of death) | [X] died in [Y] . |
| P27 (Country of citizenship) | [X] is [Y] citizen . |
| P30 (Continent) | [X] is located in [Y] . |
| P36 (Capital) | The capital of [X] is [Y] . |
| P37 (Official language) | The official language of [X] is [Y] . |
| P47 (Shares border with) | [X] shares border with [Y] . |
| P103 (Native language) | The native language of [X] is [Y] . |
| P106 (Occupation) | [X] is a [Y] by profession . |
| P136 (Genre) | [X] plays [Y] music . |
| P190 (Sister city) | [X] and [Y] are twin cities . |
| P264 (Record label) | [X] is represented by music label [Y] . |
| P364 (Original language of work) | The original language of [X] is [Y] . |
| P449 (Original network) | [X] was originally aired on [Y] . |
| P495 (Country of origin) | [X] was created in [Y] . |
| P530 (Diplomatic relation) | [X] maintains diplomatic relations with [Y] . |
| P1303 (Instrument) | [X] plays [Y] . |
| P1376 (Capital of) | [X] is the capital of [Y] . |
| P1412 (Languages spoken or published) | [X] used to communicate in [Y] . |

Table F9: mLAMA's templates that are also adapted in DLAMA (English versions shown; each template also has an Arabic, Korean, and Spanish counterpart).
| Predicate | English question |
|---|---|
| P30 (Continent) | Where is "[X]" located in? Reply with a name of a continent only. |
| P36 (Capital) | What is the capital of "[X]"? Reply with the name of the city only. |
| P37 (Official language) | What is the official language of "[X]"? Reply with the language name only. |
| P47 (Shares border with) | What is the country that shares border with "[X]"? Reply with a country name only. |
| P190 (Sister city) | What is the twin city of "[X]"? Reply with the name of the city only. |
| P530 (Diplomatic relation) | What is the country that maintains diplomatic relations with "[X]"? Reply with a country name only. |
| P1376 (Capital of) | What is the country of which the capital is "[X]"? Reply with a country name only. |

Table F10: The mapping of six of mLAMA's templates to questions that can be used to evaluate the GPT3.5-turbo model (English versions shown; each question also has an Arabic counterpart).

## ACL 2023 Responsible NLP Checklist

## A For Every Submission:

✓ A1. Did you describe the limitations of your work?
Limitations section

✓ A2. Did you discuss any potential risks of your work?
Ethical Considerations section

✓ A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.

✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.

## B ✓ **Did you use or create scientific artifacts?**

Sections 2, 3, 4

✓ B1. Did you cite the creators of artifacts you used?
Sections 2, 3, 4

B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.

✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
The benchmarks are dumps of factual triples from Wikidata.

B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable.
Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Sections 4, Appendix - Section E ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Sections 4, Appendix - Section E ## C ✓ **Did You Run Computational Experiments?** Section 5 C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? The codebase is linked to in the Introduction section D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
yang-etal-2023-self
Self-adaptive Context and Modal-interaction Modeling For Multimodal Emotion Recognition
https://aclanthology.org/2023.findings-acl.390
The multimodal emotion recognition in conversation task aims to predict the emotion label for a given utterance with its context and multiple modalities. Existing approaches achieve good results but also suffer from the following two limitations: 1) lacking modeling of diverse dependency ranges, i.e., long, short, and independent context-specific representations and without consideration of the different recognition difficulty for each utterance; 2) consistent treatment of the contribution for various modalities. To address the above challenges, we propose the Self-adaptive Context and Modal-interaction Modeling (SCMM) framework. We first design the context representation module, which consists of three submodules to model multiple contextual representations. Thereafter, we propose the modal-interaction module, including three interaction submodules to make full use of each modality. Finally, we come up with a self-adaptive path selection module to select an appropriate path in each module and integrate the features to obtain the final representation. Extensive experiments under four settings on three multimodal datasets, including IEMOCAP, MELD, and MOSEI, demonstrate that our proposed method outperforms the state-of-the-art approaches.
# Self-Adaptive Context And Modal-Interaction Modeling For Multimodal Emotion Recognition

Haozhe Yang†, Xianqiang Gao†, Jianlong Wu§∗, Tian Gan†, Ning Ding‡, Feijun Jiang‡, Liqiang Nie§

† Shandong University, § Harbin Institute of Technology (Shenzhen), ‡ Alibaba Group

[email protected], [email protected], [email protected], [email protected], {yuji.dn, feijun.jiangfj}@alibaba-inc.com, [email protected]

∗ Jianlong Wu is the corresponding author.

## Abstract

The multimodal emotion recognition in conversation task aims to predict the emotion label for a given utterance with its context and multiple modalities. Existing approaches achieve good results but also suffer from the following two limitations: 1) lacking modeling of diverse dependency ranges, i.e., long, short, and independent context-specific representations, and without consideration of the different recognition difficulty for each utterance; 2) consistent treatment of the contribution for various modalities. To address the above challenges, we propose the Self-adaptive Context and Modal-interaction Modeling (SCMM) framework. We first design the context representation module, which consists of three submodules to model multiple contextual representations. Thereafter, we propose the modal-interaction module, including three interaction submodules to make full use of each modality. Finally, we come up with a self-adaptive path selection module to select an appropriate path in each module and integrate the features to obtain the final representation. Extensive experiments under four settings on three multimodal datasets, including IEMOCAP, MELD, and MOSEI, demonstrate that our proposed method outperforms the state-of-the-art approaches.

![0_image_0.png](0_image_0.png)

## 1 Introduction

Emotion is a crucial part of human conversation. The emotion recognition in conversation task is to analyze each utterance in a conversation and give the corresponding emotion. This task has recently received more and more attention from researchers in both NLP and multimodal fields because of its potential applications, such as human-computer interaction and opinion mining in social media (Chatterjee et al., 2019; Majumder et al., 2020). Traditional emotion recognition in conversation paradigms are based either on unrelated utterances in a dialogue or on a single modality, such as text. However, in many cases, people's emotions are elusive and cannot be delivered well by just one utterance or a single modality. As multimodality is closer to real-world application scenarios, multimodal emotion recognition in conversation has gained increasing research attention in recent years.

To identify emotions more accurately, DialogueRNN (Majumder et al., 2019) first designs an RNN-based model which includes four GRUs to model both intra- and inter-speaker relations. DialogueGCN (Ghosal et al., 2019) then uses a graph neural network to model conversations. Later, MMGCN (Hu et al., 2021) proposes a graph-based method under the additional multimodal setting. Although pioneer research studies have achieved promising progress, they mainly ignore the varying recognition difficulty of each utterance and the multimodal interaction in conversation, which leads to the following two limitations. First, existing methods treat all samples equally without considering their specific characteristic or difficulty for recognition.
For example, they lack detailed modeling of diverse dependency ranges, i.e., long, short, and independent context-specific representations for each utterance. As illustrated in Figure 1, some utterances in a conversation require a long-range dependency, while others only require a short-range dependency or can determine the emotion on their own. Existing methods do not consider respectively modeling these varying dependency ranges. Second, current approaches regard the contribution of each modality equally and simply concatenate the features of different modalities. However, the contribution of each modality varies and it is of great importance to investigate the correlation and interaction among different modalities. In particular, Figure 1 illustrates the different contributions among modalities for different utterances, where the primary modality for recognition varies from case to case. We argue the necessity to explore the modality-specific contributions. Towards the above issues, we propose the Selfadaptive Context and Modal-interaction Modeling (SCMM) method for multimodal emotion recognition. First, to model different ranges of context dependency, we design the context representation module, which consists of three submodules, including global, local, and direct mapping. Second, towards the different contributions of various modalities, we propose the modal-interaction module, which also contains three submodules, including full, partial, and biased interaction, to investigate the correlation among them. Thereafter, faced with multiple outputs from each module, we come up with the self-adaptive path selection strategy to adaptively select an appropriate path to obtain the final representation for each utterance. We also put forward a contrastive learning loss to learn more discriminative representations. Finally, we conduct extensive experiments to validate the effectiveness of our approach. Our main contributions are four-fold: - We propose a novel SCMM framework for multimodal emotion recognition in conversation. A new contextual representation module is designed to model different kinds of relation dependency, including long, short, and independent dependency. - To capture the specific contribution of each modality, we design the modal-interaction module, which consists of three submodules, including full, partial, and biased interactions, to full investigate the correlation among different modalities. - We come up with the self-adaptive path selection strategy to adaptively select an appropriate path based on module outputs. Moreover, we present a cross-modal contrastive learning loss for discriminative feature learning. - Extensive experiments on three multimodal emotion recognition datasets, including IEMOCAP, MELD, and MOSEI, demonstrate the superiority of our method. Specifically, on the IEMOCAP dataset under both two different settings, the absolute improvement over state-of-the-art methods is higher than 4.0%. ## 2 Related Work 2.1 Emotion Recognition In Conversation Recent years have witnessed growing research interest in Emotion Recognition in Conversation (ERC) due to its wide range of potential applications (Sebe et al., 2005; Yalamanchili et al., 2021). With the development of streaming services, many ERC datasets such as IEMOCAP (Busso et al., 2008), MELD (Poria et al., 2019), and MOSEI (Bagher Zadeh et al., 2018) provide a new platform for ERC researchers. 
To tackle the ERC task, DialogueRNN (Majumder et al., 2019) first proposes an RNN-based model which consists of four GRUs, Global, Speaker, Party, and Emotion, to keep track of the individual and global contextual states in the conversation simultaneously. Following that, DialogueGCN (Ghosal et al., 2019) presents a graph-based model that uses a context window to capture local contextual information. Later, DAG-ERC (Shen et al., 2021b) applies a GNN to construct directed acyclic graphs over conversations and an RNN to model local contextual representations. Moreover, COGMEN (Joshi et al., 2022) and MMGCN (Hu et al., 2021) adopt graph-based methods in the same period to model local and global contextual representations, respectively. Previous work in ERC can be roughly divided into unimodal (Yu et al., 2019; Shen et al., 2021a; Wang et al., 2020) and multimodal approaches (Datcu and Rothkrantz, 2015; Wöllmer et al., 2010). The former uses a single textual modality in experiments, whereas the latter considers acoustic, textual, and visual modalities at the same time. We focus on the multimodal setting.

## 2.2 Multimodal Fusion

Multimodal fusion aims to make full use of the information in various modalities to improve the recognition results (Atrey et al., 2010; Bramon et al., 2011). This strategy is simple and effective and has drawn many researchers' attention. For example, in ERC scenarios, DialogueRNN (Majumder et al., 2019) first conducts experiments with a single text modality but also concatenates multimodal features as an additional experiment. Furthermore, COGMEN (Joshi et al., 2022) follows the modality-concatenation setting of DialogueRNN and designs a GNN model based on this setting. Moreover, MMGCN (Hu et al., 2021) and EmoCaps (Li et al., 2022) concatenate the modalities after passing each through a simple LSTM or linear layer. However, the multimodal interactions of the existing efforts are still very simple and inevitably lead to suboptimal performance. For example, COGMEN and MMGCN simply concatenate the features of different modalities. We argue that the contribution of different modalities varies and should be treated separately; it is of vital importance to exploit the modal interaction.

## 3 Method

## 3.1 Problem Formulation

In ERC, a conversation is defined as a sequence of utterances $C = \{u_1, u_2, \ldots, u_n\}$, where $n$ is the number of utterances. Each utterance $u_i$ can be labeled by a discrete value $y_i$, where $y_i \in S$ and $S$ is the set of emotion labels. This task aims to predict the emotion label $y_t$ for a given query utterance $u_t$ based on the dialogue context $u_1$ to $u_n$ and the corresponding speaker identity. Each conversation dataset $D$ contains $N$ dialogues and can be denoted as $D = \{C_j \mid j = 1, \ldots, N\}$. In a general multimodal setting, each utterance $u_i$ consists of three modalities, including audio, text, and video, so $u_i$ can be further expressed as $u_i = \{u_i^a, u_i^t, u_i^v\}$, where $u_i^a, u_i^t, u_i^v$ denote the acoustic, textual, and visual features of the $i$-th utterance with dimensions $d^a, d^t, d^v$, respectively. The whole conversation feature of each modality is denoted as $U^a, U^t, U^v$.

## 3.2 Overview Of The Proposed SCMM

As illustrated in Section 1, existing methods do not consider the specific characteristic of diverse dependency ranges for different samples and simply concatenate multimodal features, leading to undesirable results. Therefore, we propose Self-adaptive Context and Modal-interaction Modeling (SCMM) for multimodal emotion recognition.
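Before detailing the modules, the following minimal sketch makes the input notation of Section 3.1 concrete. It is only an illustration, not the authors' code; the dimensions follow the IEMOCAP row of Table 2, and everything else (names, label count) is a placeholder.

```python
import torch

# One conversation with n utterances; per-modality feature dimensions are placeholders.
n, d_a, d_t, d_v = 24, 100, 512, 100
U_a = torch.randn(n, d_a)      # acoustic features, one row per utterance
U_t = torch.randn(n, d_t)      # textual features
U_v = torch.randn(n, d_v)      # visual features
y = torch.randint(0, 6, (n,))  # one emotion label per utterance (e.g., 6 classes for IEMOCAP-6)

conversation = {"audio": U_a, "text": U_t, "video": U_v, "labels": y}
print({k: tuple(v.shape) for k, v in conversation.items()})
```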
As shown in Figure 2(a), our model first takes the features of each modality as input and obtains the context representation of each modality by passing them through the context representation module. Then, the context-represented features of the modalities fully interact and complement each other in the modal-interaction module, after which we use the self-adaptive path selection module to select appropriate features and obtain the multimodal representation for final classification.

In the context representation module, we develop three submodules to obtain context representations for utterances with different dependency ranges. First, with the help of the attention mechanism, each utterance can attend to the information of other utterances, so we use a Transformer structure to extract the global context representation for a long dependency range. Besides, the GRU structure contains a gate mechanism that can filter out information from long-distance utterances, so we use this unit to obtain the local contextual representation of the utterance for a short dependency range. Finally, for utterances that do not need the assistance of contextual information, we use a linear layer to extract the information. The arrows within each submodule of the context representation module, illustrated on the left of Figure 2(a), indicate how contextual information flows in during the representation process.

For multimodal features, we also consider the difficulty of each utterance for the model to recognize and model it with three modality interaction submodules. For simple utterances, e.g., sentences that contain emotional words, we directly concatenate all modality features together and pass them through a linear layer. For slightly complex utterances, we use diverse combinations and interactions among modalities. For more difficult utterances, we take the text modality as the primary modality and the others as auxiliary modalities for interaction. An additional Transformer with a local attention mask is applied in this phase to leverage more modality information from adjacent utterances.

![3_image_1.png](3_image_1.png)

![3_image_0.png](3_image_0.png)

## 3.3 Context Representation Module

Integrating contextual information into the features of utterances is essential, but the demands to establish dependencies between different utterances vary. These dependencies can be summarized into three basic types: long, short, and independent dependency. Based on these different requirements, we design three submodules to consider each case separately.

Global Context Representation: People may discuss several topics in a conversation, and different topics may have different emotional vibes. The current utterance's emotion may be based on another topic raised a relatively long time ago, which is a long-distance emotional dependency relationship. We design the global context representation submodule to model this scenario. With the commonly used attention mechanism, each utterance can attend to other utterances regardless of the distance, which ensures effectiveness for long-distance context representation. We use the following multi-head self-attention mechanism to capture global contextual information:

$$\mathrm{MultiHead}(Q,K,V)=\mathrm{Concat}(h_{1},h_{2},\ldots,h_{k})W^{O},\tag{1}$$

where $Q$, $K$, and $V$ are feature matrices with $Q, K, V \in \mathbb{R}^{n\times d}$. For the self-attention mechanism, $Q$, $K$, and $V$ are derived from the input features with separate linear layers.
They are equally divided into $k$ heads along the feature dimension; the $i$-th head can be denoted as $Q_i, K_i, V_i \in \mathbb{R}^{n\times \frac{d}{k}}$, $h_i = \mathrm{Attn}(Q_i, K_i, V_i)$, and Attn is calculated by Eq. (2) for each head:

$$\mathrm{Attn}(Q_{i},K_{i},V_{i})=\sigma\Big(\frac{Q_{i}K_{i}^{T}}{\sqrt{k}}\Big)V_{i},\tag{2}$$

where $\sigma$ denotes the softmax operation. For the dialogue features $U^x$ of each modality, where $x \in \{a, t, v\}$ and $U^x \in \mathbb{R}^{n\times d^x}$, the intermediate representation obtained by MultiHead is then passed through the commonly used residual connection, LayerNorm, and feed-forward layers to obtain the final output $U^x_g$ of this submodule, i.e., $U^a_g$, $U^t_g$, and $U^v_g$.

Local Context Representation: In multi-turn conversations, the emotion of a speaker's utterance may be influenced by adjacent utterances, which is a short-distance emotional dependency that occurs at a local scale. To handle this scenario, we design the local context representation submodule. The update mechanism of Gated Recurrent Units (GRU) ensures that each utterance integrates contextual information from closer utterances while forgetting information about farther utterances. Therefore, we use a bidirectional GRU network to obtain the local context representation of each utterance. For any modality input $U^x$, the local context representation feature $U^x_l$ is computed by:

$$U_{l}^{x}=\mathrm{Concat}([\overrightarrow{GRU}(U^{x}),\overleftarrow{GRU}(U^{x})]).\tag{3}$$

We denote the features of each modality obtained by this submodule as $U^a_l$, $U^t_l$, and $U^v_l$, respectively.

Direct Mapping: For the utterances that contain enough information on their own, the process of context representation may introduce additional noise. Therefore, we design the direct mapping submodule to directly extract information for each utterance through a linear layer as follows:

$$U_{d}^{x}=U^{x}W_{d}+b_{d}.\tag{4}$$

In this submodule, the output features of each modality are $U^a_d$, $U^t_d$, and $U^v_d$, respectively.
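A minimal PyTorch sketch of the three context representation submodules follows. It illustrates Eqs. (1)-(4) rather than reproducing the authors' implementation; the hidden size, number of heads, and number of layers are placeholders.

```python
import torch
import torch.nn as nn

class ContextRepresentation(nn.Module):
    """Global (Transformer), local (BiGRU), and direct (linear) context encoders for one modality."""
    def __init__(self, d_in: int, d_model: int = 256, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.proj = nn.Linear(d_in, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=2 * d_model,
                                               batch_first=True)
        self.global_enc = nn.TransformerEncoder(enc_layer, n_layers)  # long-range dependency
        self.local_enc = nn.GRU(d_model, d_model // 2, bidirectional=True,
                                batch_first=True)                      # short-range dependency
        self.direct = nn.Linear(d_model, d_model)                      # no contextual information

    def forward(self, U_x: torch.Tensor):
        # U_x: [batch, n_utterances, d_in] features of one modality for a batch of dialogues.
        h = self.proj(U_x)
        U_g = self.global_enc(h)    # global context representation
        U_l, _ = self.local_enc(h)  # local context representation (forward/backward states concatenated)
        U_d = self.direct(h)        # direct mapping
        return U_g, U_l, U_d

# Example: textual modality of a batch of 2 dialogues with 24 utterances each.
U_t = torch.randn(2, 24, 512)
U_g, U_l, U_d = ContextRepresentation(d_in=512)(U_t)
print(U_g.shape, U_l.shape, U_d.shape)  # each torch.Size([2, 24, 256])
```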
## 3.4 Modal-Interaction Module

Given the multimodal features $U^a$, $U^t$, and $U^v$, a multimodal interaction module takes these three features as input and outputs a multimodal feature $U^{atv}$. By effectively exploiting the potential complementarity of the information among these modalities, the multimodal features become more discriminative, allowing the model to perform better than unimodal models. Considering the different difficulties among utterances, we design different interaction submodules to handle simple, more complex, and difficult scenarios, respectively.

Full Interaction: For simple utterances and ideal cases where the three modalities $U^a$, $U^t$, and $U^v$ complement each other and each modality contains a relatively equal amount of information, we design the full interaction submodule, which concatenates the three modality features directly and uses a linear layer to extract the multimodal feature. We denote the result as $U^{atv}_f$ and formulate it as follows:

$$U_{f}^{atv}=\mathrm{Concat}(U^{a},U^{t},U^{v})W_{f}+b_{f}.\tag{5}$$

Partial Interaction: For slightly complex utterances, the contribution of different modalities varies due to the lack of key information or the mixing of noise. In this regard, we design the partial interaction submodule to alleviate this problem through diversified modality interactions. Specifically, we combine $U^a$, $U^v$, and $U^t$ in pairs to obtain the $U^{at}$, $U^{vt}$, and $U^{av}$ features. For example,

$$U^{at}=\mathrm{Concat}(U^{a},U^{t})W_{at}+b_{at}.\tag{6}$$

Finally, we concatenate all paired features and reduce the dimension with a linear layer. We denote this feature as $U^{atv}_p$.

Biased Interaction: For more difficult utterances, we design the biased interaction submodule. In previous work, many experiments have shown that textual features are critical to the performance of the final model in predicting emotions, which indicates that the textual modality contains the primary information in most cases. Therefore, in this interaction process, we first take the text as the primary modality and the others as auxiliary modalities to alleviate the information loss of text. Second, we use a small Transformer with a local attention mask to further leverage modality information from adjacent utterances. Specifically, the biased interaction submodule first concatenates $U^t$ with $U^a$ and $U^v$ respectively to obtain $U^{at}_b$ and $U^{vt}_b$. These two features are concatenated after passing through their respective linear layers. Later, a Transformer with a local attention mask is applied to incorporate multimodal information from locally scaled multimodal features. Taking $Q$ and $K$ from the self-attention mechanism, the attention mask is a binary matrix $M \in \mathbb{R}^{n\times n}$, where $M_{i,j} = 1$ means $Q_i$ can attend to $K_j$ during the attention process and $M_{i,j} = 0$ means it cannot. The operation of masked attention is formulated as follows:

$$\mathrm{Attn}(Q,K,V,M)=\left[\frac{M\odot\exp\left(QK^{T}/\sqrt{d_{k}}\right)}{\sum_{i}M\odot\exp\left(QK_{i}^{T}/\sqrt{d_{k}}\right)}\right]V,\tag{7}$$

where $\odot$ represents element-wise multiplication. For the local attention mask of this part, we define the parameters $w_p$ and $w_f$ for the length of the dependency context and the binary vector $M_i \in \mathbb{R}^n$, with the value of the $j$-th element in $M_i$ being:

$$M_{i,j}={\begin{cases}0,&j-i>w_{f}\ {\text{or}}\ i-j>w_{p},\\1,&{\text{otherwise,}}\end{cases}}\tag{8}$$

so that each utterance only attends to its $w_p$ preceding and $w_f$ following utterances. Eventually, we obtain the local attention mask $M = [M_1, M_2, ..., M_n]$, where $M \in \mathbb{R}^{n\times n}$. The final multimodal feature $U^{atv}_b$ is obtained after the Transformer with the local attention mask $M$.

## 3.5 Self-Adaptive Path Selection

To best take advantage of the outputs of the submodules obtained in Sections 3.3 and 3.4, we design the self-adaptive path selection module to adaptively select the most appropriate route and integrate the features by groups for the next stage. The path selection process is done in a soft way, like an attention mechanism. As illustrated in Figure 2(b), for given features $X_1, X_2, X_3$ with the same dimension, we first calculate the similarity between these features and a trainable parameter $Q^p$ to obtain the score of each feature. Then, the normalized score is used as the weight of each feature. We use the softmax operation as the normalization function. Finally, we take the weighted average of these features as the final output, which can be formulated as follows:

$$\mathrm{Select}(X_{1},X_{2},X_{3})=\sigma\Big(\frac{Q^{p}[X_{1},X_{2},X_{3}]^{T}}{\sqrt{d^{x}}}\Big)[X_{1},X_{2},X_{3}]^{T},\tag{9}$$

where $[\cdot,\cdot]$ is the feature concatenation operation. In the context representation module, we denote the output of each modality's context representation obtained through self-adaptive path selection as $U^a_c$, $U^t_c$, $U^v_c$. In the modal-interaction module, we obtain the feature $U^{atv}_{all}$ as the final multimodal feature by $\mathrm{Select}(U^{atv}_f, U^{atv}_p, U^{atv}_b)$.

## 3.6 Cross-Modal Contrastive Learning

We obtain the final prediction by passing $U^{atv}_{all}$ through a linear layer, and the final emotion label $\hat{Y}$ of the input dialogue $U$ can be calculated by softmax (denoted by $\sigma$) and arg max operations:

$$P=\sigma(U_{all}^{atv}W_{2}+b_{2}),\qquad\hat{Y}=\arg\max(P).\tag{10}$$

We first define the following classification loss:

$$\mathcal{L}_{cls}=-\frac{1}{\sum_{s=1}^{N}c(s)}\sum_{i=1}^{N}\sum_{j=1}^{c(i)}\log p_{i,j}\left[y_{i,j}\right],\tag{11}$$

where $N$ is the number of dialogues, $c(i)$ is the number of utterances in the $i$-th dialogue, $p_{i,j}$ is the probability distribution of utterance $j$ in the $i$-th dialogue, and $y_{i,j}$ is the expected class label of utterance $j$ in the $i$-th dialogue.

In order to improve the discriminability of the multimodal features, we introduce a supervised cross-modal contrastive loss in the modal-interaction module. In this stage, all dialogues within the batch are flattened into utterance feature sequences. For any two feature sets of the same dimension $X_1, X_2 \in \mathbb{R}^{C\times d}$, where $C$ denotes the number of utterances in the current batch, the supervised cross-modal contrastive loss is calculated as:

$$l_{i}=-\frac{1}{|M_{i}|}\log\frac{\sum_{j,y_{j}=y_{i}}^{C}\exp(\mathrm{sim}(x_{1,i},x_{2,j})/\tau)}{\sum_{k,y_{k}\neq y_{i}}^{C}\exp(\mathrm{sim}(x_{1,i},x_{2,k})/\tau)},\tag{12}$$

where $|M_i|$ denotes the number of samples that have the same emotion label as the $i$-th sample, $\tau$ denotes the temperature defined in the original contrastive loss, and $\mathrm{sim}(x_{1,i},x_{2,j})$ is used to calculate the cosine similarity of the two vectors.
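A minimal sketch of this supervised cross-modal contrastive loss (the per-sample term above, averaged over the batch) is given below. It is illustrative only and not the authors' implementation; batching, handling of degenerate cases, and numerical details are simplified.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(X1, X2, labels, tau: float = 0.07):
    """Supervised cross-modal contrastive loss between two feature sets of shape [C, d]."""
    X1 = F.normalize(X1, dim=-1)
    X2 = F.normalize(X2, dim=-1)
    sim = X1 @ X2.T / tau                                   # cosine similarities / temperature, [C, C]
    same = labels.unsqueeze(0) == labels.unsqueeze(1)       # [C, C] boolean, same-label pairs
    pos = (sim.exp() * same.float()).sum(dim=1)             # numerator: same-label pairs
    neg = (sim.exp() * (~same).float()).sum(dim=1)          # denominator: different-label pairs
    n_pos = same.sum(dim=1).clamp(min=1)                    # |M_i|, clamped to avoid division by zero
    l = -(1.0 / n_pos) * torch.log(pos / neg.clamp(min=1e-12))
    return l.mean()                                         # average over the C utterances in the batch

# Toy usage: text vs. audio features of C = 8 utterances with 3 emotion classes.
C, d = 8, 16
loss = cross_modal_contrastive_loss(torch.randn(C, d), torch.randn(C, d),
                                    torch.randint(0, 3, (C,)))
print(float(loss))
```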
## 3.6 Cross-Modal Contrastive Learning We obtain the final prediction by passing U atv all through a linear layer, and the final emotion label Yˆ of the input dialogue U can be calculated by softmax (denoted by σ) and arg max operations: $$P=\sigma(U_{a l l}^{a t v}W_{2}+b_{2}),\eqno(10)$$ $$\hat{Y}=\arg\max(P).$$ We first define the following classification loss: $$-\frac{1}{\sum_{s=1}^{N}c(s)}\sum_{i=1}^{N}\sum_{j=1}^{c(i)}\log p_{i,j}\left[y_{i,j}\right],\,\,\,(11)$$ $${\mathcal{L}}_{c l s}=$$ Lcls = − where N is the number of dialogues, c(i) is the number of utterances in the i-th dialogue, pi,j is the probability distribution of utterance j in the i-th dialogue, and yi,j is the expected class label of utterance j in the i-th dialogue. In order to improve the discriminability of multimodal features we introduce supervised crossmodal contrastive loss in the modal-interaction module. In this stage, all dialogues within the batch are flattened into utterance feature sequences. For any two features of the same dimension X1, X2 ∈ R C×d, where C denoting the number of utterances in the current batch, the supervised cross-modal contrastive loss is calculated as: $$l_{i}=-\frac{1}{|M_{i}|}\text{log}\frac{\sum_{j,y_{j}=y_{i}}^{C}\exp(\text{sim}(x_{1,i},x_{2,j}/)\tau)}{\sum_{k,y_{k}\neq y_{i}}^{C}\exp(\text{sim}(x_{1,i},x_{2,k})/\tau)},\tag{12}$$ where $|M|$ denotes the number of samples which where |Mi| denotes the number of samples which have the same emotion label as the i-th sample, τ denotes the temperature defined in the original contrastive loss, and sim(x1,i, x2,i) is used to calculate | Dataset | Number of dialogues(utterances) train valid test | | | |-----------|----------------------------------------------------|-----------|-----------| | IMOECAP-4 | 120(3600) | 31(943) | | | IEMOCAP-6 | 120(5810) | 31(1623) | | | MELD | 1152(11098) | 280(2610) | | | MOSEI | 2247(16261) | 300(1868) | 675(4640) | ![5_image_0.png](5_image_0.png) the cosine similarity of the two vectors. The crossmodal contrastive loss Lcc between two feature set X1, X2 is calculated by: $$L_{c c}(X_{1},X_{2})={\frac{\sum_{i}^{C}l_{i}}{C}}.\qquad\qquad(13)$$ We set text as the primary modality and assign the cross-modal contrastive loss to these three interaction submodules to get the following six parts: $$\begin{array}{c}{{L_{c c}=}L_{c c}(U^{a},U^{t})+L_{c c}(U^{v},U^{t})+}}\\ {{L_{c c}(U^{a v},U^{v t})+L_{c c}(U^{a v},U^{a t})+}}\\ {{L_{c c}(U_{b}^{v t},U_{b}^{a t})+L_{c c}(U_{b}^{a t},U_{b}^{v t}).}}\end{array}\tag{14}$$ Then we get the overall training objective: $$\operatorname*{min}_{\theta}{\mathcal{L}}={\mathcal{L}}_{c l s}+\beta{\mathcal{L}}_{c c},\qquad\qquad(15)$$ where β is a constant to control the loss weight. ## 4 Experiments And Results 4.1 Experimental Settings Dataset We evaluated our method on three benchmark datasets, including IEMOCAP (Busso et al., 2008), MELD (Poria et al., 2019), and MOSEI (Bagher Zadeh et al., 2018), all of which are multimodal datasets with aligned acoustic, textual, and visual information for each utterance in a conversation. In literature, two IEMOCAP settings are used, one with four emotions (IEMOCAP-4) and one with six emotions (IEMOCAP-6), so there are four benchmarks to be compared. For the train/validation/test splits of the dataset, following previous work, we split IEMOCAP and MOSEI according to the setting in (Joshi et al., 2022), and MELD according to the setting in (Hu et al., 2021). Statistics for these three datasets are summarized in Table 1. 
## 4 Experiments And Results

## 4.1 Experimental Settings

## Dataset

We evaluated our method on three benchmark datasets, including IEMOCAP (Busso et al., 2008), MELD (Poria et al., 2019), and MOSEI (Bagher Zadeh et al., 2018), all of which are multimodal datasets with aligned acoustic, textual, and visual information for each utterance in a conversation. In the literature, two IEMOCAP settings are used, one with four emotions (IEMOCAP-4) and one with six emotions (IEMOCAP-6), so there are four benchmarks to be compared. For the train/validation/test splits of the dataset, following previous work, we split IEMOCAP and MOSEI according to the setting in (Joshi et al., 2022), and MELD according to the setting in (Hu et al., 2021). Statistics for these three datasets are summarized in Table 1. For more information, please refer to Appendix A.1.

| Dataset | train | valid | test |
|-----------|--------------|------------|------------|
| IEMOCAP-4 | 120 (3600) | 31 (943) | |
| IEMOCAP-6 | 120 (5810) | 31 (1623) | |
| MELD | 1152 (11098) | 280 (2610) | |
| MOSEI | 2247 (16261) | 300 (1868) | 675 (4640) |

Table 1: Number of dialogues (utterances) in each split.

| Modality | IEMOCAP | MELD | MOSEI |
|----------|---------|------|-------|
| Acoustic | 100 | 300 | 300 |
| Visual | 100 | 600 | 74 |
| Textual | 512 | 342 | 35 |

Table 2: Feature dimensions of each dataset.

## Feature Extraction

We extracted uniform features to ensure a fair comparison. For IEMOCAP, audio and video features are obtained in the same way as COGMEN (Joshi et al., 2022), and text features are re-extracted by sBERT. For MELD, audio features (size 300) are extracted by the OpenSmile toolkit with the IS10 configuration (Schuller et al., 2011), video features (size 600) are extracted by DenseNet (Huang et al., 2017) in the same way as MMGCN (Hu et al., 2021), and text features are extracted by sBERT. For MOSEI, audio features (size 640) are extracted using librosa with 640 filter banks, video features (size 35) are extracted by Facets, and text features are extracted by sBERT. We present the dimensions of the final extracted features for each dataset in Table 2.

## Compared Baselines

We compared both unimodal and multimodal methods proposed in the emotion recognition field to verify the effectiveness of our model. For unimodal methods, our model was compared with three baselines, including DialogueRNN (Majumder et al., 2019), DialogueGCN (Ghosal et al., 2019) and DAG-ERC (Shen et al., 2021b). For multimodal baselines, our model was compared with MMGCN (Hu et al., 2021), COGMEN (Joshi et al., 2022) and EmoCaps (Li et al., 2022). We reimplemented all these methods under the same experimental settings for fair comparison. The BERT structure in the transformers (Wolf et al., 2020) library is adopted as the Transformer structure used in SCMM, and scipy (Virtanen et al., 2020) is used to calculate the F1-score value. For more information, please refer to Appendix A.2.

## Implementation Details

Our architecture trained on the IEMOCAP dataset has 304 million parameters and takes around 3 minutes to train for 55 epochs on one 2080Ti GPU. We fixed the random seed for all experiments to ensure the reproducibility of our experiments. We trained our network using the Adam optimizer with a learning rate of 1e-4. The lengths of the dependency context $w_f$ and $w_p$ are set to 5 for IEMOCAP and 2 for MELD and MOSEI. In the biased interaction submodule, the numbers of Transformer layers used for IEMOCAP, MELD, and MOSEI are 6, 2, and 2, respectively. β is set to 0.2 for MOSEI, and 1 for the other datasets. The above optimal parameters are determined by a grid-search strategy. Following previous work (Hazarika et al., 2018; Majumder et al., 2019; Ghosal et al., 2019), we used the weighted average F1-score for evaluation.

## 4.2 Main Results

Table 3 shows the results of our model compared with other models on several multimodal emotion conversation datasets. We have the following observations. On the one hand, our method achieves significant improvement over existing state-of-the-art methods. Specifically, our results are 6.84%, 4.44%, 2.36%, and 1.25% absolutely higher than the second best result on IEMOCAP-6, IEMOCAP-4, MELD, and MOSEI, respectively, demonstrating the superiority of our method SCMM. On the other hand, by comparing the results of the last two rows, we can see that the cross-modal contrastive learning loss can bring consistent improvement on all these datasets, where the average improvement is about 0.8%. The reason is that the proposed contrastive loss can benefit the learning of discriminative features and make the margin between different classes more clear.
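As a rough illustration of the feature-extraction setup described in Section 4.1, the snippet below sketches utterance-level audio features from mel filter banks and sentence-level text features from a sentence-BERT encoder. The sampling rate, the mean pooling, and the specific sBERT checkpoint are our own assumptions; the paper only states that librosa with 640 filter banks and sBERT are used.

```python
import librosa
from sentence_transformers import SentenceTransformer

def extract_audio_feature(wav_path, n_mels=640, sr=16000):
    # Log mel filter-bank energies, mean-pooled over time, as a fixed-size
    # utterance-level audio vector of length n_mels.
    y, sr = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)
    return log_mel.mean(axis=1)                       # (n_mels,)

def extract_text_features(utterances, model_name="all-MiniLM-L6-v2"):
    # Sentence-level text embeddings from a sentence-BERT model.
    encoder = SentenceTransformer(model_name)
    return encoder.encode(utterances)                 # (num_utterances, dim)
```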
## 4.3 Ablation Study And Analysis

## Effect Of Submodules

We compared the effects of different context representation submodules and modal-interaction submodules. We divided these submodules into three parts based on their complexity, including the direct mapping with the full interaction, the local context representation with the partial interaction, and the global context representation with the biased interaction. We then tested the effectiveness of these three parts. The results are shown in Table 4, where the absence of different modules (w/o $U^{x}_{d}$ and $U^{atv}_{f}$, w/o $U^{x}_{l}$ and $U^{atv}_{p}$, and w/o $U^{x}_{g}$ and $U^{atv}_{b}$) exhibits some performance loss on these datasets. Among them, the absence of the global context representation with the biased interaction submodule causes the largest performance loss on all compared datasets. Moreover, we can see that by removing one or two modalities except $U^{v}$, especially $U^{t}$, the performance will decrease significantly. Above results can also verify that the text is the primary modality for this task.

| Models | Happy | Sad | Neutral | Angry | Excited | Frustrated | Average | IEMOCAP-4 | MELD | MOSEI |
|---|---|---|---|---|---|---|---|---|---|---|
| DialogueRNN | 36.43 | 67.34 | 49.62 | 59.55 | 63.93 | 49.35 | 54.74 | 74.11 | 52.44 | 48.40 |
| DialogueGCN | **56.85** | 72.17 | 48.47 | 54.17 | 74.16 | 50.86 | 58.68 | 75.15 | 57.08 | 48.40 |
| DAG-ERC | 50.17 | 73.25 | 56.55 | 56.41 | 66.28 | 58.27 | 60.69 | 73.38 | 51.01 | 48.45 |
| MMGCN | 33.18 | 66.96 | 56.03 | 63.90 | 68.14 | 58.51 | 59.29 | 74.81 | 56.30 | 59.92 |
| COGMEN | 52.31 | 73.39 | 53.55 | 58.97 | 71.48 | 53.85 | 60.38 | 77.62 | 55.43 | 50.50 |
| EmoCaps | 22.22 | 67.27 | 46.27 | 56.99 | 67.86 | 56.37 | 54.78 | 75.14 | 55.92 | 48.40 |
| SCMM (w/o $L_{cc}$) | 53.23 | **79.42** | **63.63** | **66.84** | 75.17 | 60.11 | 66.73 | 80.82 | 58.79 | 60.70 |
| SCMM (ours) | 45.37 | 78.76 | 63.54 | 66.05 | **76.70** | **66.18** | **67.53** | **82.06** | **59.44** | **61.17** |

Table 3: F1-score comparison on IEMOCAP, MELD, MOSEI datasets (the per-emotion columns and their average are for IEMOCAP-6). $L_{cc}$ is the cross-modal contrastive loss.

## Effect Of Self-Adaptive Path Selection

The self-adaptive path selection is designed for the integration of features in different modules. To demonstrate that this module plays a key role in our model, we replaced it with an alternative implementation, where the input features are directly concatenated and then reduced in dimension by a linear layer, which we call the linear selection module. Table 5 shows that replacing our self-adaptive path selection module with the linear selection module leads to performance losses on all datasets, suggesting that the self-adaptive path selection can yield better features. We also illustrated the weights of each path from several samples to gain deep insights. As shown in Figure 3, in the context representation module, the global context representation submodule is the most important one. In the modal-interaction module, all the cases show that the biased and partial interaction submodules are the most important, which implies that the modal-interaction requires more diverse interaction strategies rather than directly concatenating multimodal features.
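For reference, the linear selection baseline used in this ablation can be sketched as the following small module; the hidden-size handling is our own assumption.

```python
import torch
import torch.nn as nn

class LinearSelection(nn.Module):
    """Ablation baseline: concatenate the three candidate features and
    project back to the model dimension with a single linear layer,
    instead of the self-adaptive path selection of Eq. (9)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(3 * dim, dim)

    def forward(self, x1, x2, x3):
        return self.proj(torch.cat([x1, x2, x3], dim=-1))
```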
![7_image_0.png](7_image_0.png)

| Methods | IEMOCAP-6 | IEMOCAP-4 | MOSEI |
|---|---|---|---|
| w/o $U^{t}$ | 48.90 | 69.48 | 54.47 |
| w/o $U^{a}$ | 64.29 | 77.64 | 61.09 |
| w/o $U^{v}$ | 66.08 | 80.39 | 60.09 |
| w/o $U^{a}$ and $U^{t}$ | 39.49 | 48.91 | 53.63 |
| w/o $U^{v}$ and $U^{t}$ | 50.84 | 66.37 | 48.48 |
| w/o $U^{a}$ and $U^{v}$ | 64.83 | 77.28 | 59.82 |
| w/o $U^{x}_{d}$ and $U^{atv}_{f}$ | 64.76 | 80.28 | 60.83 |
| w/o $U^{x}_{l}$ and $U^{atv}_{p}$ | 66.14 | 79.75 | 59.93 |
| w/o $U^{x}_{g}$ and $U^{atv}_{b}$ | 55.49 | 72.72 | 59.86 |
| Ours (w/o $L_{cc}$) | 66.73 | 80.82 | 60.70 |
| Ours | 67.53 | 82.06 | 61.17 |

Table 4: Ablation study of our method.

| Methods | IEMOCAP-6 | IEMOCAP-4 | MOSEI |
|---|---|---|---|
| linear selection | 65.66 | 81.39 | 61.14 |
| self-adaptive path selection | 67.53 | 82.06 | 61.17 |

Table 5: Comparison of experimental results using the self-adaptive path selection module and the linear selection module.

## Influence Of Feature Extractor

For the results in Table 3, we reimplemented all compared baseline methods and used the same extracted features to ensure a fair comparison, which may result in different results than those reported in the original papers. To demonstrate the generalization ability of our method, we also conducted additional experiments on IEMOCAP-6 based on the features extracted by COGMEN (Joshi et al., 2022) and EmoCaps (Li et al., 2022). The detailed difference between features can be found in the Appendix. The results are shown in Table 6. We can see that our SCMM still achieves much better performance than them under their settings, validating our superiority and robustness.

| Methods | F1-score | Methods | F1-score |
|----------------|------------|----------------|------------|
| COGMEN | 62.28 | EmoCaps | 71.16 |
| SCMM (w/o $L_{cc}$) | 68.50 | SCMM (w/o $L_{cc}$) | 73.70 |
| SCMM | 69.08 | SCMM | 75.18 |

Table 6: Results comparison under other methods' feature extraction settings on IEMOCAP-6.

![8_image_0.png](8_image_0.png)

## Parameter Sensitivity Analysis

According to the training objective in Eq. (15), there is mainly one parameter, β, which controls the contribution of the cross-modal contrastive learning loss. In experiments, we find the optimal value for β by grid searching. We present the results of our method on IEMOCAP-4 with respect to different β in Figure 4. We can observe that our method is relatively stable when β varies in the range of [0.8, 1.2], which shows that SCMM is insensitive to this parameter in a certain range.

## 5 Conclusion

In this paper, for the task of multimodal emotion recognition, we propose the self-adaptive contextual and modal-interaction modeling method. We first come up with the context representation module with global, local modeling and direct mapping to solve the issue of long, short, and independent dependency.
Then the modal-interaction consists of full, partial, and bias interactions to fully investigate the correlation and potential complementarity among different modalities. Then we propose the self-adaptive path selection module for better combination and cross-modal contrastive learning loss for discriminative feature learning. Extensive experiments on three datasets under four settings have demonstrated the effectiveness and superiority of our proposed method. ## 6 Limitations Our proposed method is an offline system in which the input is a dialogue containing all utterances rather than a single utterance input in chronological order. An online system for emotion recognition can be applied in real-time conference systems or human-computer interaction, so the online system has potential value for future research. Our method can be built into online systems by creating buffer systems such as history windows. However, all the baseline methods in the past are offline systems, such as COGMEN, DialogueRNN, etc. In addition, the form of datasets also leads us to construct an offline system for training and testing. On the other hand, the offline system also has application scenarios such as analyzing emotions of posted videos, opinion mining in social media, etc. Therefore, our method only builds an offline system under the offline experimental setting that can be compared and evaluated. Besides, the input of our method is feature-based. The original text, audio, and video files will first pass through feature extractors to obtain multimodal features, which may cause information loss and hurt performance. We focus on feature-based training methods because training based on the original files costs a lot. For example, training a video encoder generally requires several V100 GPUs and days of training time. Therefore, we, including the baseline methods we compare, adapt the feature-based training methods. When the cost permits, training based on source files is worth exploring in future work. With feature-based training methods, different baseline methods use feature extractors to obtain features, leading to a lack of fairness in method comparison. In this regard, we reimplemented all open-source methods and compared them using a unified feature file to ensure the fairness of the experimental results. At the same time, we also conducted evaluations with different signature files to verify the generalization of the method. ## Acknowledgements This work is supported in part by the National Natural Science Foundation of China (grant nos. 62006140, 62176137, and 62236003), in part by the Shandong Provincial Natural Science Foundation (grant no. ZR2020QF106), and in part by the CCF-Alibaba Innovative Research Funds for Young Scholars. ## References Pradeep K Atrey, M Anwar Hossain, Abdulmotaleb El Saddik, and Mohan S Kankanhalli. 2010. Multimodal fusion for multimedia analysis: a survey. Multimedia Systems, 16:345–379. AmirAli Bagher Zadeh, Paul Pu Liang, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018. Multimodal language analysis in the wild: CMUMOSEI dataset and interpretable dynamic fusion graph. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 2236–2246. Roger Bramon, Imma Boada, Anton Bardera, Joaquim Rodriguez, Miquel Feixas, Josep Puig, and Mateu Sbert. 2011. Multimodal data fusion based on mutual information. *IEEE Transactions on Visualization and* Computer Graphics, 18:1574–1587. 
Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N Chang, Sungbok Lee, and Shrikanth S Narayanan. 2008. Iemocap: Interactive emotional dyadic motion capture database. *Language Resources and Evaluation*, 42:335–359. Ankush Chatterjee, Kedhar Nath Narahari, Meghana Joshi, and Puneet Agrawal. 2019. Semeval-2019 task 3: Emocontext contextual emotion detection in text. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 39–48. Dragos Datcu and Leon JM Rothkrantz. 2015. Semantic audiovisual data fusion for automatic emotion recognition. pages 411–435. Deepanway Ghosal, Navonil Majumder, Soujanya Poria, Niyati Chhaya, and Alexander Gelbukh. 2019. DialogueGCN: A graph convolutional neural network for emotion recognition in conversation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing, pages 154–164. Devamanyu Hazarika, Soujanya Poria, Rada Mihalcea, Erik Cambria, and Roger Zimmermann. 2018. ICON: Interactive conversational memory network for multimodal emotion detection. In *Proceedings of the* Conference on Empirical Methods in Natural Language Processing, pages 2594–2604. Jingwen Hu, Yuchen Liu, Jinming Zhao, and Qin Jin. 2021. MMGCN: Multimodal fusion via deep graph convolution network for emotion recognition in conversation. In *Proceedings of the Annual Meeting of* the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing, pages 5666–5675. Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. 2017. Densely connected convolutional networks. In *Proceedings of the IEEE* Conference on Computer Vision and Pattern Recognition, pages 4700–4708. Abhinav Joshi, Ashwani Bhat, Ayush Jain, Atin Singh, and Ashutosh Modi. 2022. COGMEN: COntextualized GNN based multimodal emotion recognitioN. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4148–4164. Zaijing Li, Fengxiao Tang, Ming Zhao, and Yusen Zhu. 2022. EmoCaps: Emotion capsule based model for conversational emotion recognition. In *Proceedings* of the Findings of the Association for Computational Linguistics, pages 1610–1618. Navonil Majumder, Pengfei Hong, Shanshan Peng, Jiankun Lu, Deepanway Ghosal, Alexander Gelbukh, Rada Mihalcea, and Soujanya Poria. 2020. MIME: MIMicking emotions for empathetic response generation. In *Proceedings of the Conference on Empirical Methods in Natural Language Processing*, pages 8968–8979. Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Rada Mihalcea, Alexander Gelbukh, and Erik Cambria. 2019. Dialoguernn: An attentive rnn for emotion detection in conversations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6818–6825. Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2019. MELD: A multimodal multi-party dataset for emotion recognition in conversations. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 527–536. Björn Schuller, Anton Batliner, Stefan Steidl, and Dino Seppi. 2011. Recognising realistic emotions and affect in speech: State of the art and lessons learnt from the first challenge. *Speech Communication*, 53:1062–1087. Nicu Sebe, Ira Cohen, Theo Gevers, and Thomas S Huang. 2005. 
Multimodal approaches for emotion recognition: a survey. volume 5670, pages 56–67. Weizhou Shen, Junqing Chen, Xiaojun Quan, and Zhixian Xie. 2021a. Dialogxl: All-in-one xlnet for multiparty conversation emotion recognition. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 13789–13797. Weizhou Shen, Siyue Wu, Yunyi Yang, and Xiaojun Quan. 2021b. Directed acyclic graph network for conversational emotion recognition. In *Proceedings* of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing, pages 1551–1560. Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, C J Carey, ˙Ilhan Polat, Yu Feng, Eric W. Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R. Harris, Anne M. Archibald, Antônio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt, and SciPy 1.0 Contributors. 2020. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. *Nature Methods*, pages 261–272. Yan Wang, Jiayu Zhang, Jun Ma, Shaojun Wang, and Jing Xiao. 2020. Contextualized emotion recognition in conversation as sequence tagging. In Proceedings of the Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 186–195. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the Conference on Empirical Methods in* Natural Language Processing: System Demonstrations, pages 38–45, Online. Martin Wöllmer, Angeliki Metallinou, Florian Eyben, Björn Schuller, and Shrikanth S. Narayanan. 2010. Context-sensitive multimodal emotion recognition from speech and facial expression using bidirectional lstm modeling. In *Proceedings of the Annual Conference of the International Speech Communication* Association. Bhanusree Yalamanchili, Keerthana Dungala, Keerthi Mandapati, Mahitha Pillodi, and Sumasree Reddy Vanga. 2021. Survey on multimodal emotion recognition systems. In *Machine Learning Technologies* and Applications, pages 319–326. Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. 2019. Deep modular co-attention networks for visual question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6281–6290. ## A Appendix A.1 Datasets And Feature Extraction We summarized the statistics for these three datasets in Table 1. All used datasets are commonly used for emotion recognition in the English language. The ids of the data are anonymized by sequential ids or random hash values. IEMOCAP: IEMOCAP is a multimodal dataset that contains approximately 12 hours of videos for human emotion recognition analysis. Each video consists of a single dyadic dialogue, and every utterance in a conversation is annotated with an emotion label from six categories: happy, sad, neutral, angry, excited, and frustrated. 
IEMOCAP has two settings, one for four emotion recognition tasks (angry, sad, happy, neutral) and one for six emotion recognition tasks (happy, sad, neutral, angry, excited, and frustrated). We conducted experiments on both of these settings. The IEMOCAP dataset uses the license written by itself, and we have obtained the authorization of The Signal Analysis and Interpretation Laboratory required for accessing and using the IEMOCAP dataset. MELD: MELD is a large-scale multimodal and multi-speaker emotional dialog dataset collected from the Friends TV series. There are more than 1.4k dialogues in the dataset, and the dialogues are participated by multiple speakers instead of only two. Each utterance in a conversation is annotated with an emotion label from seven categories: anger, disgust, sadness, joy, neutral, surprise, and fear. It uses the GNU (General Public License) v3.0 license. MOSEI: MOSEI is an emotional recognition dataset made up of 23k sentence utterance video clips taken from YouTube. Specifically, unlike multi-speaker datasets such as IEMOCAP and MELD, MOSEI has only one speaker in a video clip. Each utterance is annotated with an emotion label from six categories: happiness, sadness, disgust, fear, surprise, and anger. CMU-MOSEI also uses a license written by itself, which declaims that the dataset is free for anyone. We extracted uniform features to ensure a fair comparison. For IEMOCAP, audio and video features are obtained in the same way as COGMEN (Joshi et al., 2022), and text features are re-extracted by sBERT. For MELD, audio features (size 300) are extracted by OpenSmile toolkit with IS10 configuration (Schuller et al., 2011), video features (size 600) are extracted by DenseNet (Huang et al., 2017) in the same way as MMGCN (Hu et al., 2021), text features are extracted by sBERT. For MOSEI, audio features (size 640) are extracted using librosa 2 with 640 filter banks, video features (size 35) are extracted by Facets, and text features are extracted by sBERT. The distribution of the data used in our evaluation may have some bias. For example, IEMOCAP comes from the performance of some actors, and MELD is obtained from the TV series Friends. In real-world scenarios, conversations may be more complex, such as the position of the camera may be more variable, the types of emotions may be more, the modality of the collected data may be missing, etc. However, all baselines we compared are evaluated on these datasets. In the future, datasets in the wild or collected from natural scenes can be considered to verify the effectiveness of our algorithms. ## A.2 Baselines And Implementation DialogueGCN (Ghosal et al., 2019): it leverages self and inter-speaker dependency based on a graph convolutional network. Each node of the graph represents individual utterance features encoded by bi-LSTM, and the edges between a pair of nodes are constructed relying on the dependency between speakers within a sliding window. Due to only the text modality being used in DialogueGCN, we simply concatenated the features of three modalities for DialogueGCN to make it comparable to SCMM. DialogueRNN (Majumder et al., 2019): it employs four gated recurrent units(GRU), global GRU, party GRU, and emotion GRU to model the speaker, the context, and the emotion of the preceding utterances. Specifically, the global, party, and speaker GRU update the context, party state, and speaker state, respectively. The emotion GRU is used to model the emotionally relevant representations. 
DAG-ERC (Shen et al., 2021b): it models the conversation context through a directed acyclic graph with constraints on speaker identity and positional relations. Furthermore, DAG-ERC gathers contextual information for utterances in a single layer based on a directed acyclic graph neural network. COGMEN (Joshi et al., 2022): it leverages both local information in a dialogue based on GNN, and the GraphTransformers are used to fuse multiple modalities. However, instead of exploiting the in2https://librosa.org/doc/latest/index.html ![12_image_0.png](12_image_0.png) trinsic connections between features of different modalities, COGMEN simply concatenates them and does not enhance much in multimodal settings. MMGCN (Hu et al., 2021): it utilizes both multimodal and long-distance contextual information based on a graph convolutional network. In addition, MMGCN constructs graphs in each modality and builds edges between nodes corresponding to the same utterance across multiple modalities. Though good results were achieved on IEMOCAP and MELD, it still treats different modalities in nearly the same way, which somewhat reduces the performance on multimodal tasks. EmoCaps (Li et al., 2022): it designs a model named Emoformer based on Transformer for feature extraction. After feature extraction, the three modality features are concatenated. Finally, a model based on bi-LSTM layers is applied for emotion prediction. We used PyTorch to reimplement all these methods and SCMM. The BERT structure in the transformers (Wolf et al., 2020) library is adopted as the Transformer structure used in SCMM, and scipy (Virtanen et al., 2020) is used to calculate the F1-score value. Our architecture trained on the IEMOCAP dataset has 304 million parameters and takes around 3 minutes to train for 55 epochs on one 2080Ti GPU. We fixed the random seed for all experiments to ensure the reproducibility of our experiments. ## A.3 Visualization Of Contrastive Learning Features We adopted the t-SNE to visualize feature maps before and after adding the cross-modal contrastive learning loss. As shown in Figure 5, our contrastive learning loss widens the gap among different classes, leading to more discriminative feature representations. ![12_image_1.png](12_image_1.png) ## A.4 Error Analysis After analyzing the dataset, we found that the error predictions of our model mainly came from the error identification of similar emotions. As shown in Figure 6, where most of the error samples in happy are classified as excited and most of the error samples in frustration are classified as anger, etc. These problems also exist in DialogueRNN, COGMEN, and DAGERC. Even though our final results show some improvement compared to previous work, the model still cannot avoid such prediction bias. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 6. ✓ A2. Did you discuss any potential risks of your work? Appendix A ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1. ✗ A4. Have you used AI writing assistants when working on this paper? No AI writing assistants are used. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3. ✓ B1. Did you cite the creators of artifacts you used? section 4 and Appendix A. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Appendix A. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? 
For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix A. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Appendix A. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix A. ## C ✓ **Did You Run Computational Experiments?** Section 4. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix A. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix A. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
du-etal-2023-structure
Structure-Discourse Hierarchical Graph for Conditional Question Answering on Long Documents
https://aclanthology.org/2023.findings-acl.391
Conditional question answering on long documents aims to find probable answers and identify conditions that need to be satisfied to make the answers correct over long documents. Existing approaches solve this task by segmenting long documents into multiple sections, and attending information at global and local tokens to predict the answers and corresponding conditions. However, the natural structure of the document and discourse relations between sentences in each document section are ignored, which are crucial for condition retrieving across sections, as well as logical interaction over the question and conditions. To address this issue, this paper constructs a Structure-Discourse Hierarchical Graph (SDHG) and conducts bottom-up information propagation. Firstly we build the sentence-level discourse graphs for each section and encode the discourse relations by graph attention. Secondly, we construct a section-level structure graph based on natural structures, and conduct interactions over the question and contexts. Finally different levels of representations are integrated into jointly answer and condition decoding. The experiments on the benchmark ConditionalQA shows our approach gains over the prior state-of-the-art, by 3.0 EM score and 2.4 F1 score on answer measuring, as well as 2.2 EM score and 1.9 F1 score on jointly answer and condition measuring.
# Structure-Discourse Hierarchical Graph For Conditional Question Answering On Long Documents Haowei Du1,2,3, Yansong Feng1, Chen Li 5, Yang Li5**, Yunshi Lan**6 Dongyan Zhao**1,2,3,4,**∗ 1Wangxuan Institute of Computer Technology, Peking University 2Center for Data Science, Peking University 3Institute for Artificial Intelligence, Peking University 4 State Key Laboratory of Media Convergence Production Technology and Systems 5 Ant Group 6 East China Normal University [email protected], {fengyansong,zhaodongyan}@pku.edu.cn [email protected], [email protected] [email protected] ## Abstract Conditional question answering on long documents aims to find probable answers and identify conditions that need to be satisfied to make the answers correct over long documents. Existing approaches solve this task by segmenting long documents into multiple sections, and attending information at global and local tokens to predict the answers and corresponding conditions. However, the natural structure of the document and discourse relations between sentences in each document section are ignored, which are crucial for condition retrieving across sections, as well as logical interaction over the question and conditions. To address this issue, this paper constructs a Structure-Discourse Hierarchical Graph (SDHG) and conducts bottomup information propagation. Firstly we build the sentence-level discourse graphs for each section and encode the discourse relations by graph attention. Secondly, we construct a section-level structure graph based on natural structures, and conduct interactions over the question and contexts. Finally different levels of representations are integrated into jointly answer and condition decoding. The experiments on the benchmark ConditionalQA shows our approach gains over the prior state-of-the-art, by 3.0 EM score and 2.4 F1 score on answer measuring, as well as 2.2 EM score and 1.9 F1 score on jointly answer and condition measuring. Our code will be provided on https: //github.com/yanmenxue/ConditionalQA. ## 1 Introduction Conditional question answering (QA) aims to answer questions from contexts where conditions are used to distinguish answers as well as to provide additional information to support them (Saeidi et al., 2018; Gao et al., 2020; Ouyang et al., 2021a; Sun ∗Corresponding Author ![0_image_0.png](0_image_0.png) et al., 2022). Recently more interest of the community has been put on conditional QA over long documents like government policies which are close to reality scenes (Sun et al., 2022). These approaches based on transformer framework encode each document segment respectively, and proceed interaction between the question as well as different levels of contexts. The integrated token representations are used to predict the answer span and corresponding conditions. However, the natural document structure, i.e., section levels and discourse relations (Jia et al., 2018; Shi and Huang, 2019; Yu et al., 2022) between sentences within the document segment (section) are ignored, which are crucial for conditions retrieving across sections, as well as logical interaction over the question and conditions. We take an example from the ConditionalQA dataset in Figure 1. The document discusses the gender recognition certificate in UK and the question asks for the eligibility to apply. Section 1.1 and 1.2 are two child sections (subsections) of section 1, so they describe two parallel and relevant aspects about the contents in the parent section. 
We name the sections sharing the same parent section as sibling sections, like section 1.1 and 1.2. Section 1.1 and 1.2 elaborate two different routes to apply, each coupled with a group of conditions to satisfy. As long as the question satisfies one group of conditions, the answer will be "Yes". The structural relations among section 1, 1.1 and 1.2 enables the model to reason from "2 different ways to get a gender recognition certificate" to the "standard route" and "oversea route" in two subsections. Moreover, the discourse relation *condition* between "apply by the standard route if all *· · ·* " and "you have been diagnosed *· · ·* ", as well as "apply by the overseas route if *· · ·* " and "you must be 18 or over" helps the model locate the relevant conditions. The question applies to the second route, satisfying the condition "gender has been legally accepted" and the unsatisfied condition "you must be 18 or over" needs to be outputted with the answer. It shows natural document structure and discourse information enhance the ability to retrieve relevant conditions across different sections and logically reason for the answer. To capture the natural structure among sections and the discourse relations between sentences, we propose our structure-discourse hierarchical graph (SDHG). We design a hierarchical and heterogeneous graph, which includes a section-level structure graph and a set of sentence-level discourse graphs. In the structure graph, each node denotes a section in the document, where the parent-child and sibling relations between sections are used to build the edges. We utilize GAT (Velickovi ˇ c et al. ´ , 2017) to propagate information on the structure graph to encode the information that the child sections elaborate parallel and relevant aspects about the contents in their parent section. Each section has its corresponding sentence-level discourse graph, where each node denotes a sentence in this section or a discourse relation between 2 text spans. Similarly we apply GAT to incorporate the logical discourse relations among the sentences. We apply bottom-up encoding process in our hierarchical framework, where the sentence representations from pretrained language model (PLM) (Raffel et al., 2020; Lewis, 2022) pass through respective sentence-level discourse graph to introduce the discourse relations, and the integrated representations go through the section-level structure graph to enhance the document structural information. We conduct the experiments on the benchmark dataset ConditionalQA, and significantly outperforms the existing approaches by 3.0 EM score and 2.4 F1 score for answer evaluation, and 2.2 EM score and 1.9 F1 score for jointly answer-condition evaluation. 1. We are the first to incorporate natural document structure information and discourse relations between sentences to enhance the answer and condition retrieving across sections, as well as logical reasoning over the question and conditions for conditional QA on long documents. 2. Our approach outperforms existing methods on the benchmark dataset of this field, becoming the new state-of-the-art. ## 2 Related Work Conditional QA requires finding the probable answers and identifying their unsatisfied conditions (Sun et al., 2022). E3(Zhong and Zettlemoyer, 2019) extracts a set of decision rules from the context and reasons about the entailment. DISCERN (Gao et al., 2020) splits the document into elementary discourse units (EDU) (Schauer, 2000) and predicts whether each EDU is entailed. 
DGM (Ouyang et al., 2021b) constructs the explicit and implicit graphs of EDU to capture the interactions among contexts and questions with the support of tagged discourse relationship. However, these models ignore the natural structure of documents, and the EDU-based discourse graph undermines the informational continuity of sentences. Moreover, simply concatenating the question with full context into a single input and encoding it with a Transformer model with O(N2) complexity make it not scalable to longer contexts. ETC (Ainslie et al., 2020) introduces attention mechanism between global tokens and regular input tokens to scale input length and encode structured inputs. DocHopper (Sun et al., 2021) utilizes the structural information that paragraphs and sentences contain different levels of information, and perform evidence retrieval at both sentence and section levels. To efficiently aggregate and combine long documents information, FID (Izacard and Grave, 2021) concatenates the representations of different document sections produced by the encoder independently and performs fusion in the decoder only. To enhance interaction between different levels of text segments, CGSN (Nie et al., 2022) propagates information on the global and local graph composed of nodes for tokens, sentences as well as document sections. However, these models ignore the hierarchical structure of the document and discourse relations between sentences within each document section, which brings difficulty to condition locating across sections and logical reasoning for answers. HIBRIDS (Cao and Wang, 2022) injects learned biases in attention weights calculation to incorporate hierarchical document structure and produces better summaries for long documents. It shows the importance of hierarchical document structure for long document understanding. However, we highlight the section-level structural relations such as parent-child and sibling, instead of token-level path lengths and level differences on the document structure graph. ## 3 Preliminary We study the task of conditional QA over long documents (LDCQA), where the answers are only applicable when certain conditions apply. The model learns to find answers to the question from the long context and additionally performs logical reasoning over the conditions to check whether the answers are eligible. If the answers require additional conditions to be satisfied, the model identifies these unsatisfied conditions as well. Formally, the input to the model includes a question q = [q1; q2; *· · ·* ; qm] coupled with a document d = [d1; d2; *· · ·* ; dn], where m and n denotes the length of the question and context. In our LDCQA setting, the length n can be larger than 10K. The model outputs a list of answers coupled with corresponding conditions {(a1, {c (1) 1; *· · ·* ; c (1) k1}); *· · ·* ; (ai, {c (i) 1 ; *· · ·* ; c (i) ki}); · · · ; (aL, {c (L) 1; *· · ·* ; c (L) kL})}, where L ≥ 0 denotes the number of answers and ki ≥ 0 denotes the number of conditions for i-th answer. ## 4 Methodology As shown in Figure 2, our approach includes 4 modules: PLM based contextual encoder, sentencelevel discourse graph encoder, section-level structure graph encoder, fusion and decoding. First we encode each document section respectively to obtain contextual representations. Then we proceed sentence interaction using parsed discourse relations for each section. Then we conduct information propagation on the structure graph. 
Finally, we integrate 3 levels of section representations with token representations to jointly generate the answers and conditions.

![2_image_0.png](2_image_0.png)

![2_image_1.png](2_image_1.png)

Figure 3: Discourse graph of section 1.1 in Figure 1. There are 3 types of nodes: sentence node, relation node and question node.

## 4.1 Pre-processing

**Document Segmentation** We segment the document into different levels of sections by the heading tags in the document. Each pair of headings, such as "<h1>" and "</h1>", embraces the section title and is followed by a continuous chunk of context until the next pair of headings. We concatenate the title and the context chunk as the contents of one section. The hierarchy of headings is applied to build the document structure graph. Specifically, we add edges between parent sections and their child sections, as well as between sibling sections.

**Discourse Parsing** Considering that the ground-truth discourse tree is not provided, we utilize a pretrained discourse parser (Yu et al., 2022) for each section to decide the dependencies between sentences and the corresponding relation types. The parser performs discourse parsing based on rhetorical structure theory (RST) (Mann and Thompson, 1988; Taboada and Mann, 2006) and utilizes 18 simplified coarse-grained relations such as *elaboration*, *circumstance*, *condition*, etc. (Carlson and Marcu, 2001; Zhang et al., 2021; Yu et al., 2022). The discourse tree contains 2 types of nodes: relation nodes and leaf nodes. In our setting, a leaf node denotes a sentence in the section and a relation node identifies the relation type between two continuous text spans. We add a global node connected with every sentence node, so the discourse tree is converted into a discourse graph for each document section. The discourse graph of section 1.1 in Figure 1 is shown in Figure 3, which includes relation types such as *elaboration*, *condition*, and *joint*.

## 4.2 Contextual Encoder

Our generative model for conditional QA is based on a sequence-to-sequence pretrained language model such as T5 or BART. The model takes the concatenation of the question and context as the input and derives the contextual representations. Specifically, each document section is concatenated with the question and processed independently from other sections by the encoder. We add special tokens "[QUE]" and "[CON]" before the question and context, as well as "[SEP]" to separate each sentence in the document section. Because we encode one section at a time, our approach is scalable to long documents with many sections.

## 4.3 Sentence-Level Discourse Graph Encoder

For each document section, we build a discourse relation graph to incorporate discourse relational information between sentences within the section. We utilize the RST discourse parser to derive the discourse graph and add a global node to represent the full section. We add edges between the global node and the leaf nodes, i.e., the sentence nodes, to increase the information flow among sentences. For a section composed of n sentences, we initialize the representation of the global node as the hidden state $h_0$ corresponding to "[CON]" from the contextual encoder, and the representation of the t-th sentence node as the hidden state $h_t$ corresponding to the t-th "[SEP]" token, $1 \le t \le n$.
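A rough sketch of this contextual encoding and node initialization is given below, assuming a T5 encoder from the transformers library; the exact placement of the "[QUE]", "[CON]" and "[SEP]" markers and the truncation handling are our own assumptions, since the paper only specifies that these markers are added and that sections are encoded independently.

```python
import torch
from transformers import T5Tokenizer, T5EncoderModel

SPECIALS = ["[QUE]", "[CON]", "[SEP]"]
tokenizer = T5Tokenizer.from_pretrained("t5-base")
tokenizer.add_tokens(SPECIALS)                      # register marker tokens
encoder = T5EncoderModel.from_pretrained("t5-base")
encoder.resize_token_embeddings(len(tokenizer))

def encode_section(question, sentences):
    # One section at a time: "[QUE] question [CON] [SEP] s1 [SEP] s2 ...",
    # so the approach scales to documents with many sections.
    text = "[QUE] " + question + " [CON] " + " ".join("[SEP] " + s for s in sentences)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state[0]        # (seq_len, d)
    ids = inputs["input_ids"][0]
    h0 = hidden[ids == tokenizer.convert_tokens_to_ids("[CON]")][0]   # global node init
    h_sent = hidden[ids == tokenizer.convert_tokens_to_ids("[SEP]")]  # one row per sentence
    return h0, h_sent   # initial states for the discourse-graph nodes
```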
We apply GAT to do information propagation and derive the discourse relation enhanced representations:

$$a_{ij}=\mathrm{MLP}([h_{i}:h_{j}]),\qquad(1)$$
$$\alpha_{ij}=\frac{\exp(\mathrm{LeakyReLU}(a_{ij}))}{\sum_{j'\in N(i)}\exp(\mathrm{LeakyReLU}(a_{ij'}))},\qquad(2)$$
$$\hat{h}_{i}=\sigma\Big(\sum_{j\in N(i)}\alpha_{ij}Wh_{j}\Big),\qquad(3)$$

where N(i) denotes the neighbour nodes of node i, $1 \le i \le n$, and σ denotes the activation function. We take the final representation of the global node as the section representation that incorporates the discourse relational information.

## 4.4 Section-Level Document Graph Encoder

We construct a node for each document section and add the question node in the structure graph. Section nodes are connected with their parent section and child sections. These parent-child edges encode the information that the child section depicts a specific aspect about the parent section. Section nodes at the same level which share the same parent section are connected, and these sibling edges incorporate the information that these sections elaborate parallel and relevant aspects of the parent. Additionally, we connect the question node with each section node to enhance the information flow between the question and contexts. We initialize the question representation q as the hidden state corresponding to the "[QUE]" token obtained in the contextual encoder module. The initialization of section nodes comes from the representations of global nodes in the corresponding discourse graphs. Similarly, we perform information propagation on the structure graph with a GAT network and obtain the structure-aware section representations: $[q'; h'_{1}; h'_{2}; \cdots; h'_{N}] = \mathrm{GAT}([q; \hat{h}_{1}; \hat{h}_{2}; \cdots; \hat{h}_{N}])$, where N denotes the number of sections in the document.

## 4.5 Fusion And Decoding

Considering that $\{h_{i}\}$, $\{\hat{h}_{i}\}$ and $\{h'_{i}\}$, $1 \le i \le N$, respectively contain the contextual information, discourse relational information and document structure information, we concatenate the token representations of the question and contexts, as well as the 3 levels of section representations, sequentially as follows: $[t^{q}_{1}; t^{q}_{2}; \cdots; t^{q}_{Q}; t^{c}_{1}; t^{c}_{2}; \cdots; t^{c}_{C}; h_{1}; h_{2}; \cdots; h_{N}; \hat{h}_{1}; \hat{h}_{2}; \cdots; \hat{h}_{N}; h'_{1}; h'_{2}; \cdots; h'_{N}]$, where Q, C, N denote the number of question tokens, context tokens and document sections. Then we pass them into the PLM decoder to generate the sequence shaped as "$\cdots$ [ANS] $a_{i}$ [CON] $c^{(i)}_{1}$ $\cdots$ [CON] $c^{(i)}_{k_i}$ $\cdots$", where $a_{i}$ and $c^{(i)}_{j}$ denote the i-th answer and the j-th condition of the i-th answer, and "[ANS]" and "[CON]" are special tokens added into the PLM tokenizer. Our model is optimized by the cross-entropy loss between the predicted sequence and the ground truth:

$$L=-\log p(r|q,C)\qquad(4)$$
$$=-\sum_{i=1}^{L}\log p(r_{i}|q,C,r_{<i}),\qquad(5)$$

where $r=\{(a_{i},c_{i})\}_{i=1}^{L}$, $a_{i}$ and $c_{i}$ denote the i-th answer and condition.

|  | train | dev | test | all |
|-----------|-------|--------|-------|------|
| documents | 436 | 59 | 139 | 652 |
| questions | 2338 | 285 | 804 | 3427 |
| length | 2050 | 2142 | 2324 | 2093 |

| Type | | Number |
|-------------------------|---------------|------|
| Answer type | yes / no | 1751 |
| | extractive | 1527 |
| Condition type | deterministic | 2475 |
| | conditional | 803 |
| Answer number | single | 2526 |
| | multiple | 752 |
| | not answerable | 149 |

## 5 Experiments

## 5.1 Dataset

ConditionalQA dataset is a challenging benchmark on conditional QA over long documents (Sun et al., 2022).
There are 3427 questions in ConditionalQA and the average length of documents is larger than 2K by Table 1. Table 2 shows it contains different types of questions such as yes/no questions, freeform extractive questions, questions with multiple answers and not-answerable questions. Many questions in ConditionalQA are deterministic where the conditions needed have been satisfied in the question. It poses difficulties for the model to locate the conditions needed to answer the question and check the satisfaction of these conditions. ## 5.2 Evaluation Metrics The predictions are evaluated using two sets of metrics: EM/F1 and conditional EM/F1. EM/F1 are the traditional metrics that measure the predicted answer spans. The ConditionalQA dataset introduced another metric, conditional EM/F1, that jointly measures the accuracy of the answer span and the unsatisfied conditions. As defined in the original paper (Sun et al., 2022), the conditional EM/F1 is the product of the original answer EM/F1 and the EM/F1 of the predicted unsatisfied conditions. The conditional EM/F1 is 1.0 if and only if the predicted answer span is correct and all unsatisfied conditions are found. If there is no unsatisfied condition, the model should predict an empty set. ## 5.3 Baselines We compare our approach with 3 strong baselines on LDCQA. ETC (Ainslie et al., 2020) applies global-local attention mechanism between global and local tokens, and enables the model scale to long inputs. However, the fully connected topology of token graphs cannot capture the natural structure of the document. DocHopper (Sun et al., 2021) highlights the structural information that a passage contains consecutive and relevant information, and retrieves information by jointly sentence and passage level. However, the natural structural information between passages is ignored, FID (Izacard and Grave, 2021) independently encodes different passages and concatenates the representations in the decoder only, which decreases calculation cost and improves performance for QA on long documents. However, the natural structure of documents and discourse information in each section are neglected. ## 5.4 Experimental Details Following FID (Izacard and Grave, 2021), we utilize pretrained model T5-base as our backbone. The information propagation step for discourse graphs and the document structure graph are set to 2. We optimize all models with Adam optimizer, where the initial learning rate is set to 1e-4 and the dropout rate is set to 0.1. The nuclearity of discourse relations distinguishes the different logical roles of two spans (Carlson and Marcu, 2001), so we add the nuclearity label produced by our discourse parser to each relation node in the discourse graphs. 
We focus more on formal texts on websites such as news, policies and articles (Huang | Yes/No | Extractive | Conditional | Overall | | | | | | |-----------|--------------|---------------|-------------|-------------|-------------|-----------|-------------|-------------| | EM / F1 | w/ conds | EM / F1 | w/ conds | EM / F1 | w/ conds | EM / F1 | w/ conds | | | majority | 62.2 / 62.2 | 42.8 / 42.8 | - / - | - / - | - / - | - / - | - / - | - / - | | ETC | 63.1 / 63.1 | 47.5 / 47.5 | 8.9 / 17.3 | 6.9 / 14.6 | 39.4 / 41.8 | 2.5 / 3.4 | 35.6 / 39.8 | 26.9 / 30.8 | | DocHopper | 64.9 / 64.9 | 49.1 / 49.1 | 17.8 / 26.7 | 15.5 / 23.6 | 42.0 / 46.4 | 3.1 / 3.8 | 40.6 / 45.2 | 31.9 / 36.0 | | FID | 64.2 / 64.2 | 48.0 / 48.0 | 25.2 / 37.8 | 22.5 / 33.4 | 45.2 / 49.7 | 4.7 / 5.8 | 44.4 / 50.8 | 35.0 / 40.6 | | SDHG | 67.4 / 67.4 | 50.2 / 50.2 | 29.2 / 42.0 | 25.4 / 37.0 | 48.3 / 52.3 | 5.9 / 7.6 | 47.4 / 53.2 | 37.2 / 42.5 | ## 5.5 Results The results of different approaches are presented in Table 3. Our approach outperforms all the existing methods on ConditionalQA, achieving the new state-of-the-art. It is efficient to introduce natural document structure and discourse relations into conditional QA on long documents. We outperform the strong baseline FID by 3.0 EM score and 2.4 F1 score in answer measuring, 2.2 EM score and 1.9 F1 score in joint answer-condition measuring. On different types of questions, such as yes/no questions and free-form extractive questions, our model outperforms FID by over 3.2 EM and F1 score in answer measuring, as well as over 2.2 EM and F1 score in jointly answer and condition measuring. It demonstrates the robust improvement of our structure and discourse aware framework in different types of questions on both answer and condition measuring. ## 6 Analysis In this part, we do 3 ablation studies to evaluate the efficiency of 3 levels of section representations in section 4.5. Then we probe our performance on long and complex documents. Moreover, we explore the role of accurate document structures and discourse relations in document sections. Finally, we take an example from ConditionalQA dataset to show the efficiency of our structure and discourse aware hierarchical framework. ## 6.1 Ablation Study In our fusion module, we concatenate three levels of section representations: original contextual | Overall | | | |-------------|-------------|-------------| | EM / F1 | w/ conds | | | -contextual | 48.2 / 55.6 | 37.9 / 45.3 | | -discourse | 45.0 / 53.7 | 36.4 / 44.5 | | -structure | 47.8 / 53.6 | 40.2 / 45.6 | | SDHG | 47.9 / 56.6 | 38.3 / 46.6 | Table 4: Ablation Results on development set of ConditionalQA by overall EM and F1 metrics for answer and condition prediction. representations, discourse-aware representations, and document structure-aware representations with the token representations. In this part, we conduct 3 ablation experiments on the development set of ConditionalQA to evaluate their respective efficiency. Do contextual representations of sections matter? To evaluate the efficiency of contextual representations in SDHG, we remove the list of section representations {hi, 1 ≤ i ≤ N} from section 4.5. By Table 4, the performance of this ablation will gain 0.3 EM score and drop 1.0 F1 score on measuring answers, meanwhile drop 0.4 EM score and 1.3 F1 score on jointly measuring answers and conditions. 
It shows the original representations of document sections from PLM contain contextual section-level information which are important to our model for conditional QA on long documents. ## Do Discourse Relations Between Sentences In Each section matter? In this ablation, we remove the discourse graph for each document section from our hierarchical framework. Specifically, we take the original contextual representations of sections from PLM to initial the normal node in document structure graph, and concatenate the contextual and document structural representations to the decoder. As shown in Table 4, this ablation drops 2.9 EM score and 2.9 F1 score on answer measuring, as well as 1.9 EM score and 2.1 F1 score on jointly | Conditional w/ conds | | | |------------------------|-----------|-----------| | Group | 1 | 2 | | Avg. len. | 1182 | 3094 | | FID | 6.1 / 8.4 | 0.7 / 5.2 | | SDHG | 6.5 / 7.6 | 5.2 / 7.1 | | Conditional w/ conds | | | |------------------------|-----------|-----------| | Group | 1 | 2 | | Avg. # sect. | 12 | 30 | | FID | 6.4 / 8.7 | 0.7 / 5.0 | | SDHG | 6.7 / 7.8 | 5.1 / 6.7 | Table 6: Performance on 2 groups of cases in ConditionalQA development set classified by section number , the row "Avg. \# sect." denotes the average number of document sections for different case groups. answer and condition measuring. It demonstrates the discourse relations information between sentences enhance the logical interaction between the question and relevant conditions for our model. ## Does Document Structural Information Matter? In this ablation, we remove the document structure graph from our model to probe the efficiency of natural structural information. Concretely, we only concatenate the contextual representations and discourse-aware representations with token representations to the decoder. As shown in Table 4, this ablation drops 0.1 EM score and 3.0 F1 score on answer measuring, as well as gains 1.9 EM score and drops 1.0 F1 score on jointly answer and condition measuring. The natural structural information that sibling sections elaborate parallel and relevant aspects of the parent section helps our model locate relevant conditions across different document sections, thus improving the prediction of answers and unsatisfied conditions. ## 6.2 Capacity For Long And Complex Documents In this part, we classify the cases of ConditionalQA development set into 2 groups respectively based on the quantile of 3 metrics: the length of the document, the number of document sections, the number of sentences in the document. The larger number of document sections reflects the larger size and complexity of document structure graph, while the larger number of sentences in the document embodies the larger size and complexity of discourse graphs. We focus on the cases with unsatisfied conditions, so we choose to evaluate by conditional jointly answer and condition measuring. As shown in Table 6, our model gains 0.3 EM score on group 1 cases and gains 4.4 EM score on group 2 cases compared with baseline FID. It shows with more document sections, the structure graph contains more structural information between document sections, which enhances our capacity to retrieve the answers and conditions across sections. As shown in Table 7, our gains 0.3 EM score on group 1 cases and gains 4.2 EM score on group 2 cases compared with FID. 
With larger size of discourse graph, the more abundant discourse relations between sentences stimulate the logical interaction between the question and conditions, which helps our model understand the context and predict the unsatisfied conditions. As shown in Table 5, our model gains 0.4 EM score on group 1 and gains 4.5 EM score on group 2. With longer documents, our model incorporates richer information from the document structure and discourse relations into conditional QA. It demonstrates the capacity of our model for long and complex documents. ## 6.3 Role Of Accurate Structure Graph And Discourse Relations In this part, we explore if our model architecture can truly distinguish the information of document natural structure and discourse relations in each section. In exploration 1, we flatten the hierarchical document structure and consider all the sections at the same level. In this way, the structure graph is fully connected and all the nodes propagate information with each other. In exploration 2, we disrupt the discourse relations between sentences in each document section. Considering "*elaboration*" is the most discourse relation in the dataset, we disturb the discourse graph by assigning all relation nodes to be "*elaboration*". In this way, the model treats all the sentences as progressive elaborations and ignores the original logical relations between sentences. As shown in Table 8, with flattened document structure, the structural information that child sections describe parallel and relevant aspects of the parent section is lost. As a result, this exploration drops 2.6 EM score and 4.1 F1 score on answer measuring, as well as 0.1 EM score and 1.8 ![7_image_1.png](7_image_1.png) | Overall | | | |-------------------|-------------|-------------| | EM / F1 | w/ conds | | | flatten structure | 45.3 / 52.5 | 38.2 / 44.8 | | all elaboration | 46.4 / 53.9 | 38.3 / 45.3 | | SDHG | 47.9 / 56.6 | 38.3 / 46.6 | Table 8: Two explorations for our perceptual ability to document structure and discourse relations. F1 score on jointly answer and condition measuring. Furthermore, compared with ablation model 3, which abandons the whole document structure information, this exploration drops 1.4 EM score and 1.1 F1 score on answer measuring, as well as 2.0 EM score and 0.8 F1 score on jointly answer and condition measuring. It demonstrates the fully connected structure graph (Nie et al., 2022) connect many irrelevant document sections, which introduces noisy information chaos into the model and undermines the overall performance. As shown in Table 8, with all the discourse relations between sentences set to "*elaboration*", the logical information of other discourse relations such as "*condition*" and "*joint*" are abandoned, this exploration drops 1.5 EM score and 2.7 F1 score on answer measuring, as well as 1.3 F1 score on jointly answer and condition measuring. The comprehensive discourse relations contain abundant logical information between sentences, which improves the condition locating and reasoning for our model. Moreover, compared with ablation model 2, which removes all the discourse information, this exploration gains 1.4 EM score and 0.2 F1 score on answer measuring, as well as 1.9 EM score and 0.8 F1 score in jointly answer and condition measuring. Because "elaboration" accounts for the largest proportion in discourse relations, the discourse graph encoder helps this exploration better understand progressive sentences, improving the prediction for answers and conditions. 
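For concreteness, the two explorations can be pictured as simple graph edits like the following sketch; the edge-list representation is an assumption made purely for illustration.

```python
import itertools

def flatten_structure(num_sections):
    """Exploration 1: ignore the section hierarchy.

    Returns a fully connected edge list over section nodes, so every
    section propagates information to every other section.
    """
    return [(i, j) for i, j in itertools.permutations(range(num_sections), 2)]

def all_elaboration(discourse_edges):
    """Exploration 2: discard the parsed relation labels.

    Every (head, dependent, relation) edge keeps its endpoints but is
    relabelled as 'elaboration', the majority relation in the dataset.
    """
    return [(h, d, "elaboration") for h, d, _ in discourse_edges]

# Example perturbation of a tiny parsed section.
edges = [(0, 1, "elaboration"), (1, 2, "condition"), (2, 3, "joint")]
print(flatten_structure(4))
print(all_elaboration(edges))
```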
It demonstrates that our model has the ability to capture correct discourse ![7_image_0.png](7_image_0.png) Figure 4: An example from ConditionalQA dataset, where we obtain the correct answers and conditions but the baseline FID fails. relational information into answer and condition prediction. Considering the pretrained discourse parser we used does not provide the golden parsing result, our model shows promising better performance with more efficient parsing techniques. ## 6.4 Case Study In this part, we take an example from ConditionalQA dataset to show the efficiency to incorporate natural document structure and discourse relations between sentences. As shown in Figure 4, the document discusses the private renting in UK and the question asks for the approach to do the repairs. Section 2.1 and 2.2 are two child sections of section 2, and they describe two different but relevant ways to ask for repairs mentioned in section 2. The structural relations among section 2, 2.1, 2.2 allow the model to reason from "heating and hot water" in section 2 to the two routes to ask for repairs in section 2.1 and 2.2, retrieving different answers and corresponding conditions across sections. Moreover, the discourse relation "*condition*" between "your property needs repairs" and "contact your landlord", as well as "repairs are not done" and "contact the environmental health department", enable our model to locate different conditions corresponding to each answer. Because the question satisfies the condition "property needs repairs", the answer "contact your landlord" has no unsatisfied conditions, but the answer "contact the environmental health department" has to be outputted with its corresponding condition. However, without document structure information, the baseline FID only retrieves section 2.2, ignoring the parallel section 2.1; without discourse relations, FID neglects the condition corresponding to the answer "contact the environmental health department". Therefore, the above demonstrates the efficiency of our structurediscourse hierarchical graph reasoning framework. ## 7 Conclusion In this paper, we propose a novel and efficient framework with hierarchical section-level structure graph and sentence-level discourse graph for conditional QA on long documents. We incorporate the natural document structure and logical discourse relations to locate answers as well as unsatisfied conditions by cross-sections retrieving and logical reasoning. We conduct experiments on the benchmark dataset in this field and our approach outperforms all the existing methods. ## Limitations We showed that our model is efficient in handling conditional QA on long documents with hierarchical reasoning framework. However, our discourse graphs for each document section are constructed based on the prediction of the pretrained discourse parser. There is promising improvement for our approach by use of more efficient discourse parsers. ## References Joshua Ainslie, Santiago Ontanon, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, and Li Yang. 2020. Etc: Encoding long and structured inputs in transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 268–284. Shuyang Cao and Lu Wang. 2022. Hibrids: Attention with hierarchical biases for structure-aware long document summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 786–807. 
Lynn Carlson and Daniel Marcu. 2001. Discourse tagging reference manual. *ISI Technical Report ISI-TR545*, 54(2001):56. Sangwoo Cho, Kaiqiang Song, Xiaoyang Wang, Fei Liu, and Dong Yu. 2022. Toward unifying text segmentation and long document summarization. *arXiv* preprint arXiv:2210.16422. Yifan Gao, Chien-Sheng Wu, Jingjing Li, Shafiq Joty, Steven CH Hoi, Caiming Xiong, Irwin King, and Michael Lyu. 2020. Discern: Discourse-aware entailment reasoning network for conversational machine reading. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2439–2449. Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. 2021. Efficient attentions for long document summarization. arXiv preprint arXiv:2104.02112. Gautier Izacard and Édouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In *Proceedings of the 16th* Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880. Yanyan Jia, Yuan Ye, Yansong Feng, Yuxuan Lai, Rui Yan, and Dongyan Zhao. 2018. Modeling discourse cohesion for discourse parsing via memory network. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 438–443. Armanda Lewis. 2022. Multimodal large language models for inclusive collaboration learning tasks. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop, pages 202–210, Hybrid: Seattle, Washington + Online. Association for Computational Linguistics. William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. *Text-interdisciplinary Journal for the Study of Discourse*, 8(3):243–281. Yuxiang Nie, Heyan Huang, Wei Wei, and Xian-Ling Mao. 2022. Capturing global structural information in long document question answering with compressive graph selector network. arXiv preprint arXiv:2210.05499. Siru Ouyang, Zhuosheng Zhang, and Hai Zhao. 2021a. Dialogue graph modeling for conversational machine reading. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 3158–3169, Online. Association for Computational Linguistics. Siru Ouyang, Zhuosheng Zhang, and Hai Zhao. 2021b. Dialogue graph modeling for conversational machine reading. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 3158–3169. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Singh, Tim Rocktäschel, Mike Sheldon, Guillaume Bouchard, and Sebastian Riedel. 2018. Interpretation of natural language rules in conversational machine reading. *arXiv preprint arXiv:1809.01494*. Holger Schauer. 2000. From elementary discourse units to complex ones. In *1st SIGdial Workshop on Discourse and Dialogue*, pages 46–55. Zhouxing Shi and Minlie Huang. 2019. A deep sequential model for discourse parsing on multi-party dialogues. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7007–7014. Haitian Sun, William Cohen, and Ruslan Salakhutdinov. 2022. 
Conditionalqa: A complex reading comprehension dataset with conditional answers. In *Proceedings of the 60th Annual Meeting of the Association for* Computational Linguistics (Volume 1: Long Papers), pages 3627–3637. Haitian Sun, William W Cohen, and Ruslan Salakhutdinov. 2021. End-to-end multihop retrieval for compositional question answering over long documents. Maite Taboada and William C Mann. 2006. Applications of rhetorical structure theory. *Discourse studies*, 8(4):567–588. Petar Velickovi ˇ c, Guillem Cucurull, Arantxa Casanova, ´ Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. *arXiv preprint* arXiv:1710.10903. Nan Yu, Meishan Zhang, Guohong Fu, and Min Zhang. 2022. Rst discourse parsing with second-stage edulevel pre-training. In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4269–4280. Longyin Zhang, Fang Kong, and Guodong Zhou. 2021. Adversarial learning for discourse rhetorical structure parsing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3946–3957. Victor Zhong and Luke Zettlemoyer. 2019. E3: Entailment-driven extracting and editing for conversational machine reading. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2310–2320. ## A Appendix A.1 Reproducibility Checklist Did you discuss any potential risks of your work? The methods in this work do not pose any ethical or security related risks. Did you discuss the license or terms for use and/or distribution of any artifacts? ConditionalQA is distributed under a CC BY-SA 4.0 License (https://creativecommons.org/ licenses/by-sa/4.0/). Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? We use conditionalQA following the instructions of its creator (https: //haitian-sun.github.io/conditionalqa/). Did you discuss the steps taken to check whether the data that was collected/used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? By the creator of the dataset, this dataset does not include any privacy information. Did you provide documentation of the artifacts? The document of the dataset can be found in https: //haitian-sun.github.io/conditionalqa/. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? There are about 252 million parameters in our model. We run experiments on one Tesla v100 gpu and the training time is about 5 hours. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? We use F1 for jointly answer and condition measuring on the development set to choose the hyperparameter. The specific values are in section 5.4 Experimental Details. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? We the existing packages Pytorch and NLTK to implement our model. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? In section Limitations. ✓ A2. Did you discuss any potential risks of your work? 
In the subsection Reproducibility Checklist of section Appendix. ✓ A3. Do the abstract and introduction summarize the paper's main claims? In section Abstract and Introduction. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** In Section 5.1 Dataset. ✓ B1. Did you cite the creators of artifacts you used? In section 5.1 Dataset. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? In the subsection Reproducibility Checklist of section Appendix. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? In the subsection Reproducibility Checklist of section Appendix. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? In the subsection Reproducibility Checklist of section Appendix. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? In the subsection Reproducibility Checklist of section Appendix. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. We report the relevant statistics in section 5.1 Dataset. ## C ✓ **Did You Run Computational Experiments?** In Section 5 Experiments. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? In the subsection Reproducibility Checklist of section Appendix. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? In the subsection Reproducibility Checklist of section Appendix. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? In section 5 Experiments. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? In the subsection Reproducibility Checklist of section Appendix. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. 
Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
zhou-etal-2023-cobra
COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements
https://aclanthology.org/2023.findings-acl.392
Warning: This paper contains content that may be offensive or upsetting. Understanding the harms and offensiveness of statements requires reasoning about the social and situational context in which statements are made. For example, the utterance "your English is very good" may implicitly signal an insult when uttered by a white man to a non-white colleague, but uttered by an ESL teacher to their student would be interpreted as a genuine compliment. Such contextual factors have been largely ignored by previous approaches to toxic language detection. We introduce COBRA frames, the first context-aware formalism for explaining the intents, reactions, and harms of offensive or biased statements grounded in their social and situational context. We create COBRACORPUS, a dataset of 33k potentially offensive statements paired with machine-generated contexts and free-text explanations of offensiveness, implied biases, speaker intents, and listener reactions. To study the contextual dynamics of offensiveness, we train models to generate COBRA explanations, with and without access to the context. We find that explanations by context-agnostic models are significantly worse than by context-aware ones, especially in situations where the context inverts the statement's offensiveness (29% accuracy drop). Our work highlights the importance and feasibility of contextualized NLP by modeling social factors.
# Cobra **Frames:** Contextual Reasoning About Effects And Harms Of Offensive Statements Xuhui Zhou♡ Hao Zhu♡ Akhila Yerukola♡ **Thomas Davidson**♠ Jena D. Hwang♣ Swabha Swayamdipta♢ **Maarten Sap**♡♣ ♡Language Technologies Institute, Carnegie Mellon University ♠Department of Sociology, Rutgers University ♢Thomas Lord Department of Computer Science, University of Southern California ♣Allen Institute for AI \# [email protected] cobra.xuhuiz.com ## Abstract Warning: *This paper contains content that may* be offensive or upsetting. Understanding the harms and offensiveness of statements requires reasoning about the social and situational context in which statements are made. For example, the utterance "*your English is very good*" may implicitly signal an insult when uttered by a white man to a nonwhite colleague, but uttered by an ESL teacher to their student would be interpreted as a genuine compliment. Such contextual factors have been largely ignored by previous approaches to toxic language detection. We introduce COBRA frames, the first context-aware formalism for explaining the intents, reactions, and harms of offensive or biased statements grounded in their social and situational context. We create COBRACORPUS, a dataset of 33k potentially offensive statements paired with machine-generated contexts and free-text explanations of offensiveness, implied biases, speaker intents, and listener reactions. To study the contextual dynamics of offensiveness, we train models to generate COBRA explanations, with and without access to the context. We find that explanations by context-agnostic models are significantly worse than by contextaware ones, especially in situations where the context inverts the statement's offensiveness (29% accuracy drop). Our work highlights the importance and feasibility of contextualized NLP by modeling social factors. ## 1 Introduction Humans judge the offensiveness and harms of a statement by reasoning about its pragmatic implications with respect to the social and interactional context (Cowan and Hodge, 1996; Cowan and Mettrick, 2002; Nieto and Boyer, 2006; Khurana et al., 2022). For example, when someone says "*I'm impressed that your English is so good!*", while they ![0_image_0.png](0_image_0.png) Figure 1: Pragmatic reasoning about the offensiveness and harms of statements requires taking interactional context into account. We introduce COBRA , a formalism to distill seven types of pragmatic implications of possibly offensive statements grounded in the situational and social context. As illustrated here, COBRA enables counterfactual reasoning about contexts that invert the statements' offensiveness. likely intended "*to give a compliment*", the implications and effects could drastically vary depending on the context. A white person saying this to a non-white person is considered a microaggression (Kohli et al., 2018), because it implies that "*nonwhite people are not native English speakers*" (Figure 1). 1 Unfortunately, most NLP work has simplified toxic language understanding into a classification problem (e.g., Davidson et al., 2017; Founta et al., 2018; Jiang et al., 2021), ignoring context 1While social biases and power dynamics are culturally dependent (Fiske, 2017), in this work, we operate from the U.S.-centric sociocultural perspective. 6294 | Statement | Context | Example | Human Verif. 
| | | | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------|-------------------------------------------|-------------------------------|----------------------------------|-----|----| | Situation | Speaker | Listener | (Dimension) Explanation | Rating Suggest. | | | | All Asian moms are | Asian culture | White | Professor | (Int.) make a joke | 1/3 | i | | slutty | class | student | | | | | | Black people make good | History | Teacher | Student | (Tgr.) black people | 3/3 | | | laborers | class | | | | | | | What is this going to | Debate about | White | Black | (Pow.) racial power dynamic | 3/3 | | | solve? | defunding police | person | person | | | | | America is a melting pot hence why Chinese are here | History | Teacher | Students | (Imp.) implies that Chinese | | | | class | immigrants are only here because of a "melting pot" of cultures 2/3 | ii | | | | | | There are a lot of | Online conversation | | | | | | | radfems in the USA | in a forum | | | | | | | about feminism | Radical | Other | (Imp.) implies that those who | | | | | feminists | people | do not support radical feminism are wrong | 0/3 | iii | | | | You are a waste of | Argument | Man | Woman | (Emo.) hurt, invalidated, angry, | 3/3 | | | oxygen | or ashamed | | | | | | | All Asian people are | High school | Teacher | Asian | (Cog.) feeling like they have | 3/3 | | | great at maths | students | to work harder | | | | | | Suggested corrections: i insult (to) Asian women; iiimplies that Chinese immigrants move to the US only because of multi-culture; iiiUS has many radical feminism supporters | | | | | | | Table 1: Examples of statements with GPT-3.5-generated contexts and explanations along different dimensions (see §2), as well as human verification ratings and suggestions. The rating indicates how many annotators (out of three) think the explanation is likely; if deemed unlikely, annotators could provide suggested corrections. and the different pragmatic implications, which has resulted in non-explainable methods that can backfire by discriminating against minority populations (Sap et al., 2019b; Davidson et al., 2019). We introduce COBRA **Frames**, 2a formalism to capture and explain the nuanced contextdependent pragmatic implications of offensive language, inspired by frame semantics (Fillmore, 1976) and the recently introduced Social Bias Frames (Sap et al., 2020). As shown in Figure 1, a COBRA frame considers a *statement*, along with its free-text descriptions of *context* (social roles, situational context; Figure 1; left). Given the context and statement, COBRA distills free-text explanations of the implications of offensiveness along seven different dimensions (Figure 1) inspired by theories from social science and pragmatics of language (e.g., speaker intent, targeted group, reactions; Grice, 1975; Nieto and Boyer, 2006; Dynel, 2015; Goodman and Frank, 2016). Our formalism and its free-text representations have several advantages over previous approaches to detecting offensiveness language. 
First, our free-text descriptions allow for rich representations of the relevant aspects of context (e.g., situational roles, social power dynamics, etc.), in contrast to modeling specific contextual features alone (e.g., user network features, race or dialect, conversational history; Ribeiro et al., 2017; Sap et al., 2019b; Zhou et al., 2021; Vidgen et al., 2021a; Zhou et al., 2022). Second, dimensions with freetext representations can capture rich types of social knowledge (social commonsense, social norms; Sap et al., 2019a; Forbes et al., 2020), beyond what purely symbolic formalisms alone can (Choi, 2022). Finally, as content moderators have called for more explanation-focused AI solutions (Gillespie et al., 2020; Bunde, 2021), our free-text explanations offer an alternative to categorical flagging of toxicity (e.g., Davidson et al., 2017; Waseem et al., 2017; Founta et al., 2018, etc.) or highlighting spans in input statements (Lai et al., 2022) that is more useful for nuanced offensiveness (Wiegreffe et al., 2021) and more interpretable to humans (Miller, 2019). To study the influence of contexts on the understanding of offensive statements, we create COBRACORPUS, containing 32k COBRA contextstatement-explanation frames, generated with a large language model (GPT-3.5; Ouyang et al., 2022) with the help of human annotators (Table 1). Following recent successes in high-quality machine dataset creation (West et al., 2022; Kim et al., 2022a; Liu et al., 2022), we opt for machine generations for both the likely contexts for statements ![2_image_0.png](2_image_0.png) (as no corpora of context-statement pairs exist) and explanations, as relying solely on humans for explanations is costly and time-consuming. To explore the limits of context-aware reasoning, we also generate a challenge set of *counterfactual contexts* (COBRACORPUS-CF) that invert the offensiveness of statements (Fig. 1). To examine how context can be leveraged for explaining offensiveness, we train CHARM, a Context-aware Harm Reasoning Model, using COBRACORPUS. Through context-aware and context-agnostic model ablations, we show performance improvements with the use of context when generating COBRA explanations, as measured by automatic and human evaluations. Surprisingly, on the challenging counterfactual contexts (COBRACORPUS-CF), CHARM surpasses the performance of GPT-3.5—which provided CHARM's training data—at identifying offensiveness. Our formalism and models show the promise and importance of modeling contextual factors of statements for pragmatic understanding, especially for socially relevant tasks such as explaining the offensiveness of statements. ## 2 Cobra **Frames** We draw inspiration from "interactional frames" as described by Fillmore (1976), as well as more recent work on "social bias frames" (Sap et al., 2020) to understand how context affects the interpretation of the offensiveness and harms of statements. We design COBRA frames (S, C, E), an approach that takes into account a Statement in Context (§2.1) and models the harms, implications, etc (§2.2) with free-text Explanations. ## 2.1 Contextual Dimensions There are many aspects of context that influence how someone interprets a statement linguistically and semantically (Bender and Friedman, 2018; Hovy and Yang, 2021). Drawing inspiration from sociolinguistics on registers (Gregory, 1967) and the rational speech act model (Monroe and Potts, 2015), Context includes the situation, speaker identity, and listener identity for statements. 
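One possible concrete representation of such a frame, shown purely for illustration (the class and field layout are not part of the formalism itself), pairs a statement with a context record and one slot per explanation dimension defined in §2.2:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class CobraContext:
    situation: str   # short free-text description, e.g. "debate about defunding police"
    speaker: str     # likely social role or identity of the speaker
    listener: str    # likely social role or identity of the listener

@dataclass
class CobraFrame:
    statement: str
    context: CobraContext
    # One free-text explanation per dimension defined in Section 2.2:
    # intent, targetGroup, power, implication, emotionalReaction,
    # cognitiveReaction, offensiveness.
    explanations: Dict[str, str] = field(default_factory=dict)

frame = CobraFrame(
    statement="I'm impressed that your English is so good!",
    context=CobraContext(situation="workplace conversation",
                         speaker="white colleague",
                         listener="non-white colleague"),
)
```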
The **situation** is a short (2-8 words) free-text description of the situation in which the statement could likely be uttered (e.g., "Debate about defunding police", "online conversation in a forum about feminism"). The **speaker identity** and **listener identity** capture likely social roles of the statement's speaker and the listener (e.g., "a teacher", "a doctor") or their demographic identities (e.g., "queer man", "black woman"), in free-text descriptions. ## 2.2 Explanations Dimensions We consider seven explanation dimensions based on theories of pragmatics and implicature (Grice, 1975; Perez Gomez, 2020) and social psychology of bias and inequality (Nieto and Boyer, 2006; Nadal et al., 2014), expanding the reasoning dimensions substantially over prior work which only capture the targeted group and biased implication (Sap et al., 2020; ElSherief et al., 2021).3 We represent all explanations as free text, which is crucial to capture the nuances of offensiveness, increase the trust in models' predictions, and assist content moderators (Sap et al., 2020; Gabriel et al., 2022; Miller, 2019). Intent (Int.) captures the underlying communicative intent behind a statement (e.g., "to give a compliment"). Prior work has shown that intent can influence pragmatic implications related to biases and harms (Kasper, 1990; Dynel, 2015) and aid in hate speech detection (Holgate et al., 2018). Target Group (TG) describes the social or demographic group referenced or targeted by the post (e.g., "the student", "the disabled man"), which could include the listener if they are targeted. This dimension has been the focus of several prior works (Zampieri et al., 2019; Sap et al., 2020; Vidgen et al., 2021b), as it is crucial towards understanding the offensiveness and harms of the statement. 3While Social Bias Frames contain seven variables, only two of those are free-text explanations (the others being categorical; Sap et al., 2020). ![3_image_0.png](3_image_0.png) Power (Pow.) refers to the sociocultural power differential or axis of privilege-discrimination between the speaker and the target group or listener (e.g., "gender differential", "racial power differential"). As described by Nieto and Boyer (2006), individuals have different levels of power and privilege depending on which identity axis is considered, which can strongly influence the pragmatic interpretations of statements (e.g., gay men tend to hold more privilege along the gender privilege spectrum, but less along the sexuality one). Impact (Imp.) explain the biased, prejudiced, or stereotypical meaning implied by the statement, similar to Sap et al. (2020). This implication is very closely related to the received meaning from the listener's or targeted group's perspective and may differ from the speaker's intended meaning (e.g., for microaggressions; Sue, 2010). Emotional and Cognitive **Reactions (Emo.** & Cog.) capture the possible negative effects and harms that the statement and its implied meaning could have on the targeted group. There is an increasing push to develop content moderation from the perspective of the harms that content engenders (Keller and Leerssen, 2020; Vaccaro et al., 2020). As such, we draw from Nadal et al. (2014) and consider the perceived emotional and cognitive reactions of the target group or listener. 
The emotional reactions capture the short-term emotional effects or reactions to a statement (e.g., "anger and annoyance", "worthlessness") On the other hand, the cognitive reactions focus on the lessons someone could draw, the subsequent actions someone could take, or on the long-term harms that repeated exposure to such statements could have. Examples include "not wanting to come into work anymore," "avoiding a particular teacher," etc. ![3_image_1.png](3_image_1.png) | Unique # | Avg. # words | | |------------------------|----------------|-------| | Statements | 11,648 | 14.34 | | Situation | 23,577 | 6.90 | | Speakers | 10,683 | 3.11 | | Listeners | 13,554 | 4.05 | | Intents | 29,895 | 14.97 | | Target group | 11,126 | 3.48 | | Power dynamics | 12,766 | 10.46 | | Implication | 30,802 | 19.66 | | Emo. Reaction | 28,429 | 16.82 | | Cog. Reaction | 29,826 | 22.06 | | Offensiveness | 2,527 | 2.09 | | Total # in COBRACORPUS | 32,582 | | Offensiveness (Off.) captures, in 1-3 words, the type or degree of offensiveness of the statement (e.g., "sexism", "offensive generalization"). We avoid imposing a categorization or cutoff between offensive and harmless statements and instead leave this dimension as free-text, to preserve nuanced interpretations of statements and capture the full spectrum of offensiveness types (Jurgens et al., 2019). ## 3 Collecting Cobrac**Orpus** To study the contextual dynamics of the offensiveness of statements at scale, we create COBRACOR-PUS using a three-stage data generation pipeline with human verification, shown in Figure 2. Given that no available corpus contains statements with their contexts and explanations,4 we prompt a large language model (GPT-3.5; Ouyang et al., 2022) to generate contexts and explanations, following (Hartvigsen et al., 2022; West et al., 2022; Kim et al., 2022b,a). Specifically, we first generate multiple plausible contexts for statements, then generate the explanations for each context separately, using GPT-3.5 with in-context examples. Please refer to Appendix C for examples of our prompts. To ensure data quality, we design a set of crowdsourcing tasks to verify the generated contexts and explanations and collect suggestions. For all tasks, we pre-select crowd workers based on a qualification task that judged their understanding of each dimension. Please refer to Appendix A for the details of all crowd-sourcing experiments. ## 3.1 Collecting Statements We draw our statements from Toxigen (Hartvigsen et al., 2022), a dataset of GPT3-generated statements that are subtly or implicitly toxic, offensive, prejudiced, or biased against various demographic groups. Specifically, since we focus on the dynamics of offensiveness, we analyze a sample of 13,000 Toxigen statements tagged as "offensive". ## 3.2 Generating Likely Contexts Following work demonstrating that LLMs can generate realistic social situations related to majority and minority groups (Park et al., 2022), we use GPT-3.5 to construct plausible or *likely contexts* (i.e., situation, speaker identity, listener identity) in which a statement could be made. Specifically, we manually curate fifty statement-context pairs, out of which we sample five for each statement as in-context examples. Conditioned on the in-context examples, we then sample three contexts from GPT3.5 for each statement. The examples of prompts for plausible context generation are presented in Appendix C. 
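A minimal sketch of this few-shot sampling step is shown below; the prompt wording and the `llm_complete` helper are placeholders standing in for the actual prompts (Appendix C) and the GPT-3.5 API, not the pipeline's released code.

```python
import random

def build_context_prompt(statement, curated_pairs, k=5):
    """Assemble a few-shot prompt asking an LLM for a plausible context.

    `curated_pairs` is a list of (statement, situation, speaker, listener)
    tuples drawn from a manually curated pool; the exact wording here is
    illustrative only.
    """
    shots = random.sample(curated_pairs, k)
    lines = ["Given a statement, describe a likely situation, speaker, and listener.\n"]
    for s, situation, speaker, listener in shots:
        lines.append(f"Statement: {s}")
        lines.append(f"Situation: {situation}")
        lines.append(f"Speaker: {speaker}")
        lines.append(f"Listener: {listener}\n")
    lines.append(f"Statement: {statement}")
    lines.append("Situation:")
    return "\n".join(lines)

# `llm_complete` stands in for whatever completion API is available;
# sampling three times yields three candidate contexts per statement.
# contexts = [llm_complete(build_context_prompt(stmt, curated_pairs)) for _ in range(3)]
```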
Verifying Contexts We randomly sample 500 statement-context pairs and ask three annotators to rate the plausibility of the contexts (see Appendix A.2 for the exact questions).5 Of the 500 pairs, only 1% were marked as completely implausible or gibberish. 92% of the scenarios were marked as plausible by at least two workers, and some were marked as unlikely but technically plausible (e.g., A mayor in the public saying *"Black people are not* humans.") We retain these contexts since such rare situations could still happen, making them helpful for our analyses and modeling experiments. ## 3.3 Generating Cobra **Explanations** Similar to context generation, we make use of GPT-3.5's ability to produce rich explanations of social commonsense (West et al., 2022) to generate explanations along our seven dimensions. For each context-statement pair, we generate one full COBRA frame, using three randomly sampled incontext examples from our pool of six manually curated prompts. As shown in Table 2, this process yields a COBRACORPUS containing 32k full (context-statement-explanation) COBRA frames. Verifying Explanations To ensure data quality, we randomly sampled 567 (statement, context, explanations) triples and asked three annotators to rate how likely the explanations fit the statements in context. Inspired by prior work (Aguinis et al., 5On this context verification task, the agreement was moderately high, with 75.37% pairwise agreement and freemarginal multi-rater κ=0.507 (Randolph, 2005). | Friends Strangers Workplace Family | Other | | | | | |--------------------------------------|---------|--------|--------|--------|--------| | more off. | 5.28% | 43.09% | 27.54% | 2.85% | 21.24% | | less off. | 60.06% | 16.6% | 5.79% | 11.38% | 6.17% | Table 3: Percentage of contexts occurring under each category/scenario in COBRACORPUS-CF. Row 1 indicates statements that are more offensive due to their contexts vs Row 2 indicates those which are lesser offensive in comparison 2021; Clark et al., 2021; Liu et al., 2022), we also asked annotators to provide corrections or suggestions for those they consider unlikely. 97% of explanations were marked as likely by at least two annotators (majority vote) and 84% were marked as likely by all three annotators (unanimous).6 As illustrated in Table 1, humans tend to have better explanations of the implications of statements, whereas machines sometimes re-use words from the statement. This might explain the gap between the majority vote and unanimously approved examples, as the annotators might have different standards for what constitutes a good explanation. Analyzing COBRAC**ORPUS** We present some basic statistics of the COBRACORPUS in Table 2. The average length shows illustrates the level of nuance in some of the explanations (e.g., 22 words for cognitive reaction). Additionally, we analyze the distribution of target groups, finding that minority or marginalized groups like LGBTQIA+, people with disabilities, and women are among the most frequently targeted groups (see Figure 3a). Analyzing the distribution of the free-text offensiveness types, we find that microaggressions are the most frequent type of offensiveness (see Figure 3b). 
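Corpus statistics of this kind (Table 2 and Figure 3) can be reproduced with a few lines of counting code; the sketch below assumes the corpus is loaded as a list of dictionaries with free-text fields, which is an illustrative assumption about the data format rather than the released loader.

```python
from collections import Counter

def field_stats(frames, field):
    """Unique-value count and average length in words for one field,
    in the spirit of Table 2."""
    values = [f[field] for f in frames if f.get(field)]
    n_unique = len(set(values))
    avg_words = sum(len(v.split()) for v in values) / max(len(values), 1)
    return n_unique, avg_words

def top_values(frames, field, k=10):
    """Most frequent values of a field, e.g. target groups or
    offensiveness types (Figure 3a/3b style)."""
    return Counter(f[field].strip().lower()
                   for f in frames if f.get(field)).most_common(k)

# for field in ["statement", "situation", "targetGroup", "offensiveness"]:
#     print(field, field_stats(cobra_corpus, field))
# print(top_values(cobra_corpus, "offensiveness"))
```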
## 4 Cobracorpus-Cf**: Generating** Counterfactual Contexts To examine the limits of context-aware explanations of offensiveness, we generate COBRACORPUS-CF, a challenge set of *counterfactual* context pairs that invert the offensiveness of statements, inspired by adversarial and counterfactual test sets in NLP (Gardner et al., 2020; Li et al., 2020; Chang et al., 2021). Illustrated in Figure 1, our motivating question asks, how does the toxicity of a statement change with a different context? Creating COBRAC**ORPUS**-CF One of the difficulties of collecting counterfactual data is finding 6Our annotation agreement is moderately high, on average, with 89.10% pairwise agreement and κ=0.782. ![5_image_0.png](5_image_0.png) statements that are contextually ambiguous and can be interpreted in different ways. Statements such as microaggressions, compliments, criticism, and offers for advice are well-suited for this, as their interpretation can be highly contextual (Sue, 2010; Nadal et al., 2014). We scraped 1000 statements from a crowdsourced corpus of microaggressions,7including many contextually ambiguous statements. Following a similar strategy as in §3.2, we manually craft 50 (statement, offensive context, harmless context) triples to use as in-context examples for generating counterfactual contexts. Then, for each microaggression in the corpus, we generated both a harmless and offensive context with GPT-3.5, prompted with five randomly sampled triples as in-context examples. This process yields 982 triples, as GPT3.5 failed to generate a harmless context for 18 statements. Human Verification We then verify that the counterfactual contexts invert the offensiveness of the statements. Presented with both contexts, the annotators (1) rate the offensiveness of the statement under each context (*Individual*) and, (2) choose the context that makes the statement more offensive (*Forced Choice*). We annotate all of the 982 triples in this manner. When we evaluate models' performance on COBRACORPUS-CF (§5.2), we use the *Individual* ratings. In our experiments, we use the 344 (statement, context) pairs where 7https://www.microaggressions.com/ all three annotators agreed on the offensiveness, to ensure the contrastiveness of the contexts.8 Analyzing Counterfactual Contexts To compare with our likely contexts, we examine the types of situations that changed perceptions of toxicity using our human-verified offensive and harmless counterfactual contexts. We use the aforementioned *Forced Choice* ratings here. We detect and classify the category of the situation in the counterfactual context pairs as conversations occurring between friends, among strangers in public, at a workplace, and between members of a family, using keyword matching. We observe that contexts involving conversations occurring among strangers in public and at the workplace are perceived as more offensive than those which occur between friends (see Table 3). This aligns with previous literature showing that offensive, familiar, or impolite language might be considered more acceptable if used in environments where people are more familiar.(Jay and Janschewitz, 2008; Dynel, 2015; Kasper, 1990). Ethnographic research shows how crude language, including the use of offensive stereotypes and slurs, is often encouraged in informal settings like sports (Fine, 1979) or social clubs (Eliasoph and Lichterman, 2003). But such speech is generally considered less acceptable in a broader public sphere including in public and at the workplace. 
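A simplified version of this keyword matching is sketched below; the keyword lists are illustrative guesses rather than the exact lexicon behind Table 3.

```python
# Keyword lists are illustrative assumptions, not the lexicon used in the paper.
SITUATION_KEYWORDS = {
    "friends":   ["friend", "buddy", "party", "hanging out"],
    "strangers": ["stranger", "public", "street", "online"],
    "workplace": ["work", "office", "boss", "colleague", "coworker", "meeting"],
    "family":    ["family", "mother", "father", "parent", "sibling"],
}

def categorize_situation(context_text: str) -> str:
    """Assign a coarse category to a free-text context via keyword matching."""
    text = context_text.lower()
    for category, keywords in SITUATION_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return category
    return "other"

print(categorize_situation("A conversation between coworkers at the office"))  # workplace
```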
8We have high average annotation agreement in this task (κ = 0.73). Intent Target group Power Dynamics Implication Emotional React. Cognitive React. Offensiveness Average ![6_image_0.png](6_image_0.png) BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE BLEU ROUGE Small 46.3 58.1 20.2 52.6 51.7 67.2 29.5 37.9 22.9 28.8 17.1 24.2 30.9 48.8 31.2 45.4 Base 48.7 60.3 22.8 55.8 52.3 67.2 31.3 40.2 20.4 29.2 18.5 25.3 31.9 48.3 32.3 46.6 Large 52.3 63.2 29.2 59.3 55.9 **70.3** 35.1 43.1 23.0 31.9 **19.4** 26.8 32.2 **50.2** 35.3 49.2 XL 54.6 64.7 32.5 60.4 54.5 70.2 36.3 44.2 23.0 31.5 18.7 26.8 30.2 48.8 35.7 49.5 XXL 55.6 65.3 36.1 **61.2** 54.0 69.9 36.7 44.7 23.2 **32.6** 18.3 **27.1** 29.8 47.5 36.2 **49.8** ## 5 Experiments We investigate the role that context plays when training models to explain offensive language on both COBRACORPUS and COBRACORPUS-CF. Although GPT-3.5's COBRA explanations are highly rated by human annotators (§3.3), generating them is a costly process both from a monetary9and energy consumption perspective (Strubell et al., 2019; Taddeo et al., 2021; Dodge et al., 2022). Therefore, we also investigate whether such high-quality generations can come from more efficient neural models. We train CHARM (§5.1), with which we first empirically evaluate the general performance of our models in generating COBRA explanations. We then investigate the need for context in generating COBRA explanations. Finally, we evaluate both GPT-3.5's and our model on the challenging COBRACORPUS-CF context-statement pairs. ## 5.1 Cobra **Model: Ch**Arm We introduce CHARM, a FLAN-T5 model (Chung et al., 2022) finetuned on COBRACORPUS for predicting COBRA frames. Given a context-statement pair (C, S), CHARM is trained to generate a set of explanations E along all seven COBRA dimensions. Note that while there is a range of valid model choices when it comes to modeling COBRA, we choose FLAN-T5 based on its strong reasoning abilities in many language generation tasks. As illustrated in Fig. 4, both the source and the target are linearized sequences of CO-BRA frame elements. The source sequence concatenates the situation, speaker, listener, and statement into a sequence in the following format: "This is a conversation between [speaker] and [listener] in [situation]: [statement]", and the target 9Each COBRA explanation costs approximately $0.01 when using GPT-3.5. sequence is a concatenation of tagged explanation dimensions, e.g., "<intent> [intent] ![6_image_1.png](6_image_1.png) </intent>", "<targetGroup> [targetGroup] </targetGroup>.". We train the model with the standard cross-entropy loss. We randomly split COBRACORPUS into training (31k), and evaluation sets (1k) ensuring that no statement is present in multiple splits, with COBRACORPUS-CF serving as an additional evaluation set (we use the small-scale, highly curated 172 statement-context pairs in §4). We train different variants of CHARM, namely, they are CHARM Small (80M), Base (250M), Large (780M), XL (3B), XXL (11B), to explore how the model's explanation generation abilities differ across sizes. We use the same hyperparameters across different modeling variants. Unless otherwise mentioned, CHARM refers to XL, which we use as our default based on the combination of competitive performance and efficiency. During inference, we use beam search decoding with beam_size=4. Additional experimental details are provided in Appendix B.1. ## 5.2 Evaluation We evaluate our models in the following ways. 
For automatic evaluation of explanation generation, we use BLEU-2 and Rouge-L to capture the word overlap between the generations and references (Hashimoto et al., 2019). For human evaluation, we use the same acceptability task as in §3.3, using the unanimous setting (i.e., rated likely by all three annotators). For the counterfactual automatic evaluation, we convert the offensiveness dimension into a binary label based on the existence of certain phrases (e.g., "not offensive", "none", "harmless"). How good are different CHARM **models?** As shown in Table 4, we observe all variants of our model have relatively high BLEU and ROUGE scores. As the model size increases, the average performance increases accordingly. It is interesting | Statement | Situation | Speaker | Listener | GPT-3.5 | CHARM | Human | |-------------------------------------------------------|--------------------------|-----------------------|-----------------|-----------|---------|---------| | A conversation over the phone | Caller | Recipient of the call | | | | | | CAN YOU HEAR ME? | A group of people having | Male | subordinate | þ | þ | þ | | Female | | | | | | | | a meeting in person | manager | | | | | | | At a party | Male | Female stranger | þ | þ | þ | | | stranger | | | | | | | | Bounce, shine, and softness are the key to sexy hair. | Advice on how to | Female | Female | | | | | style their hair | friend | þ | | | | | | friend | | | | | | | | Professional setting | White | Black person | þ | þ | þ | | | person | | | | | | | | Wow! You're so articulate!! | Class | Professor | Student | þ | | | | Praising their daughter | Parent | Daughter | þ | þ | | | | Thats a good girl. | A professional setting | Male boss | Female employee | þ | þ | þ | Table 5: Examples of GPT-3.5 and CHARM's prediction on COBRACORPUS-CF. = harmless, þ = toxic. to see that CHARM (Large) achieves the best performance in the power dynamics and offensiveness dimension, which indicates that increasing modeling size does not guarantee improvement in every explanation dimension in COBRA. | Training | Inference w/ context | BLEU | ROUGE | Human* | |--------------|------------------------|--------|---------|----------| | w/ context × | × | 33.0 | 47.6 | 66.54 | | ✓ | × | 31.0 | 45.0 | 70.82 | | ✓ | ✓ | 35.7 | 49.5 | 75.46 | Table 6: Automatic and human evaluations of contextaware and context-agnostic versions of CHARM (XL). Human evaluations are done on the same random subset (100) on all three variations. **Takeaway:** context significantly improves CHARM both in training and inference on COBRACORPUS. How important context is for CHARM? We examine how context influences CHARM's ability to generate explanations. In context-agnostic model setups, the source sequence is formatted as "This is a statement: [statement]", omitting the speaker, listener, and situation. As shown in Table 6, incorporating context at training and inference time improves CHARM's performance across the automatic and human evaluation. This is consistent with our hypothesis that context is important for understanding the toxicity of statements. How well do models adapt to counterfactual contexts? We then investigate how well our model, as well as GPT-3.5 ,10 identifies the offensiveness 10text-davinci-003 Jan 13th 2022 of statements when the context drastically alters the implications. We then compare different models' ability to classify whether the statement is offensive or not given the counterfactual context in COBRACORPUS-CF. 
Surprisingly, although our model is only trained on the GPT-3.5-generated COBRACORPUS, it outperforms GPT-3.5 (in a few-shot setting as described in §3.3) on COBRACORPUS-CF (Table 7). Table 5 shows some example predictions on the counterfactual context pairs. GPT-3.5 tends to "over-interpret" the statement, possibly due to the information in the prompts. For example, for the last statement in Table 5, GPT-3.5 infers the implication as "It implies that people of color are not typically articulate", while such statement-context pair contains no information about people of color. In general, counterfactual contexts are still challenging even for our best-performing models. | Accuracy | Recall | Precision | F1 | | |------------|----------|-------------|------|------| | All Toxic | 50.0 | 100.0 | 50.0 | 67.8 | | GPT-3.5 | 55.2 | 99.4 | 52.7 | 68.9 | | XL WoC | 50.0 | 72.3 | 50.0 | 59.1 | | XL | 66.5 | 98.84 | 60.0 | 74.7 | | XXL | 71.4 | 96.5 | 64.2 | 77.1 | ## 6 Conclusion & Discussion We introduce COBRA frames, a formalism to distill the context-dependent implications, effects, and harms of toxic language. COBRA captures seven explanation dimensions, inspired by frame semantics (Fillmore, 1976), social bias frames (Sap et al., 2020), and psychology and sociolinguistics literature on social biases and prejudice (Nieto and Boyer, 2006; Nadal et al., 2014). As a step towards addressing the importance of context in content moderation, we create COBRACORPUS, a novel dataset of toxic comments populated with contextual factors as well as explanations. We also build COBRACORPUS-CF, a small-scale, curated dataset of toxic comments paired with counterfactual contexts that significantly alter the toxicity and implication of statements. We contribute CHARM, a new model trained with COBRACORPUS for producing explanations of toxic statements given the statement and its social context. We show that modeling without contextual factors is insufficient for explaining toxicity. CHARM also outperforms GPT-3.5 in COBRACORPUS-CF, even though it is trained on data generated by GPT-3.5. We view COBRA as a vital step towards addressing the importance of context in content moderation and many other social NLP tasks. Potential *future* applications of COBRA include automatic categorization of different types of offensiveness, such as hate speech, harassment, and microaggressions, as well as the development of more robust and fair content moderation systems. Furthermore, our approach has the potential to assist content moderators by providing free-text explanations. These explanations can help moderators better understand the rationale behind models' predictions, allowing them to make more informed decisions when reviewing flagged content (Zhang et al., 2023). This is particularly important given the growing calls for transparency and accountability in content moderation processes (Bunde, 2023). Besides content moderation, COBRA also has the potential to test linguistic and psychological theories about offensive statements. While we made some preliminary attempts in this direction in §3 and §4, more work is needed to fully realize this potential. For example, future studies could investigate the differences in in-group and out-group interpretations of offensive statements, as well as the role of power dynamics, cultural background, and individual sensitivities in shaping perceptions of offensiveness. 
## Limitations & Ethical And Societal Considerations We consider the following limitations and societal considerations of our work. Machine-generated Data Our analysis is based on GPT-3 generated data. Though not perfectly aligned with real-world scenarios, as demonstrated in Park et al. (2022), such analysis can provide insights into the nature of social interactions. However, this could induce specific biases, such as skewing towards interpretations of words aligned with GPT-3.5's training domains and potentially overlooking more specialized domains or minority speech (Bender et al., 2021; Bommasani et al., 2021). The pervasive issue of bias in offensive language detection and in LLMs more generally requires exercising extra caution. We deliberately generate multiple contexts for every statement as an indirect means of managing the biases. Nevertheless, it is a compelling direction for future research to investigate the nature of biases latent in distilled contexts for harmful speech and further investigate their potential impact. For example, it would be valuable to collect human-annotated data on CO-BRA to compare with the machine-generated data. However, we must also recognize that humans are not immune to biases (Sap et al., 2019b, 2022), and therefore, such investigations should be carefully designed. Limited Contextual Variables Although CO-BRACORPUS has rich contexts, capturing the full context of statements is challenging. Future work should explore incorporating more quantitative features (e.g., the number of followers of the speaker) to supplement contextual variables such as social role and power dynamics. In this work, we focus on the immediate context of a toxic statement. However, we recognize that the context of a toxic statement can be much longer. We have observed significant effects even in relatively brief contexts, indicating the potential for improved performance when more extended contexts are present. We believe that future research could explore the influence of richer contexts by including other modalities (e.g., images, videos, etc.). Limited Identity Descriptions Our work focused on distilling the most salient identity characteristics that could affect the implications of toxicity of statements. This often resulted in generic identity labels such as "a white person" or "A Black woman" being generated without social roles. This risks essentialism, i.e., the assumption that all members of a demographic group have inherent qualities and experiences, which can be harmful and perpetuate stereotypical thinking (Chen and Ratliff, 2018; Mandalaywala et al., 2018; Kurzwelly et al., 2020). Future work should explore incorporating more specific identity descriptions that circumvent the risk of essentializing groups. English Only We only look at a US-centric perspective in our investigation. Obviously, online hate and abuse is manifested in many languages (Arango Monnar et al., 2022), so we hope future work will adapt our frames to different languages and different cultures. Subjectivity in Offensiveness Not everyone agrees that things are offensive, or has the same interpretation of offensiveness (depending on their own background and beliefs; Sap et al., 2022). Our in-context prompts and qualification likely make both our machine-generated explanations and human annotations prescriptive (Röttger et al., 2021), in contrast to a more descriptive approach where we would examine different interpretations. We leave that up for future work. 
Dual Use We aim to combat the negative effects and harms of discriminatory language on already marginalized people (Sap et al., 2019b; Davidson et al., 2019). It is possible however that our frames, dataset, and models could be used to perpetuate harm against those very people. We do not endorse the use of our data for those purposes. Risk of Suppressing Speech Our frames, dataset, and models are built with content moderation in mind, as online spaces are increasingly riddled with hate and abuse and content moderators are struggling to sift through all of the content. We hope future work will examine frameworks for using our frames to help content moderators. We do not endorse the use of our system to suppress speech without human oversight and encourage practitioners to take non-censorship-oriented approaches to content moderation (e.g., counterspeech (Tekiroglu ˘ et al., 2022)). Harms of Exposing Workers to Toxic Content The verification process of COBRACORPUS and COBRACORPUS-CF is performed by human annotators. Exposure to such offensive content can be harmful to the annotators (Liu et al., 2016). We mitigated these by designing minimum annotation workload, paying workers above minimum wage ($7-12), and providing them with crisis management resources. Our annotation work is also supervised by an Institutional Review Board (IRB). ## Acknowledgements First of all, we thank our workers on MTurk for their hard work and thoughtful responses. We thank the anonymous reviewers for their helpful comments. We also thank Shengyu Feng and members of the CMU LTI COMEDY group for their feedback, and OpenAI for providing access to the GPT-3.5 API. This research was supported in part by the Meta Fundamental AI Research Laboratories (FAIR) "*Dynabench Data Collection* and Benchmarking Platform" award "ContExTox: Context-Aware and Explainable Toxicity Detection," and CISCO Ethics in AI award "*ExpHarm: Socially Aware, Ethically Informed, and ExplanationCentric AI Systems*." ## References Herman Aguinis, Isabel Villamor, and Ravi S. Ramani. 2021. Mturk research: Review and recommendations. Journal of Management, 47(4):823–837. Ayme Arango Monnar, Jorge Perez, Barbara Poblete, Magdalena Saldaña, and Valentina Proust. 2022. Resources for multilingual hate speech detection. In Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH). Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics. Emily M. Bender, Timnit Gebru, Angelina McMillanMajor, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In *Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency*, FAccT '21. Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. Enrico Bunde. 2021. AI-Assisted and explainable hate speech detection for social media Moderators–A design science approach. In Proceedings of the 54th Hawaii International Conference on System Sciences. Enrico Bunde. 2023. AI-Assisted and Explainable Hate Speech Detection for Social Media Moderators. https:// scholarspace.manoa.hawaii.edu/items/ f21c8b34-5d62-40d0-919f-a4e07cfbbc32. [Accessed 13-May-2023]. Kai-Wei Chang, He He, Robin Jia, and Sameer Singh. 2021. 
Robustness and adversarial examples in natural language processing. In *Proc. of EMNLP*. Jacqueline M Chen and Kate A Ratliff. 2018. Psychological essentialism predicts intergroup bias. Social Cognition, 36(3):301–323. Yejin Choi. 2022. The curious case of commonsense intelligence. *Daedalus*. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. ArXiv preprint. Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A. Smith. 2021. All that's 'human' is not gold: Evaluating human evaluation of generated text. In *Proc. of ACL*. Gloria Cowan and Cyndi Hodge. 1996. Judgments of Hate Speech: The Effects of Target Group, Publicness, and Behavioral Responses of the Target. *Journal of Applied Social Psychology*. Gloria Cowan and Jon Mettrick. 2002. The effects of Target Variables and Settting on Perceptions of Hate Speech1. *Journal of Applied Social Psychology*. Thomas Davidson, Debasmita Bhattacharya, and Ingmar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In *Proceedings* of the Third Workshop on Abusive Language Online. Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated Hate Speech Detection and the Problem of Offensive Language. In Proceedings of the 11th International Conference on Web and Social Media (ICWSM). Jesse Dodge, Taylor Prewitt, Remi Tachet des Combes, Erika Odmark, Roy Schwartz, Emma Strubell, Alexandra Sasha Luccioni, Noah A Smith, Nicole DeCario, and Will Buchanan. 2022. Measuring the carbon intensity of ai in cloud instances. In 2022 ACM Conference on Fairness, Accountability, and Transparency. Marta Dynel. 2015. The landscape of impoliteness research. *Journal of Politeness Research*. Nina Eliasoph and Paul Lichterman. 2003. Culture in Interaction. *American Journal of Sociology*. Mai ElSherief, Caleb Ziems, David Muchlinski, Vaishnavi Anupindi, Jordyn Seybolt, Munmun De Choudhury, and Diyi Yang. 2021. Latent hatred: A benchmark for understanding implicit hate speech. In *Proc.* of EMNLP. Charles J. Fillmore. 1976. Frame semantics and the nature of language*. *Annals of the New York Academy* of Sciences. Gary Alan Fine. 1979. Small Groups and Culture Creation: The Idioculture of Little League Baseball Teams. *American Sociological Review*. Susan T Fiske. 2017. Prejudices in cultural contexts: Shared stereotypes (gender, age) versus variable stereotypes (race, ethnicity, religion). Perspectives on psychological science. Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, and Yejin Choi. 2020. Social chemistry 101: Learning to reason about social and moral norms. In *Proc. of EMNLP*. Antigoni-Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of Twitter abusive behavior. In *ICWSM*. Saadia Gabriel, Skyler Hallinan, Maarten Sap, Pemi Nguyen, Franziska Roesner, Eunsol Choi, and Yejin Choi. 2022. Misinfo reaction frames: Reasoning about readers' reactions to news headlines. In *Proc.* of ACL. David Jurgens, Libby Hemphill, and Eshwar Chandrasekharan. 2019. A just and comprehensive strategy for using NLP to address online abuse. In *Proc.* of ACL. 
Matt Gardner, Yoav Artzi, Victoria Basmov, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating models' local decision boundaries via contrast sets. In *Findings of the Association for* Computational Linguistics: EMNLP 2020. Daphne Keller and Paddy Leerssen. 2020. Facts and where to find them: Empirical research on internet platforms and content moderation. In Social Media and Democracy. Cambridge University Press. Urja Khurana, Ivar Vermeulen, Eric Nalisnick, Marloes Van Noorloos, and Antske Fokkens. 2022. Hate speech criteria: A modular approach to task-specific hate speech definitions. In Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH). Tarleton Gillespie, Patricia Aufderheide, Elinor Carmi, Ysabel Gerrard, Robert Gorwa, Ariadna MatamorosFernandez, Sarah T Roberts, Aram Sinnreich, and Sarah Myers West. 2020. Expanding the debate about content moderation: Scholarly research agendas for the coming policy debates. *Internet Policy Review*. Hyunwoo Kim, Jack Hessel, Liwei Jiang, Ximing Lu, Youngjae Yu, Pei Zhou, Ronan Le Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, and Yejin Choi. 2022a. Soda: Million-scale dialogue distillation with social commonsense contextualization. *ArXiv* preprint. Noah D Goodman and Michael C Frank. 2016. Pragmatic language interpretation as probabilistic inference. *Trends in cognitive sciences*. Hyunwoo Kim, Youngjae Yu, Liwei Jiang, Ximing Lu, Daniel Khashabi, Gunhee Kim, Yejin Choi, and Maarten Sap. 2022b. Prosocialdialog: A prosocial backbone for conversational agents. *ArXiv preprint*. Michael Gregory. 1967. Aspects of varieties differentiation. *Journal of Linguistics*. Rita Kohli, Nallely Arteaga, and Elexia R McGovern. 2018. "compliments" and "jokes": Unpacking racial microaggressions in the K-12 classroom. In *Microaggression Theory Influence and Implications*. John Wiley & Sons. Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022. Toxigen: Controlling language models to generate implied and adversarial toxicity. In ACL. Jonatan Kurzwelly, Hamid Fernana, and Muhammad Elvis Ngum. 2020. The allure of essentialism and extremist ideologies. Anthropology Southern Africa, 43(2):107–118. Tatsunori B. Hashimoto, Hugh Zhang, and Percy Liang. 2019. Unifying human and statistical evaluation for natural language generation. In *Proc. of NAACLHLT*. Vivian Lai, Samuel Carton, Rajat Bhatnagar, Q Vera Liao, Yunfeng Zhang, and Chenhao Tan. 2022. Human-AI collaboration via conditional delegation: A case study of content moderation. In CHI. Eric Holgate, Isabel Cachola, Daniel Preo¸tiuc-Pietro, and Junyi Jessy Li. 2018. Why swear? analyzing and inferring the intentions of vulgar expressions. In Proc. of EMNLP. Chuanrong Li, Lin Shengshuo, Zeyu Liu, Xinyi Wu, Xuhui Zhou, and Shane Steinert-Threlkeld. 2020. Linguistically-informed transformations (LIT): A method for automatically generating contrast sets. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP. Dirk Hovy and Diyi Yang. 2021. The importance of modeling social factors of language: Theory and practice. 
In *Proceedings of the 2021 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi. 2022. WANLI: Worker and AI collaboration for natural language inference dataset creation. In *In proc. of Findings of EMNLP*. Association for Computational Linguistics. Liwei Jiang, Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jenny Liang, Jesse Dodge, Keisuke Sakaguchi, Maxwell Forbes, Jon Borchardt, Saadia Gabriel, Yulia Tsvetkov, Oren Etzioni, Maarten Sap, Regina Rini, and Yejin Choi. 2021. Can machines learn morality? the delphi experiment. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In *Proc. of EMNLP*. Gabriele Kasper. 1990. Linguistic politeness:: Current research issues. *Journal of Pragmatics*. Special Issue on Politeness. Herbert P Grice. 1975. Logic and conversation. In Speech acts. Brill. Timothy Jay and Kristin Janschewitz. 2008. The pragmatics of swearing. *Journal of Politeness Research*. Tara M Mandalaywala, David M Amodio, and Marjorie Rhodes. 2018. Essentialism promotes racial prejudice by increasing endorsement of social hierarchies. *Social Psychological and Personality Science*, 9(4):461–469. Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. *Artificial intelligence*. Will Monroe and Christopher Potts. 2015. Learning in the rational speech acts model. *ArXiv preprint*. Kevin L Nadal, Kristin C Davidoff, Lindsey S Davis, and Yinglee Wong. 2014. Emotional, behavioral, and cognitive reactions to microaggressions: Transgender perspectives. Psychology of Sexual Orientation and Gender Diversity. Leticia Nieto and Margot Boyer. 2006. Understanding oppression: Strategies in addressing power and privilege. *Colors NW*. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. Joon Sung Park, Lindsay Popowski, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2022. Social simulacra: Creating populated prototypes for social computing systems. Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology. Javiera Perez Gomez. 2020. Verbal microaggressions as hyper-implicatures. *J. Polit. Philos.* Justus J Randolph. 2005. Free-Marginal multirater kappa (multirater k[free]): An alternative to fleiss' Fixed-Marginal multirater kappa. In *Proceedings of* JLIS. Manoel Horta Ribeiro, Pedro H. Calais, Yuri A. Santos, Virgílio A. F. Almeida, and Wagner Meira Jr. 2017. Characterizing and Detecting Hateful Users on Twitter. In *Proceedings of the International AAAI Conference on Web and Social Media*. ArXiv: 1801.00317. Paul Röttger, Bertie Vidgen, Dong Nguyen, Zeerak Waseem, Helen Margetts, and Janet Pierrehumbert. 2021. HateCheck: Functional tests for hate speech detection models. In *Proc. of ACL*. Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019a. 
ATOMIC: an atlas of machine commonsense for if-then reasoning. In *The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The* Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019. Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019b. The risk of racial bias in hate speech detection. In *Proc. of ACL*. Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In *Proc. of ACL*. Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, and Noah A. Smith. 2022. Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In *Proc. of ACL*. Derald Wing Sue. 2010. *Microaggressions in everyday* life: Race, gender, and sexual orientation. John Wiley & Sons. Mariarosaria Taddeo, Andreas Tsamados, Josh Cowls, and Luciano Floridi. 2021. Artificial intelligence and the climate emergency: Opportunities, challenges, and recommendations. *One Earth*. Rachael Tatman. 2020. What I won't build. Widening NLP Workshop. Serra Sinem Tekiroglu, Helena Bonaldi, Margherita ˘ Fanton, and Marco Guerini. 2022. Using pre-trained language models for producing counter narratives against hate speech: a comparative study. In *Findings of the Association for Computational Linguistics:* ACL 2022. Kristen Vaccaro, Christian Sandvig, and Karrie Karahalios. 2020. "at the end of the day facebook does what itwants": How users experience contesting algorithmic content moderation. *Proc. ACM Hum.-* Comput. Interact. Bertie Vidgen, Dong Nguyen, Helen Margetts, Patricia Rossini, and Rebekah Tromble. 2021a. Introducing CAD: the contextual abuse dataset. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Bertie Vidgen, Tristan Thrush, Zeerak Waseem, and Douwe Kiela. 2021b. Learning from the worst: Dynamically generated datasets to improve online hate detection. In *Proc. of ACL*. Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Language Online. Peter West, Chandra Bhagavatula, Jack Hessel, Jena Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, and Yejin Choi. 2022. Symbolic knowledge distillation: from general language models to commonsense models. In *Proceedings of the* 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Sarah Wiegreffe, Ana Marasovic, and Noah A. Smith. ´ 2021. Measuring association between labels and free-text rationales. In *Proc. of EMNLP*. Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. Predicting the type and target of offensive posts in social media. In *Proc. of NAACL-HLT*. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with BERT. 
In *Proc. of ICLR*. Yiming Zhang, Sravani U. Nanduri, Liwei Jiang, Tongshuang Wu, and Maarten Sap. 2023. "thinking slow" in toxic language annotation with explanations of implied social biases. *arXiv*. Jingyan Zhou, Jiawen Deng, Fei Mi, Yitong Li, Yasheng Wang, Minlie Huang, Xin Jiang, Qun Liu, and Helen Meng. 2022. Towards identifying social bias in dialog systems: Frame, datasets, and benchmarks. Xuhui Zhou, Maarten Sap, Swabha Swayamdipta, Yejin Choi, and Noah Smith. 2021. Challenges in automated debiasing for toxic language detection. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. | Intent | Target group | Power Dynamics | Implication | Emotional React. | Cognitive React. | Offensiveness | Average | | |----------|----------------|------------------|---------------|--------------------|--------------------|-----------------|-----------|-------| | Small | 0.936 | 0.929 | 0.932 | 0.900 | 0.886 | 0.877 | 0.889 | 0.907 | | Base | 0.939 | 0.933 | 0.932 | 0.907 | 0.892 | 0.880 | 0.890 | 0.910 | | Large | 0.944 | 0.939 | 0.938 | 0.916 | 0.898 | 0.887 | 0.897 | 0.917 | | XL | 0.947 | 0.940 | 0.938 | 0.917 | 0.897 | 0.886 | 0.899 | 0.918 | | XXL | 0.948 | 0.939 | 0.937 | 0.918 | 0.898 | 0.887 | 0.895 | 0.917 | ## A Crowd-Sourcing On Mturk In this paper, human annotation is widely used in §3.2, §3.3, §4, §4, §5.2, and §5.2. We restrict our worker candidates' location to U.S. and Canada and ask the workers to optionally provide coarse-grained demographic information. Among 300 candidates, 109 workers pass the qualification tests. Note that we not only give the workers scores based on their accuracy in our tests, but also manually verify their provided suggestions for explanations. Annotators are compensated $12.8 per hour on average. The data collection procedure was approved by our institution's IRB. ## A.1 Annotator Demographics Due to the subjective nature of toxic language (Sap et al., 2022), we aim to collect a diverse set of annotators. In our final pool of 109 annotators, the average age is 36 (ranging from 18 to 65). For political orientation, we have 64/21/24 annotators identified as liberal/conservative/neutral, respectively. For gender identity, we have 61/46/2 annotators identify as man/woman/non-binary, respectively. There are also 40 annotators that self-identified as being part of a minority group. ## A.2 Annotation Interface And Instructions As recommended by (Aguinis et al., 2021), we design the MTurk interface with clear instructions, examples with explanations. The annotation snippet of collecting plausible scenarios (§3.2) is in Figure 5. The annotation snippet of collecting explanations (§3.3) is in Figure 6. The annotation snippet of collecting adversarial examples (§4) is in Figure 7. ## B Cha**Rm Experiment Details** B.1 Training Details With the HuggingFace's Transformers library11, different variants of FLAN-T5, small, base, large, XL and XXL, are finetuned on the COBRA training set for two epochs with AdamW optimizer with a learning rate of 1e−4and batch size of 16. We use beam search as the decoding algorithm and all reported results are based on a single run. We also train a XL model using the same architecture and hyperparameters but without the context information. The sizes of CHARM range from 80M to 11B, the largest of which takes 10 hours to train in FP32 on 5 A6000 GPUs with NVLink, and can do inference in FP16 on a single A6000 GPU. 
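To make the training setup above concrete, the following is a minimal sketch of the fine-tuning recipe described in this subsection (AdamW, learning rate 1e-4, batch size 16, two epochs, beam-search decoding). The input serialization, field names, and beam width are illustrative assumptions and may differ from the actual training scripts.

```python
# Minimal sketch of the CHARM fine-tuning recipe described above (assumptions:
# input serialization, field names, and beam width; not the released training code).
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-base"  # small/base/large/xl/xxl variants in the paper
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def collate(batch):
    # Hypothetical serialization of a COBRACORPUS instance: statement plus
    # contextual factors in, concatenated explanation dimensions out.
    sources = [f"statement: {ex['statement']} context: {ex['context']}" for ex in batch]
    targets = [ex["explanations"] for ex in batch]
    enc = tokenizer(sources, padding=True, truncation=True, return_tensors="pt")
    labels = tokenizer(targets, padding=True, truncation=True, return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
    enc["labels"] = labels
    return enc

def train(examples, epochs=2, lr=1e-4, batch_size=16, device="cuda"):
    model.to(device).train()
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loader = DataLoader(examples, batch_size=batch_size, shuffle=True, collate_fn=collate)
    for _ in range(epochs):
        for batch in loader:
            batch = {k: v.to(device) for k, v in batch.items()}
            loss = model(**batch).loss  # standard seq2seq cross-entropy
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

def explain(statement, context, device="cuda"):
    model.eval()
    inputs = tokenizer(f"statement: {statement} context: {context}",
                       return_tensors="pt").to(device)
    out = model.generate(**inputs, num_beams=4, max_new_tokens=256)  # beam-search decoding
    return tokenizer.decode(out[0], skip_special_tokens=True)
```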
We used the HuggingFace evaluate package to compute the BLEU-2 and ROUGE-L scores.

## B.2 Evaluation Details

See Table 8 for the BERTScore metrics across different model sizes.

## C GPT-3 Prompts Used In This Paper

The example prompts for generating likely contexts are in Figure 8. The example prompts for generating adversarial contexts are in Figure 9. The example prompts for generating the likely explanations are in Figure 10.

Figure 5: The annotation snippet of collecting plausible scenarios (§3.2).

Figure 7: The annotation snippet of collecting adversarial examples (§4).
Figure 8: The example prompts for generating likely contexts.
Figure 9: The example prompts for generating adversarial contexts.

Figure 10: The example prompts for generating the likely explanations.

## ACL 2023 Responsible NLP Checklist

A For every submission:

✓ A1. Did you describe the limitations of your work? An unnumbered Limitations & Ethical and Societal Considerations section after conclusion.

✓ A2. Did you discuss any potential risks of your work? An unnumbered Limitations & Ethical and Societal Considerations section after conclusion.

✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1.

✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B ✓ **Did you use or create scientific artifacts?** Section 3

✓ B1. Did you cite the creators of artifacts you used? Section 3.

✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? An unnumbered Limitations & Ethical and Societal Considerations section after conclusion.
✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? An unnumbered Limitations & Ethical and Societal Considerations section after conclusion.

✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? An unnumbered Limitations & Ethical and Societal Considerations section after conclusion.

✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? An unnumbered Limitations & Ethical and Societal Considerations section after the conclusion as well as Sections 3 and 4.

✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 5.

## C ✓ **Did you run computational experiments?** Section 5

✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix B.

✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 5 and Appendix B.

✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5 and Appendix B.

✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 5 and Appendix B.

## D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 3, 4, 5 and Appendix A

✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix A.

✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Appendix A.

✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Appendix A.

✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Appendix A.

✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Appendix A
li-caragea-2023-distilling
Distilling Calibrated Knowledge for Stance Detection
https://aclanthology.org/2023.findings-acl.393
Stance detection aims to determine the position of an author toward a target and provides insights into people{'}s views on controversial topics such as marijuana legalization. Despite recent progress in this task, most existing approaches use hard labels (one-hot vectors) during training, which ignores meaningful signals among categories offered by soft labels. In this work, we explore knowledge distillation for stance detection and present a comprehensive analysis. Our contributions are: 1) we propose to use knowledge distillation over multiple generations in which a student is taken as a new teacher to transfer knowledge to a new fresh student; 2) we propose a novel dynamic temperature scaling for knowledge distillation to calibrate teacher predictions in each generation step. Extensive results on three stance detection datasets show that knowledge distillation benefits stance detection and a teacher is able to transfer knowledge to a student more smoothly via calibrated guiding signals. We publicly release our code to facilitate future research.
# Distilling Calibrated Knowledge For Stance Detection ## Yingjie Li Cornelia Caragea University of Illinois at Chicago {yli300,cornelia}@uic.edu ## Abstract Stance detection aims to determine the position of an author toward a target and provides insights into people's views on controversial topics such as marijuana legalization. Despite recent progress in this task, most existing approaches use hard labels (one-hot vectors) during training, which ignores meaningful signals among categories offered by soft labels. In this work, we explore knowledge distillation for stance detection and present a comprehensive analysis. Our contributions are: 1) we propose to use knowledge distillation over multiple generations in which a student is taken as a new teacher to transfer knowledge to a new fresh student; 2) we propose a novel dynamic temperature scaling for knowledge distillation to calibrate teacher predictions in each generation step. Extensive results on three stance detection datasets show that knowledge distillation benefits stance detection and a teacher is able to transfer knowledge to a student more smoothly via calibrated guiding signals. We publicly release our code to facilitate future research.1 ## 1 Introduction The stance detection task aims to identify the position of a user toward a specific target (Mohammad et al., 2016b; Küçük and Can, 2020; AlDayel and Magdy, 2020). The target is usually a controversial topic (Stab et al., 2018; Glandt et al., 2021), a public figure (Sobhani et al., 2017; Li et al., 2021a) or a claim that could be a rumor's post (Derczynski et al., 2017; Gorrell et al., 2019). For example, for the sentence in Table 1, we can infer that the author is against the marijuana legalization implied by the presence of the words "illegal drugs" and "disproportionate share of violence". Even though impressive progress has been made in stance detection, most previous works rely on onehot annotation labels in which meaningful signals | Sentence: Illegal drugs such as marijuana are responsible for a disproportionate share of violence and social decline in America. Target: Marijuana Legalization Stance: Against | |---| Table 1: An example of stance detection. among different categories are ignored during training. Knowledge distillation (KD) (Hinton et al., 2015) transfers knowledge from a teacher model to a student model by training the student model to imitate the teacher's prediction logits (which we call soft labels). Recent work has started to investigate knowledge distillation in the context of stance detection. Li et al. (2021b) evaluated knowledge distillation on stance detection datasets and proposed an adaptive knowledge distillation method (AKD) that applies less temperature scaling to the samples with larger confidence obtained from teacher predictions. However, the improvement brought by AKD could be limited if the teacher model is poorly calibrated (over-confident in its predictions). Model miscalibration can be widely observed in modern neural networks (Guo et al., 2017; Yang and Song, 2021; Guo et al., 2021). If a model is well calibrated, then the probability associated with the predicted label should reflect its ground-truth correctness. According to our empirical observations, teacher models of AKD trained on stance detection datasets are not well-calibrated, producing peaked distributions of confidence in stance detection. Yang et al. 
(2019) showed that the teacher that provides less peaked training signals makes it possible for the student to learn better signals from interclass similarity and can potentially reduce overfitting. Therefore, we associate the performance of knowledge distillation with the calibration of teacher models in stance detection and propose to further improve the task performance by calibrating teacher predictions. Specifically, we recalibrate 1https://github.com/chuchun8/CKD teacher predictions using a post-processing method called temperature scaling (Platt, 1999; Guo et al., 2017) and the student model is trained based on both hard labels and calibrated teacher predictions. Further, *born-again networks* (Furlanello et al., 2018), in which the teacher and student models have identical model architectures, have achieved additional improvements with multiple students generations. At each consecutive step, a student model is taken as a new teacher to transfer knowledge to a new fresh student model. In this paper, we explore the born-again networks for stance detection and propose to calibrate teacher predictions in each generation. Extensive experiments on stance detection datasets show that a teacher can transfer knowledge to a student more smoothly via calibrated soft labels generated by the teacher and training student models over multiple generations helps improve the task performance. Our contributions are summarized as follows: - We investigate knowledge distillation in generations for stance detection and observe performance gains over multiple generations. - We explore the connection between knowledge distillation and calibration in stance detection and propose a Calibration-based Knowledge Distillation method (which we call CKD) that dynamically updates the temperature used in knowledge distillation in each generation. - Our CKD consistently outperforms strong baselines of stance detection, indicating that transferring knowledge from a well-calibrated teacher is more beneficial to the stance detection task. ## 2 Related Work Previous works for stance detection mainly focus on the in-target setting (Mohammad et al., 2016b; Du et al., 2017; Sobhani et al., 2017; Siddiqua et al., 2019; Li and Caragea, 2019, 2021a) where the test target has always been seen in the training stage. Recently, cross-target stance detection (Augenstein et al., 2016; Xu et al., 2018; Zhang et al., 2020; Liang et al., 2021) and zero-shot stance detection (Allaway and McKeown, 2020; Allaway et al., 2021; Liu et al., 2021a; Liang et al., 2022a,b; Li et al., 2023) have also attracted a lot of attention. In this paper, we mainly focus on the in-target stance detection. An ad-hoc training strategy that trains one model for one target has been widely used in previous works (Mohammad et al., 2017; Du et al., 2017; Sun et al., 2018; Wei et al., 2018; Li and Caragea, 2019, 2021b). However, it only considers one target during training and thus it fails to exploit the potential of all training data. Recent works (Schiller et al., 2021; Li and Caragea, 2021a) have shown that multi-target training that trains one model on all targets of a dataset can benefit the stance detection task. In our work, we adopt this multitarget training strategy and conduct extensive experiments in the multi-target training setting on three stance detection datasets (Stab et al., 2018; Glandt et al., 2021; Li et al., 2021a). Knowledge distillation, initially proposed by Hinton et al. 
(2015), has been widely used in natural language processing to distill external knowledge into a model (Kim and Rush, 2016; Sun et al., 2019; Aguilar et al., 2020; Tong et al., 2020; Currey et al., 2020; Jiao et al., 2020; Hosseini and Caragea, 2021; Zhao and Caragea, 2021). Interestingly, despite that recent works (Furlanello et al., 2018; Clark et al., 2019; Yang et al., 2019; Mobahi et al., 2020; Liu et al., 2021b) have made impressive progress in knowledge distillation in which the teacher and student have the same model architecture, not much attention has been paid to using knowledge distillation for stance detection. One exception is the work by Li et al. (2021b) who proposed an adaptive knowledge distillation method (AKD) that applies instance-specific temperature scaling to the teacher predictions in one generation. In contrast to this prior work which uses only a single generation step for knowledge distillation, we explore born-again networks on three stance datasets and observe further improvements in performance of stance detection models when they are trained over multiple generations. Motivated by recent advances in calibration of neural networks (Guo et al., 2017; Desai and Durrett, 2020; Guo et al., 2021; Park and Caragea, 2022a,b; Hosseini and Caragea, 2022), we hypothesize that the miscalibration of teacher predictions has a negative impact on the student model and further test this hypothesis by calibrating teacher's predictions in born-again networks. In this work, we conclude that calibrating teacher's predictions in each generation benefits the student model by providing smoother supervision signals in stance detection task. ![2_image_0.png](2_image_0.png) ## 3 Approach 3.1 Knowledge Distillation Suppose a given training stance dataset of size n is Dtr = {(xi, yi)} n i=1, where xiincludes an input sentence and a corresponding target and yiis the hard label. The goal of stance detection is to predict the stance label given an input sentence and a target. Standard supervised learning aims to minimize the cross-entropy loss of training data: $$L_{CE}=\sum_{(x_{i},y_{i})\in D^{tr}}l_{CE}(p(x_{i}),y_{i})\tag{1}$$ where lCE(.) represents the cross-entropy, p(xi) = σ(z(xi)) denotes softmax predictions of the model. In knowledge distillation, a student model is trained based on two signals: the hard labels and the teacher estimates, which are known as soft labels. Knowledge distillation (Hinton et al., 2015) transfers knowledge from a teacher model to a student model by minimizing LKD, which is the sum of the cross-entropy loss between the student predictions and hard labels and the distance loss between the predictions of the student and those of the teacher: $$L_{K D}=(1-\lambda)L_{C E}+\lambda L_{K L}\qquad\qquad(2)$$ $$L_{K L}=\sum_{x_{i}\in D^{t r}}l_{K L}(p_{s}^{\tau}(x_{i}),p_{t}^{\tau}(x_{i}))\qquad(3)$$ where LKL is Kullback-Leibler (KL) divergence loss, and p τ t(x) = σ(zt(x)/τ ) and p τs(x) = σ(zs(x)/τ ) denote the softmax outputs of the teacher and student models (respectively) with temperature scaling τ , where τ is the fixed temperature used to scale the model predictions, σ(.) is the softmax function, zt(x) and zs(x) denote the output logits of the teacher and student models respectively; λ is a coefficient to balance the importance of the two loss functions and it could be optimized using teacher annealing (Clark et al., 2019) that dynamically mixes the teacher prediction with the ground-truth label during training. 
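For illustration, a minimal PyTorch sketch of the distillation objective in Eqs. (1)–(3) is given below; the function and variable names are illustrative, and the snippet is a schematic rendering of the equations rather than a verbatim excerpt of the released implementation.

```python
# Minimal sketch of the knowledge distillation loss in Eqs. (1)-(3); names are illustrative.
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, hard_labels, tau=2.0, lam=0.5):
    """L_KD = (1 - lam) * L_CE + lam * L_KL with temperature tau."""
    # Eq. (1): cross-entropy of the student against the one-hot (hard) labels.
    ce = F.cross_entropy(student_logits, hard_labels)
    # Eq. (3): KL divergence between temperature-scaled student and teacher softmaxes.
    log_p_s = F.log_softmax(student_logits / tau, dim=-1)
    p_t = F.softmax(teacher_logits / tau, dim=-1)
    kl = F.kl_div(log_p_s, p_t, reduction="batchmean")
    # (Some implementations additionally rescale the KL term by tau ** 2.)
    return (1.0 - lam) * ce + lam * kl

# Teacher annealing (Clark et al., 2019) can be emulated by decaying lam over training,
# e.g. lam = 1.0 - step / total_steps, so the student leans on the teacher early on
# and on the gold labels toward the end.
```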
The student model learns more from teacher predictions at the early training stage, and mostly relies on the ground-truth labels at the end of training. Model performance can be further improved with *multiple students generations* (Furlanello et al., 2018) in which a student is taken as a new teacher to transfer knowledge to a new student. In this paper, we explore the knowledge distillation over multiple generations on stance detection (see Figure 1 for training over multiple generations). ## 3.2 Calibration Of Neural Networks Miscalibration can be widely observed in modern neural networks (Guo et al., 2017; Desai and Durrett, 2020). The prediction confidence of a wellcalibrated model is expected to reflect its classification accuracy. For example, given 100 model predictions, each with confidence of 0.7, we expect 70 of them to be correctly classified. One notion of miscalibration is Expected Calibration Error (ECE) (Naeini et al., 2015) that represents the difference in expectation between confidence and accuracy. ECE can be defined as: $$ECE=\mathbb{E}[\mathbb{P}\left(\hat{Y}=Y|\hat{P}=p\right)-p]]\tag{4}$$ where Y and Yˆ denote the ground-truth label and the predicted label, respectively, Pˆ is the output probability associated with the predicted label and p ∈ (0, 1) is a given confidence. As Eq. (4) cannot be estimated using a finite number of samples if Pˆ is a continuous random variable, empirical approximations (Guo et al., 2017) are usually adopted by partitioning predictions into M bins of equal size and computing the weighted average: ECE = X M m=1 |Bm| n|acc(Bm) − conf(Bm)| (5) where Bm denotes the set of samples whose prediction confidence falls into the interval ( m−1 M , m M ], M is the bin size, acc(Bm) and *conf*(Bm) are accuracy and average confidence of Bm, respectively. Temperature scaling (Guo et al., 2017; Desai and Durrett, 2020) is an effective post-processing calibration method that produces calibrated probabilities. Given the logit vector z(x), the new confidence prediction is σ(z(x)/τ ) after temperature scaling with temperature τ being usually fixed in knowledge distillation. ## 3.3 Calibrated Knowledge Distillation In this paper, we propose an improved knowledge distillation method called CKD and show that transferring knowledge from a well-calibrated teacher improves the performance of students for stance detection. The overall training procedure of CKD is shown in Algorithm 1. In the first step, a teacher model is trained from hard labels of stance detection dataset. In the second step, we do temperature scaling of teacher predictions but in our proposed approach, instead of using a fixed temperature τ that shows the best performance on the validation set, we dynamically choose the temperature τ that minimizes the ECE in each generation step (see Step 2 in Algorithm 1). Note that the selection of temperature in each generation is time-efficient as we simply need to divide the softmax teacher outputs by potential temperature values and then compute the ECE. Then, in each consecutive step, a new student model is initialized with an identical model architecture and trained based on both hard labels and calibrated teacher predictions. We expect the calibrated teacher predictions to help students learn better inter-class similarity and achieve better performance. The algorithm is also illustrated in Figure 1. 
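To make Step 2 of Algorithm 1 concrete, the following is a minimal NumPy sketch, with illustrative naming rather than the released code, of the binned ECE of Eq. (5) and of the grid search over τ that minimizes it on the validation set (the experiments later search τ from 1 to 5 with a step size of 0.01).

```python
# Minimal sketch of Step 2 of Algorithm 1: pick the temperature tau that minimizes the
# teacher's validation ECE. Scaling logits by tau never changes the argmax, so only the
# ECE needs to be recomputed for each candidate tau (no retraining involved).
import numpy as np

def ece(probs, labels, n_bins=10):
    """Expected Calibration Error with equal-width confidence bins, Eq. (5)."""
    conf = probs.max(axis=1)                      # confidence of the predicted label
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    err, total = 0.0, len(labels)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)       # bin B_m: confidences in (lo, hi]
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - conf[in_bin].mean())
            err += (in_bin.sum() / total) * gap   # |B_m|/n * |acc(B_m) - conf(B_m)|
    return err

def scaled_softmax(logits, tau):
    z = logits / tau
    z = z - z.max(axis=1, keepdims=True)          # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def select_temperature(teacher_logits_val, labels_val,
                       taus=np.arange(1.0, 5.0 + 1e-9, 0.01)):
    """Grid-search the tau that minimizes the teacher's ECE on the validation set."""
    return min(taus, key=lambda t: ece(scaled_softmax(teacher_logits_val, t), labels_val))
```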
To summarize, CKD is different from previous adaptive knowledge distillation (Li et al., 2021b) of stance detection in the following aspects: - We empirically connect the temperature of knowledge distillation with the calibration of distilled networks in CKD. - The temperature scaling hyperparameter τ is dynamically updated in each generation step. ## 4 Experimental Settings In this section, we first describe the stance detection datasets used for evaluation and introduce the evaluation metrics. Then, we describe several stance Algorithm 1: Calibrated-Knowledge Distillation with Dynamic Temp. Scaling Require :Train set Dtr = {(xi, yi)} n i=1 Val set Dval 1 Train the first teacher model by minimizing the cross-entropy loss of Dtr $$L_{C E}=\sum_{x_{i}\in D^{t r}}l_{C E}(p(x_{i}),y_{i})$$ 2 Do temperature scaling of teacher predictions with updated τ of temperature scaling that minimizes the ECE of teacher predictions on the Dval 3 Train a student model by minimizing the sum of cross-entropy loss and the KL-divergence loss of Dtr LKD = (1 − λ)LCE + λLKL 4 Iterative training: Use the student as a new teacher and go back to step 2. detection baselines that are used to be compared with our proposed method. ## 4.1 Datasets Three stance detection datasets of diverse domains are used to evaluate the performance of the proposed method. Train, validation and test sets are as provided by the authors. Examples from these datasets are shown in Table 2 and summary statistics of these datasets are shown in Tables 3, 4, 5. Details of these datasets are described as follows. AM AM (Stab et al., 2018) is an argument mining dataset containing eight topics: "Abortion", "Cloning", "Death Penalty", "Gun Control", "Marijuana Legalization", "Minimum Wage", "Nuclear Energy" and "School Uniforms". The dataset is annotated for detecting whether an argument is in support of, neutral or opposed to a given topic. COVID-19 COVID-19 (Glandt et al., 2021) is a stance detection dataset collected during COVID19 pandemic, which contains four targets "Face Mask", "Fauci", "Stay at Home Orders" and "School Closures". The dataset is annotated for detecting whether the author is in favor of, neutral or against these topics. P-Stance P-Stance (Li et al., 2021a) is a stance detection dataset collected during the 2020 U.S. presidential election, which contains three public figures "Donald Trump", "Joe Biden" and "Bernie Sanders". The dataset is annotated for detecting | Dataset | Target | Tweet | Stance | |-----------|----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------| | AM | Nuclear Energy | It has been determined that the amount of greenhouse gases have decreased by almost half because of the prevalence in the utilization of nuclear power. | Favor | | COVID-19 | Face Masks | @SpeakerVos Masks in public places are a necessary prevention tool. You DON'T need to have symptoms to be infected. Many people get Covid and never experience symptoms- They're known as silent Carriers, who are indeed, Spreading the virus. | Favor | | P-Stance | Joe Biden | Holy shat! Is @JoeBiden sleep walking and dreaming. Hey Joe, do think anyone believes Obummers Administration isnt guilty of treason and sedition. How about the Logan Act? Your team are going to be in jail in 2 years for long long sentences. 
#JusticeComing2019 | Against | Topic #Total %Support %Oppose %None Abortion 3,929 17.31 20.92 61.77 Cloning 3,039 23.23 27.61 49.16 Death Penalty 3,651 12.52 30.43 57.05 Gun Control 3,341 23.56 19.90 56.54 Marijuana 2,475 23.72 25.29 50.99 Minimum Wage 2,473 23.29 22.28 54.43 Nuclear Energy 3,576 16.95 23.82 59.23 School Uniforms 3,008 18.12 24.23 57.65 Total 25,492 19.40 24.30 56.30 Table 2: Example from each stance detection dataset. Table 3: Data distribution of AM dataset (Stab et al., 2018). Target #Total %Favor %Against %None Face Mask 1,707 23.72 25.29 50.99 Fauci 1,864 23.29 22.28 54.43 Stay at Home 1,372 16.95 23.82 59.23 School Closure 1,190 18.12 24.23 57.65 Total 6,133 19.40 24.30 56.30 Table 4: Data distribution of COVID-19 dataset (Glandt et al., 2021). Table 5: Distribution of instances in P-Stance dataset (Li et al., 2021a). whether the author is in favor of or against these presidential candidates during the election. ## 4.2 Evaluation Metrics Similar to Mohammad et al. (2016a), macroaverage of F1-score (F*macro*) and micro-average of F1-score (F*micro*) are adopted to evaluate the performance of models. Favg is first calculated by averaging the F1-scores of label "Favor" and "Against".2 We calculate the Favg for each target 2Note that we calculate the Favg and F*micro* by averaging the F1-scores of label "Favor", "Against" and "Neutral" for COVID-19 dataset to be consistent with their results. and F*macro* is calculated by averaging the Favg across all targets for each dataset. Further, we obtain F*micro* by averaging the F1-scores of "Favor" and "Against" across all targets for each dataset. | Target | #Total | #Train | #Dev | #Test | |----------------|----------|----------|--------|---------| | Donald Trump | 7,953 | 6,362 | 795 | 796 | | Joe Biden | 7,296 | 5,806 | 745 | 745 | | Bernie Sanders | 6,325 | 5,056 | 634 | 635 | | Total | 21,574 | 17,224 | 2,174 | 2,176 | ## 4.3 Baseline Methods We run experiments with the following strong baselines of stance detection: FNN: Feed-forward networks that take texts as inputs without considering the target information. BiCE (Augenstein et al., 2016): A BiLSTM model that uses conditional encoding for stance detection. The topic is first encoded by a BiLSTM, whose hidden representations are then utilized to initialize a second BiLSTM with texts as inputs. TAN (Du et al., 2017): TAN is an attentionbased LSTM model that learns the correlation between target and text representations. CrossNet (Xu et al., 2018): CrossNet improves the BiCE by introducing a self-attention layer. BERT (Devlin et al., 2019): A pre-trained language model that jointly encodes target and text, and predicts the stance by appending a linear layer to the hidden representation of the [CLS] token. We fine-tune BERT on the stance detection task. TGA (Allaway and McKeown, 2020): A BERTbased model that exploits topic-grouped attention. BERT-KE (Kawintiranon and Singh, 2021): A BERT model that is pre-trained with knowledge enhanced masked language modeling. AKD (Li et al., 2021b): An adaptive knowledge distillation method that uses instance-specific temperature for stance detection. In addition, we compare the CKD with the following knowledge distillation ablation methods: KD-1: A vanilla knowledge distillation method without temperature scaling (τ = 1). The student has the same model architecture as the teacher (which is also known as self-distillation (Furlanello et al., 2018; Zhang and Sabuncu, 2020)). KD-T: A knowledge distillation method with temperature τ . 
τ is chosen from {2, 3.5, 5} on the validation set. The student has the same model architecture as the teacher. LSR (Szegedy et al., 2016): A label smoothing regularization technique used to encourage the base model to be less confident in making predictions. The proposed method is listed as follows: CKD: A knowledge distillation method with temperature τ . Unlike KD-T where τ is fixed, in our proposed method τ is dynamically chosen from 1 to 5 with a step size of 0.01 to minimize the ECE in each generation (step 2 of Algorithm 1). The pretrained BERTweet (Nguyen et al., 2020) is used as teacher and student models for CKD and knowledge distillation methods. The student has the same model architecture as the teacher. We use the default bin size of 10 to compute ECE and we report the performance of using different bin sizes in the next section. Note that unlike CKD, temperature τ of KD-T is chosen out of three options because it is impractical for KD-T to select the temperature in the range of 1 to 5 with step size 0.01, which requires to repeat the whole training procedure hundreds of times. Since temperature scaling will not change the prediction label, we can only tune the temperature according to the classification performance of student model on the validation set for KD-T, which is one of weaknesses of previous born-again networks. In this paper, we propose to select the temperature according to the ECE, which can be simply calculated with each teacher model alone (no training involved in temperature selection step) and thus it is much more time-efficient. We adopt the teacher annealing (Clark et al., 2019) for all knowledge distillation methods in our experiments. We performed all experiments on a single NVIDIA A5000 GPU. We implement our model in PyTorch framework (Paszke et al., 2019) using HuggingFace Transformers library (Wolf et al., 2020). More details of the hyperparameters are described in Appendix A. ## 5 Results In this section, we first compare the proposed CKD with strong baselines of stance detection. Then we present an ablation study to show that dynamically updating the temperature helps improve the performance in generations. In addition, we compare | Model | COVID-19 | AM | P-Stance | |----------|-------------------------------------------------|---------------|---------------| | FNN | 61.83 (59.03) | 47.21 (45.91) | 72.08 (71.32) | | BiCE | 63.07 (60.34) | 48.63 (47.59) | 74.54 (73.66) | | TAN | 68.37 (66.45) | 49.98 (49.55) | 76.06 (75.49) | | CrossNet | 67.94 (66.16) | 51.66 (51.20) | 76.15 (75.48) | | BERT | 71.23 (68.71) | 59.99 (59.51) | 78.38 (77.96) | | TGA | 71.88 (69.69) | 60.24 (59.72) | 77.99 (77.66) | | BERT-KE | 72.78 (70.60) | 58.27 (57.89) | 79.74 (79.24) | | BERTweet | 74.76 (71.95) | 63.70 (63.71) | 81.97 (81.55) | | AKD | 75.07 (72.54) | 64.82 (64.86) | 81.98 (81.54) | | CKD | 76.93∗ (74.53∗ ) 66.92∗ (67.02∗ ) 82.34 (81.97) | | | Table 6: Performance comparisons of different stance detection models. We report Fmicro (F*macro*) over four runs. Bold scores are best results. ∗: CKD improves the best baseline at p < 0.05 with paired t-test. the performance of different bin sizes of our proposed CKD on stance detection datasets. Next, we compare our CKD with the best baseline AKD and visualize the relationship between the confidence and the evaluation metric (F*micro*) in the form of reliability diagrams. At last, we discuss the performance of our CKD and other KD baselines in ECE on stance detection datasets. 
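Before turning to the individual comparisons, the following is a minimal sketch (with assumed label ids and our own function names) of the Favg / Fmacro / Fmicro protocol of Section 4.2 that all reported scores follow; as noted there, the "Neutral" class is additionally included in the averages for COVID-19.

```python
# Minimal sketch of the evaluation protocol in Section 4.2 (label ids are assumptions).
# For COVID-19 the "Neutral" class is also included in the average, per footnote 2.
from sklearn.metrics import f1_score

FAVOR, AGAINST = 0, 1  # assumed label ids; "None"/"Neutral" would be a third id

def f_avg(y_true, y_pred, labels=(FAVOR, AGAINST)):
    # Average of the per-class F1 scores for "Favor" and "Against".
    return f1_score(y_true, y_pred, labels=list(labels), average=None).mean()

def f_macro(per_target):
    # per_target: {target_name: (y_true, y_pred)}; average F_avg over targets.
    return sum(f_avg(t, p) for t, p in per_target.values()) / len(per_target)

def f_micro(per_target):
    # Pool all targets, then average the "Favor"/"Against" F1 over the pooled data.
    y_true = [y for t, _ in per_target.values() for y in t]
    y_pred = [y for _, p in per_target.values() for y in p]
    return f_avg(y_true, y_pred)
```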
## 5 Results

In this section, we first compare the proposed CKD with strong baselines of stance detection. Then we present an ablation study to show that dynamically updating the temperature helps improve the performance in generations. In addition, we compare the performance of different bin sizes of our proposed CKD on stance detection datasets. Next, we compare our CKD with the best baseline AKD and visualize the relationship between the confidence and the evaluation metric (Fmicro) in the form of reliability diagrams. Finally, we discuss the performance of our CKD and other KD baselines in terms of ECE on the stance detection datasets.

| Model | COVID-19 | AM | P-Stance |
|---|---|---|---|
| FNN | 61.83 (59.03) | 47.21 (45.91) | 72.08 (71.32) |
| BiCE | 63.07 (60.34) | 48.63 (47.59) | 74.54 (73.66) |
| TAN | 68.37 (66.45) | 49.98 (49.55) | 76.06 (75.49) |
| CrossNet | 67.94 (66.16) | 51.66 (51.20) | 76.15 (75.48) |
| BERT | 71.23 (68.71) | 59.99 (59.51) | 78.38 (77.96) |
| TGA | 71.88 (69.69) | 60.24 (59.72) | 77.99 (77.66) |
| BERT-KE | 72.78 (70.60) | 58.27 (57.89) | 79.74 (79.24) |
| BERTweet | 74.76 (71.95) | 63.70 (63.71) | 81.97 (81.55) |
| AKD | 75.07 (72.54) | 64.82 (64.86) | 81.98 (81.54) |
| CKD | 76.93∗ (74.53∗) | 66.92∗ (67.02∗) | 82.34 (81.97) |

Table 6: Performance comparisons of different stance detection models. We report Fmicro (Fmacro) over four runs. Bold scores are best results. ∗: CKD improves the best baseline at p < 0.05 with paired t-test.

Main results Table 6 shows performance comparisons of CKD with the stance detection baselines on all three stance detection datasets. We can observe that our proposed CKD performs best overall. Specifically, the best student model of CKD outperforms the best-performing baseline AKD by 1.86%, 2.10% and 0.36% in Fmicro on the COVID-19, AM, and P-Stance datasets, respectively. A more detailed comparison between our CKD and AKD is presented later in this section. Note that CKD shows smaller improvements on the P-Stance dataset. One explanation is that P-Stance has a much larger training set for its targets, and the effect of knowledge distillation diminishes as the size of the training set increases—a fact that has been observed before (Zhang and Sabuncu, 2020).

Ablation study Table 7 shows the comparison results of our proposed method with the ablation methods of fixed temperature or no temperature (τ = 1) mentioned above on all three stance detection datasets. We train born-again networks sequentially with multiple generations. The Gen0 and Gen1 columns show the performance of the first teacher (i.e., the BERTweet model in Table 6) and student, respectively. Then the first student is taken as a new teacher to transfer knowledge to the second student, and the results of the second student are shown in the Gen2 column.

| Dataset | Model | Gen 0 | Gen 1 | Gen 2 | Gen 3 |
|---|---|---|---|---|---|
| COVID-19 | KD-1 | 74.76 (71.95) | 74.19 (71.61) | 74.40 (72.70) | 75.02 (72.90) |
| | KD-T | 74.76 (71.95) | 74.85 (72.98) | 75.29 (73.90) | 74.49 (71.94) |
| | LSR | 74.34 (71.70) | - | - | - |
| | CKD | 74.76 (71.95) | 74.90 (72.38) | 76.93∗ (74.53) | 75.70 (72.71) |
| AM | KD-1 | 63.70 (63.71) | 64.54 (64.56) | 65.23 (65.28) | 64.62 (64.68) |
| | KD-T | 63.70 (63.71) | 64.66 (64.63) | 65.12 (65.22) | 64.20 (64.24) |
| | LSR | 64.46 (64.52) | - | - | - |
| | CKD | 63.70 (63.71) | 65.19 (65.12) | 66.92∗ (67.02∗) | 65.55 (65.57) |
| P-Stance | KD-1 | 81.97 (81.55) | 81.74 (81.41) | 81.66 (81.25) | 81.71 (81.30) |
| | KD-T | 81.97 (81.55) | 81.55 (81.11) | 81.43 (80.98) | 81.29 (80.82) |
| | LSR | 81.43 (81.04) | - | - | - |
| | CKD | 81.97 (81.55) | 81.70 (81.31) | 82.16 (81.71) | 82.34 (81.97) |

Table 7: Performance comparisons of different models on stance detection datasets over multiple generations.
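For readers who want the generation-by-generation setup behind Table 7 in code form, the following is a minimal, simplified sketch of born-again training with a calibrated teacher. Here `make_student`, `train_one_generation`, and `calibrate_temperature` are placeholder callables (the last one corresponding to the ECE-based search sketched earlier), not the authors' released code.

```python
def born_again_training(teacher, make_student, train_one_generation,
                        calibrate_temperature, n_generations=3):
    """Sketch of multi-generation (born-again) training with a calibrated teacher.

    make_student() returns a freshly initialized model with the same architecture
    as the teacher; train_one_generation(student, teacher, tau) fine-tunes the
    student on hard labels plus the teacher's temperature-scaled predictions
    (with teacher annealing); calibrate_temperature(teacher) returns the tau that
    minimizes ECE on the validation set (step 2 of Algorithm 1).
    """
    generations = [teacher]                      # Gen 0: the original BERTweet teacher
    for _ in range(n_generations):
        tau = calibrate_temperature(teacher)     # calibrate before distilling
        student = make_student()                 # same architecture, new weights
        train_one_generation(student, teacher, tau)
        generations.append(student)
        teacher = student                        # the student becomes the next teacher
    return generations
```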
We can observe that both CKD and the baselines achieve superior performance over the first teacher model (Gen0) on COVID-19 and AM, demonstrating the effectiveness of born-again networks on stance detection. Moreover, our proposed CKD consistently outperforms the distillation baselines on almost all stance detection datasets in different generations, which indicates that dynamically updating τ according to ECE, as compared with a fixed τ, yields better performance, and that a well-calibrated teacher can help train a better student.

Bin size We evaluate the proposed CKD using different bin sizes. Table 8 shows performance comparisons of CKD using different bin sizes on the stance datasets. We can see that CKD models with different bin sizes outperform the best distillation baseline KD-T in most cases, demonstrating the effectiveness of the proposed method. Moreover, we observe that the default bin size of 10 achieves the best performance on all stance datasets overall.

| Dataset | Model | Gen 1 | Gen 2 | Gen 3 |
|---|---|---|---|---|
| COVID-19 | KD-T | 74.85 (72.98) | 75.29 (73.90) | 74.49 (71.94) |
| | Bin-5 | 75.28 (72.70) | 75.18 (72.46) | 75.34 (72.78) |
| | Bin-10 | 74.90 (72.38) | 76.93 (74.53) | 75.70 (72.71) |
| | Bin-15 | 75.20 (72.66) | 75.92 (73.48) | 75.85 (72.85) |
| AM | KD-T | 64.66 (64.63) | 65.12 (65.22) | 64.20 (64.24) |
| | Bin-5 | 64.80 (64.65) | 66.27 (66.32) | 65.99 (65.93) |
| | Bin-10 | 65.19 (65.12) | 66.92 (67.02) | 65.55 (65.57) |
| | Bin-15 | 64.84 (64.78) | 66.26 (66.43) | 66.34 (66.31) |
| P-Stance | KD-T | 81.55 (81.11) | 81.43 (80.98) | 81.29 (80.82) |
| | Bin-5 | 81.58 (81.11) | 81.63 (81.21) | 82.06 (81.63) |
| | Bin-10 | 81.70 (81.31) | 82.16 (81.71) | 82.34 (81.97) |
| | Bin-15 | 81.59 (81.16) | 82.26 (81.83) | 81.94 (81.47) |

Table 8: Performance comparisons of CKD using different bin sizes in Fmicro (Fmacro).

| Dataset | Model | Gen 1 | Gen 2 | Gen 3 |
|---|---|---|---|---|
| COVID-19 | AKD | 75.07 (72.54) | 75.62 (73.00) | 75.03 (72.67) |
| | CKD | 74.90 (72.38) | 76.93∗ (74.53) | 75.70 (72.71) |
| AM | AKD | 64.82 (64.86) | 64.47 (64.46) | 65.47 (65.54) |
| | CKD | 65.19 (65.12) | 66.92∗ (67.02∗) | 65.55 (65.57) |
| P-Stance | AKD | 81.98 (81.54) | 81.92 (81.54) | 81.48 (81.15) |
| | CKD | 81.70 (81.31) | 82.16 (81.71) | 82.34 (81.97∗) |

Table 9: Performance comparisons of CKD and AKD in Fmicro (Fmacro). ∗: CKD improves the best AKD at p < 0.05 with paired t-test.

CKD vs. AKD First, we compare CKD with AKD in terms of Fmicro and Fmacro in Table 9. AKD (Li et al., 2021b) adopts an instance-specific temperature scaling strategy and is trained in only one generation. Here, we further extend AKD over multiple generations. We see that AKD achieves better performance with more generations, reinforcing the claim that training student models in multiple generations can improve the task performance. In addition, our CKD shows superior performance over AKD in most generations for each dataset, which indicates that a well-calibrated teacher contributes more to the stance detection task in generations. Second, we compare reliability diagrams of our CKD and AKD on the COVID-19, AM, and P-Stance datasets in Figures 2 and 3.
Reliability diagrams are a visual representation of model calibration (Niculescu-Mizil and Caruana, 2005; Guo et al., 2017). The prediction space is usually discretized into ten bins, and the mean predicted value (i.e., confidence) is plotted against the expected sample accuracy in each bin. If a model is perfectly calibrated, then acc(Bm) will be equal to conf(Bm) for all m ∈ {1, ..., M}. We can see that AKD is over-confident in its predictions on all datasets, with large gap areas in Figure 3. We can then observe that our CKD produces more calibrated predictions on the COVID-19, AM and P-Stance datasets, verifying the calibration effects of temperature scaling. Based on these observations, we conclude that a student learns better from calibrated teacher predictions that provide less peaked supervision signals in each generation.

[Figures 2 and 3: reliability diagrams of CKD and AKD (confidence vs. Fmicro), with panels for the COVID-19, AM, and P-Stance datasets.]

ECE results In order to better understand the role of calibration in knowledge distillation in generations, we show the comparison of experimental results (ECE) on the three stance detection datasets in Table 10. For each model, we report the performance of the teacher that helps train the best student on each dataset. We can observe that our proposed CKD shows the best performance, achieving the lowest ECE on each dataset. Moreover, the best student of our CKD achieves the highest micro-averaged F1 on all datasets, as shown in Table 7, which indicates that calibrating teacher predictions can benefit the stance detection task.

| Model | COVID-19 | AM | P-Stance |
|---|---|---|---|
| KD-1 | 11.75 | 20.25 | 8.21 |
| KD-T | 3.79 | 14.23 | 5.03 |
| AKD | 12.38 | 17.70 | 7.73 |
| CKD | 2.48 | 3.79 | 3.58 |

Table 10: ECE of the teachers that help train the best students on the three stance detection datasets.

## 6 Conclusion

In this paper, we study the problem of knowledge distillation in generations on stance detection. We show that knowledge distillation in multiple generations can be beneficial to stance detection. Moreover, building on existing works, we provide a new perspective that a well-calibrated teacher can benefit the student by providing smoother training signals, making it possible for the student to learn from inter-class similarity. Our proposed CKD produces calibrated teacher predictions by dynamically updating the temperature used for scaling in each generation. Experimental results show that our proposed method consistently outperforms the best-performing baseline on different stance detection datasets. Future work includes extending our proposed method to a broader range of NLP tasks such as emotion classification.

## 7 Limitations

One limitation of our method is that it requires multiple generations to achieve the best performance on stance detection datasets. While the best student model significantly outperforms strong baselines, it takes a longer training time and requires extra memory for the teacher model. This is a common limitation of knowledge distillation in generations. Another limitation of our method is that the improvements brought by knowledge distillation saturate after a few generations, which can also be observed in previous work. We will explore how to mitigate this performance saturation in the future.

## 8 Ethical Considerations

Beyond the proposed method that helps correctly identify the stance towards specific targets, it is very important to consider the ethical implications of stance detection systems.
Since stance detection systems could automatically collect and aggregate the topical stance for a specific target, these systems may have significant impact on decision-making. Algorithms are not perfect, and thus a potential harm is that these systems may make incorrect predictions and further mislead the decision-making. Researchers should be aware of potential harms from the misuse of stance detection systems, and should respect people's privacy during the data collection. ## Acknowledgments We thank the National Science Foundation for support from grants IIS-1912887, IIS-2107487, and ITE-2137846 which supported the research and the computation in this study. We also thank our reviewers for their insightful feedback and comments. ## References Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin Yao, Xing Fan, and Chenlei Guo. 2020. Knowledge distillation from internal representations. In *The ThirtyFourth AAAI Conference on Artificial Intelligence,* AAAI 2020, pages 7350–7357. Abeer AlDayel and Walid Magdy. 2020. Stance detection on social media: State of the art and trends. arXiv preprint arXiv:2006.03644. Emily Allaway and Kathleen McKeown. 2020. Zeroshot stance detection: A dataset and model using generalized topic representations. In *Proceedings* of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8913– 8931. Emily Allaway, Malavika Srikanth, and Kathleen McKeown. 2021. Adversarial learning for zero-shot stance detection on social media. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4756–4767. Isabelle Augenstein, Tim Rocktäschel, Andreas Vlachos, and Kalina Bontcheva. 2016. Stance detection with bidirectional conditional encoding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 876–885. Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, and Quoc V. Le. 2019. Bam! Born-again multi-task networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5931–5937. Anna Currey, Prashant Mathur, and Georgiana Dinu. 2020. Distilling multiple domains for neural machine translation. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing (EMNLP), pages 4500–4511. Leon Derczynski, Kalina Bontcheva, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Arkaitz Zubiaga. 2017. SemEval-2017 task 8: RumourEval: Determining rumour veracity and support for rumours. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 69–76. Shrey Desai and Greg Durrett. 2020. Calibration of pre-trained transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 295–302. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Jiachen Du, Ruifeng Xu, Yulan He, and Lin Gui. 2017. Stance classification with target-specific neural attention networks. In *Proceedings of the 26th International Joint Conference on Artificial Intelligence*, pages 3988–3994. 
Tommaso Furlanello, Zachary Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. 2018. Born again neural networks. In *Proceedings of the* 35th International Conference on Machine Learning, pages 1607–1616. Kyle Glandt, Sarthak Khanal, Yingjie Li, Doina Caragea, and Cornelia Caragea. 2021. Stance detection in COVID-19 tweets. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1596–1611. Genevieve Gorrell, Elena Kochkina, Maria Liakata, Ahmet Aker, Arkaitz Zubiaga, Kalina Bontcheva, and Leon Derczynski. 2019. SemEval-2019 task 7: RumourEval, determining rumour veracity and support for rumours. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 845–854. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In *Proceedings of the 34th International Conference on Machine Learning*, pages 1321–1330. Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2021. An overview of uncertainty calibration for text classification and the role of distillation. In *Proceedings of the 6th Workshop on Representation Learning* for NLP (RepL4NLP-2021), pages 289–306. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. *arXiv* preprint arXiv:1503.02531. Mahshid Hosseini and Cornelia Caragea. 2021. Distilling knowledge for empathy detection. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 3713–3724, Punta Cana, Dominican Republic. Association for Computational Linguistics. Mahshid Hosseini and Cornelia Caragea. 2022. Calibrating student models for emotion-related tasks. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 9266–9278, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. TinyBERT: Distilling BERT for natural language understanding. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 4163– 4174. Kornraphop Kawintiranon and Lisa Singh. 2021. Knowledge enhanced masked language model for stance detection. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 4725–4735. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In *Proceedings of the* 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317–1327. Dilek Küçük and Fazli Can. 2020. Stance detection: A survey. *ACM Comput. Surv.*, 53(1):1–37. Yingjie Li and Cornelia Caragea. 2019. Multi-task stance detection with sentiment and stance lexicons. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6298–6304. Yingjie Li and Cornelia Caragea. 2021a. A multi-task learning framework for multi-target stance detection. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2320–2326. Yingjie Li and Cornelia Caragea. 2021b. Target-aware data augmentation for stance detection. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 1850–1860. 
Yingjie Li, Tiberiu Sosea, Aditya Sawant, Ajith Jayaraman Nair, Diana Inkpen, and Cornelia Caragea. 2021a. P-stance: A large dataset for stance detection in political domain. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 2355–2365. Yingjie Li, Chenye Zhao, and Cornelia Caragea. 2021b. Improving stance detection with multi-dataset learning and knowledge distillation. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 6332–6345. Yingjie Li, Chenye Zhao, and Cornelia Caragea. 2023. Tts: A target-based teacher-student framework for zero-shot stance detection. In *Proceedings of the* ACM Web Conference 2023, page 1500–1509. Bin Liang, Zixiao Chen, Lin Gui, Yulan He, Min Yang, and Ruifeng Xu. 2022a. Zero-shot stance detection via contrastive learning. In *Proceedings of the ACM* Web Conference 2022, page 2738–2747. Bin Liang, Yonghao Fu, Lin Gui, Min Yang, Jiachen Du, Yulan He, and Ruifeng Xu. 2021. Target-adaptive graph for cross-target stance detection. In *Proceedings of the Web Conference 2021*, page 3453–3464. Bin Liang, Qinglin Zhu, Xiang Li, Min Yang, Lin Gui, Yulan He, and Ruifeng Xu. 2022b. JointCL: A joint contrastive learning framework for zero-shot stance detection. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 81–91. Rui Liu, Zheng Lin, Yutong Tan, and Weiping Wang. 2021a. Enhancing zero-shot and few-shot stance detection with commonsense knowledge graph. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3152–3157. Yang Liu, Sheng Shen, and Mirella Lapata. 2021b. Noisy self-knowledge distillation for text summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 692–703. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *International Conference on Learning Representations*. Hossein Mobahi, Mehrdad Farajtabar, and Peter Bartlett. 2020. Self-distillation amplifies regularization in hilbert space. In Advances in Neural Information Processing Systems, volume 33, pages 3351–3361. Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiao-Dan Zhu, and Colin Cherry. 2016a. A dataset for detecting stance in tweets. In *LREC*. Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016b. Semeval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31–41. Saif M Mohammad, Parinaz Sobhani, and Svetlana Kiritchenko. 2017. Stance and sentiment in tweets. ACM Transactions on Internet Technology (TOIT), 17(3):26. Mahdi Pakdaman Naeini, Gregory F Cooper, and Milos Hauskrecht. 2015. Obtaining well calibrated probabilities using bayesian binning. In *AAAI*, pages 2901–2907. Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. 2020. BERTweet: A pre-trained language model for English tweets. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing: System Demonstrations, pages 9–14. Alexandru Niculescu-Mizil and Rich Caruana. 2005. Predicting good probabilities with supervised learning. In *Proceedings of the 22nd International Conference on Machine Learning*, page 625–632. Seo Yeon Park and Cornelia Caragea. 2022a. A data cartography based MixUp for pre-trained language models. 
In *Proceedings of the 2022 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4244–4250, Seattle, United States. Association for Computational Linguistics. Seo Yeon Park and Cornelia Caragea. 2022b. On the calibration of pre-trained language models using mixup guided by area under the margin and saliency. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5364–5374, Dublin, Ireland. Association for Computational Linguistics. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing* Systems, volume 32, pages 8024–8035. John C. Platt. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In *Advances in Large Margin Classifiers*, pages 61–74. Benjamin Schiller, Johannes Daxenberger, and Iryna Gurevych. 2021. Stance detection benchmark: How robust is your stance detection? KI - Künstliche Intelligenz. Umme Aymun Siddiqua, Abu Nowshed Chy, and Masaki Aono. 2019. Tweet stance detection using an attention based neural ensemble model. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1868–1873. Parinaz Sobhani, Diana Inkpen, and Xiaodan Zhu. 2017. A dataset for multi-target stance detection. In *Proceedings of the 15th Conference of the European* Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 551–557. Christian Stab, Tristan Miller, Benjamin Schiller, Pranav Rai, and Iryna Gurevych. 2018. Cross-topic argument mining from heterogeneous sources. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 3664– 3674. Qingying Sun, Zhongqing Wang, Qiaoming Zhu, and Guodong Zhou. 2018. Stance detection with hierarchical attention network. In *Proceedings of the 27th* International Conference on Computational Linguistics, pages 2399–2409. Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019. Patient knowledge distillation for BERT model compression. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4323–4332. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826. Meihan Tong, Bin Xu, Shuai Wang, Yixin Cao, Lei Hou, Juanzi Li, and Jun Xie. 2020. Improving event detection via open-domain trigger knowledge. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5887–5897. Penghui Wei, Junjie Lin, and Wenji Mao. 2018. Multitarget stance detection via a dynamic memoryaugmented network. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR 2018, Ann Arbor, MI, USA, July 08-12, 2018, pages 1229–1232. 
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations*, pages 38–45.

Chang Xu, Cécile Paris, Surya Nepal, and Ross Sparks. 2018. Cross-target stance classification with self-attention networks. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)*, pages 778–783.

Chenglin Yang, Lingxi Xie, Siyuan Qiao, and Alan L. Yuille. 2019. Training deep neural networks in generations: A more tolerant teacher educates better students. In *The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019*, pages 5628–5635.

Lehan Yang and Jincen Song. 2021. Rethinking the knowledge distillation from the perspective of model calibration. *arXiv preprint arXiv:2111.01684*.

Bowen Zhang, Min Yang, Xutao Li, Yunming Ye, Xiaofei Xu, and Kuai Dai. 2020. Enhancing cross-target stance detection with transferable semantic-emotion knowledge. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 3188–3197.

Zhilu Zhang and Mert Sabuncu. 2020. Self-distillation as instance-specific label smoothing. In *Advances in Neural Information Processing Systems*, volume 33, pages 2184–2195.

Chenye Zhao and Cornelia Caragea. 2021. Knowledge distillation with BERT for image tag-based privacy prediction. In *Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)*, pages 1616–1625, Held Online. INCOMA Ltd.

## A Hyperparameters

For each BERT-based model, AdamW optimizer is used with a learning rate of 2e-5, and gradient clipping if the norm of the gradients exceeds 1. Each model is fine-tuned for 5 epochs, with a mini-batch size of 32. For each non-BERT model, AdamW optimizer (Loshchilov and Hutter, 2019) is used with a learning rate of 1e-3, and gradient clipping if the norm of the gradients exceeds 1. Each model is trained for 30 epochs, with a mini-batch size of 128 in each iteration. A dropout of 0.5 is used after the embedding layer. The dimension of feed-forward layers is 300 and the hidden dimension of LSTM is 300 for TAN, BiCE and CrossNet.

## ACL 2023 Responsible NLP Checklist

## A For Every Submission:

✓ A1. Did you describe the limitations of your work? Limitation Section after the Conclusion.

✓ A2. Did you discuss any potential risks of your work? Ethical Considerations Section after the Conclusion.

✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1.

✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B ✓ **Did You Use Or Create Scientific Artifacts?**

Section 4.

✓ B1. Did you cite the creators of artifacts you used? Section 4.

✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 4.

✓ B3.
Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? We discussed our usage of baseline models and previous datasets in Section 4. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 4. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4. ## C ✓ **Did You Run Computational Experiments?** Section 5. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
wei-etal-2023-ptcspell
PTCSpell: Pre-trained Corrector Based on Character Shape and Pinyin for Chinese Spelling Correction
https://aclanthology.org/2023.findings-acl.394
Chinese spelling correction (CSC) is a challenging task with the goal of correcting each wrong character in Chinese texts. Incorrect characters in a Chinese text are mainly due to the similar shape and similar pronunciation of Chinese characters. Recently, the paradigm of pre-training and fine-tuning has achieved remarkable success in natural language processing. However, the pre-training objectives in existing methods are not tailored for the CSC task since they neglect the visual and phonetic properties of characters, resulting in suboptimal spelling correction. In this work, we propose to pre-train a new corrector named PTCSpell for the CSC task under the detector-corrector architecture. The corrector we propose has the following two improvements. First, we design two novel pre-training objectives to capture pronunciation and shape information in Chinese characters. Second, we propose a new strategy to tackle the issue that the detector's prediction results mislead the corrector by balancing the loss of wrong characters and correct characters. Experiments on three benchmarks (i.e., SIGHAN 2013, 2014, and 2015) show that our model achieves an average of 5.8% F1 improvements at the correction level over state-of-the-art methods, verifying its effectiveness.
# PTCSpell: Pre-trained Corrector Based on Character Shape and Pinyin for Chinese Spelling Correction

Xiao Wei1∗, Jianbao Huang1∗, Hang Yu1†, Qian Liu2

1Shanghai University 2Nanyang Technological University, Singapore

{xwei,845514379,yuhang}@shu.edu.cn, [email protected]

∗Xiao Wei and Jianbao Huang contributed equally. †Hang Yu is the corresponding author.

## Abstract

Chinese spelling correction (CSC) is a challenging task with the goal of correcting each wrong character in Chinese texts. Incorrect characters in a Chinese text are mainly due to the similar shape and similar pronunciation of Chinese characters. Recently, the paradigm of pre-training and fine-tuning has achieved remarkable success in natural language processing. However, the pre-training objectives in existing methods are not tailored for the CSC task since they neglect the visual and phonetic properties of characters, resulting in suboptimal spelling correction. In this work, we propose to pre-train a new corrector named PTCSpell for the CSC task under the detector-corrector architecture. The corrector we propose has the following two improvements. First, we design two novel pre-training objectives to capture pronunciation and shape information in Chinese characters. Second, we propose a new strategy to tackle the issue that the detector's prediction results mislead the corrector by balancing the loss of wrong characters and correct characters. Experiments on three benchmarks (i.e., SIGHAN 2013, 2014, and 2015) show that our model achieves an average of 5.8% F1 improvements at the correction level over state-of-the-art methods, verifying its effectiveness.

## 1 Introduction

Chinese spelling correction (CSC) is a task which detects incorrect characters in Chinese text and corrects them. CSC is often used as post-processing to ensure the quality of a search engine query text (Gao et al., 2010; Duan et al., 2018) and academic papers (Pollock and Zamora, 1984). CSC plays an important role in correcting recognition errors due to similar pronunciation and character shape (Park et al., 2021; Nguyen et al., 2021), which is a common issue with automatic speech recognition (ASR) and optical character recognition (OCR) systems.

| Type | Sentence | Correction |
|---|---|---|
| Phonological | 今天教师(shi1)里面很热。 Trans.: It's very hot in the teacher today. | 室(shi4) classroom |
| Visual | 操场上有一于(yu2)个人。 Trans.: There are a at people in the gym. | 千(qian1) thousand |

Table 1: Examples of incorrect character recognition due to similar pronunciation and shape.

Compared with other languages, Chinese has distinct characteristics, such as its unique pronunciation system (usually represented as pinyin) and writing norms, which often lead to two problems: the same pronunciation may correspond to multiple characters, and different characters may have similar shapes. According to Liu et al. (2010), around 83% and 48% of errors in Chinese texts can be attributed to phonological and visual similarity, respectively. The first sentence in Table 1 is an example of a character with a similar pronunciation, where "室" is misspelled as "师", and the second sentence is an example of a character with a similar shape, where "千" is misspelled as "于". Chinese spelling correction is challenging because different people have different writing habits, resulting in a variety of mistakes for each character.
As such, it is difficult for previous rule-based methods to address these issues effectively. Most of the recent works based on large pre-trained language models (Zhang et al., 2020; Zheng et al., 2021) perform well in the CSC task. For example, Guo et al. (2021); Liu et al. (2021) used artificially constructed confusion sets to pre-train language models for the CSC task. However, pre-training objectives tailored for CSC have not yet been explored, and the visual and phonetic properties of characters are not fully considered in the pre-training process. Moreover, most works are based on the detector-corrector architecture (Zhang et al., 2020; Li et al., 2021; Zhu et al., 2022). However, an inherent problem occurs if the detector predicts a correct character as being wrong, in which case the corrector may change the correct character to another one which is also a reasonable response. For example, in Table 2, the correct character "影" is replaced by "音", and the correct character "安" is replaced by "宁". It can be seen that the modified sentence changes the semantics of the original text.

| Sentence | Correction |
|---|---|
| 我的录影(ying3)机在哪? Where is my video recorder? | 音(yin1) tape |
| 晚上的花园很安(an1)静。 The garden is very quiet at night. | 宁(ning2) peaceful |

Table 2: Examples where a correct character is mistakenly replaced by the corrector.

To solve these problems, we propose a pre-trained corrector based on the visual and pronunciation features of characters for the CSC task. By doing so, the corrector can capture the phonetic similarity and visual similarity between characters, and such a pre-training strategy is more tailored to the CSC task. To address the problem of the detector's prediction results misleading the corrector, our basic idea is to strengthen the ability of the corrector to recognize correct characters, so as to ease the errors caused by the detector.

In this work, we propose a two-stage pre-trained corrector. In the first stage of pre-training, the visual and phonological features of characters are taken into account, and pre-training strategies matching CSC are designed: one maps visually similar characters to the correct character, and the other maps similar pinyin to the correct character. The pre-trained models are denoted as similar character shape BERT (SCSBERT) and similar pinyin BERT (SPBERT). In the second stage of pre-training, the three different pre-trained models, SCSBERT, BERT, and SPBERT, are fully fused. In addition, we propose a novel loss function to fine-tune the corrector. We calculate the loss of a small number of correct characters and the loss of all the wrong characters in the original text, so that the corrector not only corrects the wrong characters but also prevents correct characters from being changed by mistake.

To verify the effectiveness of our method, we conduct experiments on the SIGHAN 2013, 2014 and 2015 test sets using the official SIGHAN testing tool, and our model achieves an average improvement in F1 of 5.2% and 5.8% at the detection level and correction level compared to the latest works, MDCSpell (Zhu et al., 2022) and REALISE (Xu et al., 2021).
Our main contributions are summarized as follows: (i) We propose a pre-trained corrector which is tailored for the CSC task; (ii) We design two pre-training objectives based on vision and pronunciation to enable the corrector to capture the shape and pinyin similarity between characters; (iii) We propose a new strategy to solve the problem where the prediction results of the detector mislead the corrector, by balancing the loss of incorrect characters and correct characters.

## 2 Related Work

CSC is the task of detecting and correcting wrong characters in Chinese sentences. Previous works were mainly based on n-gram language models, rule-based methods and confusion sets for character error detection and correction (Yeh et al., 2013; Chang et al., 2015; Chu and Lin, 2015). Later, the CSC task was usually transformed into a sequence tagging task. Machine learning and deep learning methods such as CRF and Bi-LSTM have been used to classify each character in a text (Chang et al., 2015; Wang et al., 2018). These methods detect wrong characters and select the character with the highest probability of being correct.

Large pre-trained language models (PTMs) based on Transformer (Vaswani et al., 2017), such as BERT (Devlin et al., 2019), XLNET (Yang et al., 2019) and SpanBERT (Joshi et al., 2020), have been proposed and are being increasingly used for CSC tasks. In Soft-masked BERT (Zhang et al., 2020), BERT is used as a corrector to correct wrong positions predicted by the gated recurrent unit (GRU). Hong et al. (2019) set different masking schemes to fine-tune BERT, and selected the optimal candidate results as the error correction results. In addition to using BERT as a corrector, ELECTRA's discriminator (Clark et al., 2020) is also used to detect errors (Zheng et al., 2021). Although BERT can be used in CSC tasks and achieves good results, the similarity between the target character and the original character is not easy to learn. Therefore, a confusion set (Lee et al., 2019) is used to pre-train the BERT-like model to solve the above problem. Guo et al. (2021) used artificially constructed confusion sets to pre-train BERT, which makes BERT more capable of correcting phonologically and visually similar characters. Liu et al. (2021) set different proportions to select phonologically and visually similar characters in the process of using a confusion set for training.

Chinese spelling errors are mainly caused by characters which are similar in shape and pronunciation. Recent works focus on how to make full use of visual and phonetic features to improve the correction of these two types of errors. To correct errors due to characters having a similar shape, the Chinese characters are encoded into images or split into strokes. To correct errors due to characters having a similar pronunciation, the original text is converted into pinyin or speech features (Wang et al., 2018; Xu et al., 2021; Liu et al., 2021).

Although prior research has achieved good results on the CSC task, the following two problems remain unresolved. First, the visual and phonetic features of similar Chinese characters cannot be directly fused into BERT for pre-training. Second, in relation to the detector-corrector architecture, the corrector cannot effectively alleviate the errors caused by the detector. To solve these problems, we propose a pre-trained language model based on visual features and pinyin features, and a special loss function for the corrector.
## 3 Methodology

In this section, we briefly introduce the preliminaries of our method, then we detail the proposed PTCSpell.

## 3.1 Preliminaries

Task Definition The CSC task is defined as follows. Given a piece of Chinese text X = (x1, x2, x3, . . . , xn) which may contain some errors, the target is to transform it to a corrected text Y = (y1, y2, y3, . . . , yn). This task is generally formed as a mapping of the original sequence X to the corrected sequence Y, i.e., f(X) = Y. Its particularity lies in that there are usually only a small number of wrong characters, and a wrong character xi ∈ X has some similarity to its correct character yi ∈ Y.

Architecture The mainstream paradigm for the CSC task is based on the detector-corrector architecture (Zhang et al., 2020; Li et al., 2021). The detector identifies whether each character is correct or wrong, and the *corrector* generates corrections for the detected errors. Our method is designed based on this architecture. We briefly introduce the basic detector and corrector networks used in our method, and our motivation for designing our PTCSpell method.

![2_image_0.png](2_image_0.png)

Figure 1: The architecture of the detection network.

Detector The detection network used in our model is based on the ELECTRA discriminator (Clark et al., 2020), as shown in Figure 1. Following Zheng et al. (2021); Li et al. (2021), error detection is defined as a character-level binary classification task. We employ the pre-trained Chinese ELECTRA* to initialize the parameters of the discriminator. The input sequence X is represented as character-level tokens, and then ELECTRA's discriminator encodes them to Hd. The classification layer can be expressed as follows:

$$\mathbf{H}_{1}=GELU(\mathbf{W}_{1}\mathbf{H}^{d}+\mathbf{b}_{1})\qquad(1)$$

$$\mathbf{H}_{2}=LayerNorm(\mathbf{H}_{1})\qquad(2)$$

$$\mathbf{H}_{3}=\mathbf{W}_{2}\mathbf{H}_{2}+\mathbf{b}_{2}\qquad(3)$$

where *GELU* is an activation function proposed by Hendrycks and Gimpel (2016), *LayerNorm* is the layer normalization proposed by Ba et al. (2016), W1 ∈ Rd×d and W2 ∈ R1×d (d is the size of the hidden states from ELECTRA's discriminator) are trainable parameters, and b1 and b2 are bias vectors. H3 = {h1, h2, . . . , hn} is the classification layer's output representation of each character. The probability that each character may be wrong can be defined as:

$$P^{d}(g_{i}=1|X)=sigmoid(\mathbf{h}_{i})\qquad(4)$$

where *sigmoid* is the activation function, and Pd(gi = 1|X) is a conditional probability indicating the probability that xi is an error character.

*We use the released *chinese-electra-180g-base-discriminator* on Hugging Face https://huggingface.co/hfl/chinese-electra-180g-base-discriminator.
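For concreteness, a minimal PyTorch sketch of the detection head in Equations (1)–(4) is given below. The module and variable names are illustrative assumptions rather than the authors' released code, and the input is assumed to be the ELECTRA discriminator's hidden states Hd.

```python
import torch
import torch.nn as nn

class DetectionHead(nn.Module):
    """Character-level binary classifier on top of ELECTRA hidden states (Eqs. 1-4)."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)   # W1, b1
        self.act = nn.GELU()
        self.norm = nn.LayerNorm(hidden_size)
        self.out = nn.Linear(hidden_size, 1)                # W2, b2

    def forward(self, h_d: torch.Tensor) -> torch.Tensor:
        # h_d: (batch, seq_len, hidden_size) from ELECTRA's discriminator
        h1 = self.act(self.dense(h_d))        # Eq. (1)
        h2 = self.norm(h1)                    # Eq. (2)
        h3 = self.out(h2)                     # Eq. (3)
        return torch.sigmoid(h3).squeeze(-1)  # Eq. (4): P(g_i = 1 | X) per character
```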
Corrector The correction network is usually defined as a multi-class classification task at the token level, which is used to correct the characters at the locations where the detection network output is "1", as shown in Figure 1. The correction network in some recent works is based on BERT with the masked language model (MLM) task (Hong et al., 2019; Zhang et al., 2020). That is, each wrong character in the original text X detected by the detector is replaced with the [MASK] token, and then the model predicts the most likely correct character to replace the [MASK] token. However, it is important for the corrector to observe the wrong characters in the original text. For example, if people can read the wrong characters in context, it is easier to correct them based on a similar shape or similar pronunciation. As such, the disadvantage of the previous works is that the original text is lost when correcting, which may lead to deviation from the semantics of the original text.

Motivations The contribution of this work is mainly in relation to the correction network, which includes the following two aspects. Firstly, we pre-train the correction network using similar character shapes and similar pinyin. Secondly, we fine-tune the correction network under a special strategy which balances the loss of correct characters and wrong characters.

To solve the problem in the correction network caused by masked text, Zhu et al. (2022) use the original text as input instead of masked text, which is also adopted in our architecture. In addition, we pre-train the correction network from similar character shape and similar pinyin. These pre-training objectives reduce the gap between pre-training and fine-tuning, and make the error correction process more targeted to the two common types of spelling errors.

Another inherent problem caused by the detector-corrector architecture is that the detector may predict a correct character as a wrong character, which misleads the corrector. To alleviate the detector's prediction errors, we calculate the loss not only of wrong characters but also of correct characters during the fine-tuning of the corrector.

## 3.2 PTCSpell

We propose a pre-trained corrector based on character shape and pinyin for Chinese spelling correction, named PTCSpell. The architecture of the correction network is shown in Figure 2. It consists of three modules, i.e., Similar Character Shape BERT (SCSBERT), BERT and Similar Pinyin BERT (SPBERT), where SCSBERT and SPBERT are new designs. As suggested by the names of SCSBERT and SPBERT, the architecture of these two modules is exactly the same as BERT, both of which comprise 12 transformer blocks and 12 self-attention heads. The main difference between them is the pre-training objectives, i.e., SCSBERT and SPBERT are pre-trained by our designed pre-training objectives from scratch, while BERT is simply initialized by Chinese BERT with whole word masking† (Cui et al., 2021). More specifically, our proposed pre-training strategies are that SCSBERT is pre-trained by the visual features of characters and SPBERT is pre-trained by the phonetic features of characters (more details are given in Section 3.3). The input of SCSBERT and BERT is the original text, while the input of SPBERT is pinyin, which is converted from the original text using the pypinyin tool‡.

Formally, given input X, SCSBERT, BERT and SPBERT encode it as Hc, Hb and Hp, respectively. Then we consider how to fuse them to generate a unified representation for X. Since the character and pinyin at each position correspond to each other, it is natural to use the concatenation operator to fuse the different features. In addition, this operator preserves all information about the character and pinyin. It is expressed as follows:

$$H^{fused}=Concat(H^{c},H^{b},H^{p})\qquad(5)$$

where Hc, Hb and Hp are the last hidden states of SCSBERT, BERT and SPBERT, respectively, Hc ∈ Rb×m×d, Hb ∈ Rb×m×d, Hp ∈ Rb×m×d, and Hfused ∈ Rb×m×3d. b is the batch size, m is the maximum length of the text in the batch, and d is the size of the hidden states of BERT. After this, we feed Hfused to the classification layer. The formula of the classification layer is the same as (1), (2) and (3), except for the dimensions of W, i.e., W1 ∈ R3d×3d and W2 ∈ Rv×3d, where v is the size of the vocabulary. The final output of the classification layer is h′j, where j is the position of the character to be corrected.

$$P^{c}(y_{j}|X)=softmax(\mathbf{h}_{j}^{\prime})\qquad(6)$$

where *softmax* is the activation function, and Pc(yj|X) indicates the probability that xj is corrected to yj.
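As a reference, the following is a minimal PyTorch sketch of the fusion and correction head in Equations (5)–(6); encoder loading is omitted, and the names and layer sizes are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class CorrectionHead(nn.Module):
    """Concatenate SCSBERT/BERT/SPBERT states (Eq. 5) and classify over the vocabulary (Eq. 6)."""
    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.dense = nn.Linear(3 * hidden_size, 3 * hidden_size)  # W1 of size 3d x 3d
        self.act = nn.GELU()
        self.norm = nn.LayerNorm(3 * hidden_size)
        self.out = nn.Linear(3 * hidden_size, vocab_size)          # W2 of size v x 3d

    def forward(self, h_c, h_b, h_p):
        # h_c, h_b, h_p: (batch, seq_len, hidden_size) from SCSBERT, BERT, SPBERT
        h_fused = torch.cat([h_c, h_b, h_p], dim=-1)   # Eq. (5): concatenation
        h = self.norm(self.act(self.dense(h_fused)))    # same form as Eqs. (1)-(2)
        logits = self.out(h)                            # same form as Eq. (3)
        return torch.softmax(logits, dim=-1)            # Eq. (6): P(y_j | X)
```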
## 3.3 Pre-Training

There are two stages of pre-training in training the correction network: the first stage is pre-training SCSBERT and SPBERT from scratch, and the second stage is pre-training the whole correction network. The purpose of the first stage of pre-training is to ensure SCSBERT and SPBERT have the ability to correct characters with a similar shape and pronunciation, respectively. The second stage of pre-training can fully fuse the features extracted by SCSBERT, BERT and SPBERT.

SCSBERT To obtain the shape similarity of characters visually, we employ the SIFT (Lowe, 2004) image matching algorithm to calculate the visual similarity between two characters. Then, the characters with higher visual similarity are selected to replace the correct characters in the corpus to construct samples which have a similar character shape. As shown in Figure 2 (*SCSBERT Pre-training*), we use samples which have a similar character shape and the original text of the corpus as the pre-training dataset for SCSBERT.

SPBERT *SPBERT Pre-training* is illustrated in Figure 2. Firstly, we convert the prepared corpus text to pinyin, and then randomly replace some pinyin tokens with similar-pronunciation ones. The sequence with the replaced pinyin is the input text X, and the original corpus data is the corrected text. The pre-training process of SPBERT converts a similar pinyin to the correct character.

Both the first stage of pre-training and the second stage of pre-training use the following two strategies: (i) randomly replace 10% of the tokens of the text with similar characters or similar pinyin (the purple characters shown in Figure 2); (ii) randomly select 4% of the tokens of the text each time, and leave them unchanged (the green characters shown in Figure 2).

## 3.4 Fine-Tuning

Detection and correction tasks are defined as character classification. The difference is that the detection task is a binary classification task, while the correction task is a multi-class classification task, where the number of classes is the size of the vocabulary. The loss function of the detection network is defined as:

$$L^{d}=-\sum_{i=1}^{n}\log P^{d}(g_{i}|X)\qquad\qquad(7)$$

The correction network loss includes two parts, i.e., the loss of correct characters (the blue characters in Figure 2) and the loss of wrong characters (the red characters in Figure 2) in the original text. Given X = {x1, x2, . . . , xn} and Y = {y1, y2, . . . , yn}, we denote a target set T1 consisting of the correct characters in the original text. More specifically, we select the characters from Y that are equal to the characters in X:

$$T_{1}=\left\{y_{i}|x_{i}=y_{i},1\leq i\leq n,i\in N^{*}\right\}=\left\{t_{1},t_{2},\ldots,t_{m_{1}}\right\}\tag{8}$$

where m1 is the size of T1. A proportion α of the characters in T1 is selected and denoted as T2. The set size is m2 = αm1, where α is a pre-defined hyper-parameter.

$$T_{2}=\{t_{i}|t_{i}\in T_{1},1\leq i\leq m_{2},i\in N^{*}\}=\{t_{1}^{\prime},t_{2}^{\prime},\ldots,t_{m_{2}}^{\prime}\}\tag{9}$$

To obtain the corrected characters corresponding to the wrong characters in the original text, we denote F as a set with size m3:

$$F=\{y_{i}|x_{i}\neq y_{i},1\leq i\leq n,i\in N^{*}\}=\{f_{1},f_{2},\ldots,f_{m_{3}}\}\tag{10}$$

Therefore, the loss of the correction network can be expressed as:

$$L^{c}=-\sum_{i=1}^{m_{2}}\log P^{c}(t_{i}^{\prime}|X)-\sum_{i=1}^{m_{3}}\log P^{c}(f_{i}|X)\tag{11}$$

We train the detection network and correction network by minimizing Ld and Lc.
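To illustrate Equations (8)–(11), here is a minimal per-sentence sketch of the balanced corrector loss; the α-based sampling of correct positions and the variable names are illustrative assumptions rather than the released implementation, and batching is omitted.

```python
import torch

def corrector_loss(log_probs, x_ids, y_ids, alpha=0.04):
    """Sum the loss over all wrong characters (F) and a sampled alpha-fraction
    of correct characters (T2), in the spirit of Eqs. (8)-(11).

    log_probs: (seq_len, vocab_size) log-softmax outputs of the corrector
    x_ids, y_ids: (seq_len,) token ids of the input text and the gold text
    """
    correct_pos = (x_ids == y_ids).nonzero(as_tuple=True)[0]   # positions of T1
    wrong_pos = (x_ids != y_ids).nonzero(as_tuple=True)[0]     # positions of F

    # Sample m2 = alpha * m1 correct positions (T2).
    m2 = int(alpha * correct_pos.numel())
    keep = correct_pos[torch.randperm(correct_pos.numel())[:m2]]

    selected = torch.cat([keep, wrong_pos])
    # Negative log-likelihood of the gold character at each selected position.
    return -log_probs[selected, y_ids[selected]].sum()
```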
## 4 Experiments

## 4.1 Datasets And Metrics

To make a fair comparison with previous works, we use the SIGHAN training data (Wu et al., 2013; Yu et al., 2014; Tseng et al., 2015) and the generated pseudo data (Wang et al., 2018) as the datasets for pre-training and fine-tuning. We use the official SIGHAN2015 testing tool to evaluate the performance of our method on the three SIGHAN test sets. Following previous works, we use sentence-level precision, recall, and F1 as the metrics. Our data preprocessing includes converting traditional characters to simplified characters in the whole dataset using the OpenCC tool§ and removing a few mislabeled data in the SIGHAN2014 and SIGHAN2015 training sets. The SIGHAN dataset is repeated 5 times and joined with the Wang271K dataset as the final training set. The results are shown in Table 3.

§https://github.com/BYVoid/OpenCC

| Training Set | #Sent | Avg. Length | #Errors |
|---|---|---|---|
| SIGHAN2013 | 700 | 41.8 | 343 |
| SIGHAN2014 | 3434 | 49.6 | 5140 |
| SIGHAN2015 | 2337 | 31.3 | 3046 |
| Wang271K | 271329 | 42.6 | 391962 |
| Total | 303684 | 42.5 | 424607 |

| Test Set | #Sent | Avg. Length | #Errors |
|---|---|---|---|
| SIGHAN2013 | 1000 | 74.3 | 1224 |
| SIGHAN2014 | 1062 | 50.0 | 771 |
| SIGHAN2015 | 1100 | 30.6 | 703 |
| Total | 3162 | 50.9 | 2698 |

Table 3: Statistics of the training and test sets.

## 4.2 Implementation Details

We fine-tune the detection network for 10 epochs, in which the learning rate is 1.5e-5 and the batch size is 8. The learning process of the correction network consists of two stages of pre-training and one stage of fine-tuning. We train the correction network for 10 epochs in the three stages, in which the learning rate is 2e-5 and the batch size is 8. The optimizer for the detection network and the correction network is AdamW (Loshchilov and Hutter, 2018). We use a warm-up (He et al., 2016) strategy to adjust the learning rate. Specifically, the learning rate increases linearly in the first quarter of iterations, and decreases linearly in the remaining three quarters of iterations. Following previous works (Xu et al., 2021), "的", "地" and "得" in SIGHAN2013 are not considered during evaluation because they are mixed in the dataset.

## 4.3 Baselines

We compare our model with the following methods:

- SpellGCN (Cheng et al., 2020): Graph convolutional networks are used to incorporate phonological and visual similarity knowledge into BERT.

- GAD (Guo et al., 2021): This method captures rich global context information to reduce the impact of local error context information.

- DCN (Wang et al., 2021): The candidate Chinese characters are generated by pinyin, and then an attention-based network is used to model the dependencies between two adjacent characters.

- REALISE (Xu et al., 2021): This method selectively mixes the semantic, phonetic and graphic information of Chinese characters.
- MDCSpell (Zhu et al., 2022): This method designs a multi-task framework, where BERT is used as a corrector to capture the visual and phonological features of the characters, and the hidden states of the detector are integrated to reduce the impact of errors.

| Dataset | Method | Det-P | Det-R | Det-F1 | Cor-P | Cor-R | Cor-F1 |
|---|---|---|---|---|---|---|---|
| SIGHAN2013 | SpellGCN (Cheng et al., 2020) | 80.1 | 74.4 | 77.2 | 78.3 | 72.7 | 75.4 |
| | GAD (Guo et al., 2021) | 85.7 | 79.5 | 82.5 | 84.9 | 78.7 | 81.6 |
| | DCN (Wang et al., 2021) | 86.8 | 79.6 | 83.0 | 84.7 | 77.7 | 81.0 |
| | REALISE⋆ (Xu et al., 2021) | 88.6 | 82.5 | 85.4 | 87.2 | 81.2 | 84.1 |
| | MDCSpell (Zhu et al., 2022) | 89.1 | 78.3 | 83.4 | 87.5 | 76.8 | 81.8 |
| | ELECTRA+BERT⋆ (baseline) | 99.3 | 75.6 | 85.8 | 99.3 | 76.0 | 86.1 |
| | PTCSpell (ours)⋆ | 99.7 | 80.6 | 89.1 | 99.7 | 79.2 | 88.3 |
| SIGHAN2014 | SpellGCN (Cheng et al., 2020) | 65.1 | 69.5 | 67.2 | 63.1 | 67.2 | 65.3 |
| | GAD (Guo et al., 2021) | 66.6 | 71.8 | 69.1 | 65.0 | 70.1 | 67.5 |
| | DCN (Wang et al., 2021) | 67.4 | 70.4 | 68.9 | 65.8 | 68.7 | 67.2 |
| | REALISE (Xu et al., 2021) | 67.8 | 71.5 | 69.6 | 66.3 | 70.0 | 68.1 |
| | MDCSpell (Zhu et al., 2022) | 70.2 | 68.8 | 69.5 | 69.0 | 67.7 | 68.3 |
| | ELECTRA+BERT (baseline) | 65.4 | 68.1 | 66.7 | 64.0 | 64.0 | 64.0 |
| | PTCSpell (ours) | 84.1 | 71.2 | 77.1 | 83.8 | 69.4 | 75.9 |
| SIGHAN2015 | SpellGCN (Cheng et al., 2020) | 74.8 | 80.7 | 77.7 | 72.1 | 77.7 | 75.9 |
| | GAD (Guo et al., 2021) | 75.6 | 80.4 | 77.9 | 73.2 | 77.8 | 75.4 |
| | DCN (Wang et al., 2021) | 77.1 | 80.9 | 79.0 | 74.5 | 78.2 | 76.3 |
| | REALISE (Xu et al., 2021) | 77.3 | 81.3 | 79.3 | 75.9 | 79.9 | 77.8 |
| | MDCSpell (Zhu et al., 2022) | 80.8 | 80.6 | 80.7 | 78.4 | 78.2 | 78.3 |
| | ELECTRA+BERT (baseline) | 75.5 | 83.0 | 79.1 | 74.6 | 79.0 | 76.7 |
| | PTCSpell (ours) | 89.6 | 81.2 | 85.2 | 89.4 | 79.0 | 83.8 |

Table 4: Sentence-level results on the three SIGHAN test sets at the detection level and correction level (precision, recall, and F1).

## 4.4 Results

As shown in Table 4, we observe that our PTCSpell achieves significant performance gains over the other baselines. It can be seen that the performance of our PTCSpell greatly exceeds that of ELECTRA+BERT. Specifically, at the correction level, our PTCSpell outperforms it in terms of F1 by 2.2%, 11.9% and 7.1% on the three SIGHAN test sets, respectively. Then, we compare our PTCSpell with the most competitive works, such as MDCSpell and REALISE. At the correction level, our PTCSpell outperforms both of these in terms of F1 by 4.2%, 7.6% and 5.5%, and in terms of precision by 12.2%, 14.8% and 11% on the three SIGHAN test sets, respectively. The main reasons are two-fold. First, PTCSpell learns the similarity between characters, thus the corrector selects characters which are similar to the original text in the correction process. Second, the novel loss function proposed for the corrector alleviates the problem of the detector predicting correct characters as errors. Although PTCSpell has made great improvements in precision and F1, recall is not improved compared with the baselines. A possible reason is that the detector fails to detect all errors, leading to a bottleneck in the corrector.

## 4.5 Ablation Study

We explore the influence of the parameter α in the loss function on model performance and the contribution of the SCSBERT and SPBERT modules to the PTCSpell model. We evaluate our model at the sentence level on the 2013, 2014 and 2015 SIGHAN test sets.
## 4.5 Ablation Study

We explore the influence of the parameter α in the loss function on model performance and the contribution of the SCSBERT and SPBERT modules to the PTCSpell model. We evaluate our model at the sentence level on the SIGHAN2013, SIGHAN2014 and SIGHAN2015 test sets.

The average F1 performance over the three test sets is shown in Table 5 and Table 6 (the detailed results are provided in Appendix A.1). Table 5 shows the influence of different values of α in the loss function on the performance of the PTCSpell model. We vary α in the ranges [0, 0.1] and [0.92, 1] with a step of 0.02. We find that a smaller α is better than a larger one. When α is 0, the model loses the ability to retain the original text semantics, so it does not perform well. When α is 1, all characters participate in the calculation of the loss. When α is 0.04, the model achieves the best performance. Interestingly, according to our statistics, wrong characters account for 3.3% of all characters in the training set. We infer that when α is set to 0.04, the numbers of correct characters and wrong characters entering the loss are balanced, which is helpful for training the model.

| α | Det. F1 | Cor. F1 |
|----------|-----------|-----------|
| 0 | 78.0 | 76.4 |
| 0.04 | 83.8 | 82.7 |
| 0.02-0.1 | 83.3 | 82.1 |
| 0.92-1 | 82.9 | 81.9 |
| 1 | 83.1 | 81.9 |

Table 5: Average F1 over the three SIGHAN test sets with different α in the loss function.

| Model | α | Det. F1 | Cor. F1 |
|-------------------|------|-----------|-----------|
| PTCSpell | 0.04 | 83.8 | 82.7 |
| PTCSpell(-PT) | 0.04 | 83.3 | 82.2 |
| SCSBERT+BERT(-PT) | 0.04 | 83.1 | 82.0 |
| BERT+SPBERT(-PT) | 0.04 | 82.6 | 81.3 |
| BERT(-PT) | 0.04 | 82.3 | 81.2 |

Table 6: Average F1 over the three SIGHAN test sets for the module ablation. (-PT) denotes that the second stage of pre-training is removed.

Table 6 analyzes the contributions of the second stage of pre-training, SCSBERT, and SPBERT to our PTCSpell. To verify the effectiveness of the second-stage pre-training, we compare PTCSpell and PTCSpell (-PT), and find that the second stage of pre-training greatly improves the performance of the model. To verify the effectiveness of SCSBERT and SPBERT, we conduct experiments with PTCSpell (-PT), SCSBERT + BERT (-PT), BERT + SPBERT (-PT) and BERT (-PT), where SCSBERT and SPBERT are pre-trained with character shapes and pinyin, respectively. We find that SCSBERT and SPBERT both improve the performance of the model. These experiments verify the effectiveness of SCSBERT, SPBERT and the second stage of pre-training.
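The role of α discussed above can be made concrete with a short sketch. The exact sampling rule is defined together with the loss earlier in the paper (Equation 11 sums over the wrong characters plus a subset of correct characters); the code below is a minimal, hypothetical reading in which α is the probability of keeping each correct position in the corrector loss, so that with roughly 3.3% wrong characters a setting of α ≈ 0.04 makes the two groups contribute in comparable numbers. Tensor names and shapes are illustrative and are not taken from the authors' implementation.

```python
import torch
import torch.nn.functional as F


def corrector_loss(logits: torch.Tensor,      # [batch, seq_len, vocab]
                   targets: torch.Tensor,     # [batch, seq_len] gold character ids
                   error_mask: torch.Tensor,  # [batch, seq_len] bool, True where x_i != y_i
                   alpha: float = 0.04) -> torch.Tensor:
    """Cross-entropy over (a) all wrong positions and (b) an alpha-fraction of
    correct positions, mirroring the two sums of Eq. 11 under the stated assumption.
    Eq. 11 uses plain sums; we normalize by the number of kept positions for readability."""
    keep_correct = (torch.rand_like(targets, dtype=torch.float) < alpha) & ~error_mask
    keep = error_mask | keep_correct                       # positions entering the loss
    per_token = F.cross_entropy(logits.transpose(1, 2),    # [batch, vocab, seq_len]
                                targets, reduction="none")
    return (per_token * keep.float()).sum() / keep.float().sum().clamp(min=1.0)


# Toy usage with random tensors (shapes only for illustration)
logits = torch.randn(2, 6, 100)
targets = torch.randint(0, 100, (2, 6))
errors = torch.zeros(2, 6, dtype=torch.bool)
errors[0, 3] = True                                        # one wrong character
print(corrector_loss(logits, targets, errors).item())
```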
## 4.6 Case Study

Table 7 provides details on two cases of Chinese spelling correction. Given an input sentence, we compare the error correction behaviour of the baseline (ELECTRA + BERT) and PTCSpell. The detector (ELECTRA) detects character errors, where "0" means the current character is correct and "1" means the current character is wrong. If the output of the detector at the current position is "1", the character is corrected by the corrector.

| Input | 乍么才能让孩子对绘画蝉声兴趣呢? |
|----------|------------------------------------------------|
| Detector | 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0 |
| Baseline | 什么才能让孩子对绘画蚊身兴趣呢? |
| PTCSpell | 怎么才能让孩子对绘画产生兴趣呢? |
| Trans. | How to get children interested in painting? |
| Input | 我们学校购买了十台录影机。 |
| Detector | 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 |
| Baseline | 我们学校购买了十台录音机。 |
| PTCSpell | 我们学校购买了十台录影机。 |
| Trans. | Our school has purchased ten video recorders. |

Table 7: Two cases of Chinese spelling correction.

In the first case, "乍" is correctly predicted to be a wrong character, but the baseline corrects it to "什", forming a phrase with the following "么" and failing to capture the character shape of "乍". Our PTCSpell corrects "乍" to "怎" because it captures the character shape successfully. For the other two wrong characters, "蝉声", it is difficult for the baseline to correct consecutive wrong characters, causing "蝉" to be corrected to "蚊" due to its similar shape. In contrast, our PTCSpell corrects them to "产生" based on their similar pronunciations and is not affected by the consecutive errors. The first case shows that PTCSpell handles both similar-shape and similar-pronunciation errors well. In the second case, "影" is a correct character, but the detector predicts it to be wrong. The baseline does not handle this well and replaces it with "音", while PTCSpell leaves the character unchanged. The second case shows that PTCSpell is not misled by the detector, because it learns whether the current character is really wrong and tries its best to maintain the semantics of the original sentence during error correction.

## 5 Conclusion

We propose a pre-trained corrector, which is part of a detector-corrector architecture, for modeling similar-shape and similar-pronunciation errors in the CSC task. Our main contribution is to enhance the ability of the corrector. The architecture of PTCSpell is based on BERT, but it is pre-trained from scratch based on similar character shapes and pinyin, respectively. In addition, we propose a special loss function that enhances the ability of the corrector to retain the correct characters of the original text. The experimental results show that our model is effective. In the future, we will design a better detector to further improve the recall score of the whole detector-corrector model.

## Limitations

Our model achieves outstanding performance on Chinese spelling correction. However, it has several potential limitations: (i) Errors of missing and redundant characters cannot be corrected by our model; PTCSpell only focuses on spelling errors and requires that the input text has no grammatical or semantic errors. (ii) The target language is Chinese. The pre-trained model based on similar pinyin cannot be adapted to other languages, because the pinyin input method is unique to Chinese; the pre-trained model based on similar character shapes, however, can be adapted to other languages well, because character errors caused by similar shapes are a common problem in many languages. Nevertheless, the idea we put forward of matching the pre-trained model to the error correction task is applicable to all languages.

## Acknowledgments

This research was supported by the Shanghai Science and Technology Young Talents Sailing Program (22YF1413600). We thank Maoxin Shen for helpful discussions, and the anonymous reviewers for their insightful comments.

## References

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.

Tao-Hsing Chang, Hsueh-Chih Chen, and Cheng-Han Yang. 2015. Introduction to a proofreading tool for chinese spelling check task of sighan-8. In Proceedings of the Eighth SIGHAN Workshop on Chinese Language Processing, pages 50–55.

Xingyi Cheng, Weidi Xu, Kunlong Chen, Shaohua Jiang, Feng Wang, Taifeng Wang, Wei Chu, and Yuan Qi. 2020. SpellGCN: Incorporating phonological and visual similarities into language models for Chinese spelling check. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 871–881, Online. Association for Computational Linguistics.

Wei-Cheng Chu and Chuan-Jie Lin. 2015. NTOU Chinese spelling check system in sighan-8 bake-off. In Proceedings of the Eighth SIGHAN Workshop on Chinese Language Processing, pages 137–143, Beijing, China. Association for Computational Linguistics.

Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D.
Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. In *International Conference on Learning Representations*. Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, and Ziqing Yang. 2021. Pre-training with whole word masking for chinese bert. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, 29:3504–3514. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Jianyong Duan, Tianxiao Ji, and Hao Wang. 2018. Error correction for search engine by mining bad case. IEICE Transactions on Information and Systems, E101.D(7):1938–1945. Jianfeng Gao, Xiaolong Li, Daniel Micol, Chris Quirk, and Xu Sun. 2010. A large scale ranker-based system for search query spelling correction. In *Proceedings* of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 358–366, Beijing, China. Coling 2010 Organizing Committee. Zhao Guo, Yuan Ni, Keqiang Wang, Wei Zhu, and Guotong Xie. 2021. Global attention decoder for Chinese spelling error correction. In *Findings of* the Association for Computational Linguistics: ACLIJCNLP 2021, pages 1419–1428, Online. Association for Computational Linguistics. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). *arXiv preprint* arXiv:1606.08415. Yuzhong Hong, Xianguo Yu, Neng He, Nan Liu, and Junhui Liu. 2019. FASPell: A fast, adaptable, simple, powerful Chinese spell checker based on DAEdecoder paradigm. In *Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)*, pages 160–169, Hong Kong, China. Association for Computational Linguistics. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77. Lung-Hao LEEa, Wun-Syuan WUb, Jian-Hong LIa, YuChi LINc, and Yuen-Hsien TSENG. 2019. Building a confused character set for chinese spell checking. In *27th International Conference on Computers in* Education, ICCE 2019, pages 703–705. Asia-Pacific Society for Computers in Education. Jing Li, Gaosheng Wu, Dafei Yin, Haozhao Wang, and Yonggang Wang. 2021. Dcspell: A detector-corrector framework for chinese spelling error correction. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1870–1874. Chao-Lin Liu, Min-Hua Lai, Yi-Hsuan Chuang, and Chia-Ying Lee. 2010. Visually and phonologically similar characters in incorrect simplified Chinese words. In *Coling 2010: Posters*, pages 739–747, Beijing, China. Coling 2010 Organizing Committee. Shulin Liu, Tao Yang, Tianchi Yue, Feng Zhang, and Di Wang. 2021. PLOME: Pre-training with misspelled knowledge for Chinese spelling correction. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2991–3000, Online. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In *International Conference on Learning Representations*. David G Lowe. 2004. Distinctive image features from scale-invariant keypoints. International journal of computer vision, 60(2):91–110. Thi Tuyet Hai Nguyen, Adam Jatowt, Mickael Coustaty, and Antoine Doucet. 2021. Survey of post-ocr processing approaches. *ACM Comput. Surv.*, 54(6). Seongmin Park, Dongchan Shin, Sangyoun Paik, Subong Choi, Alena Kazakova, and Jihwa Lee. 2021. Improving distinction between asr errors and speech disfluencies with feature space interpolation. *arXiv* preprint arXiv:2108.01812. Joseph J. Pollock and Antonio Zamora. 1984. Automatic spelling correction in scientific and scholarly text. *Commun. ACM*, 27(4):358368. Yuen-Hsien Tseng, Lung-Hao Lee, Li-Ping Chang, and Hsin-Hsi Chen. 2015. Introduction to SIGHAN 2015 bake-off for Chinese spelling check. In Proceedings of the Eighth SIGHAN Workshop on Chinese Language Processing, pages 32–37, Beijing, China. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Baoxin Wang, Wanxiang Che, Dayong Wu, Shijin Wang, Guoping Hu, and Ting Liu. 2021. Dynamic connected networks for Chinese spelling check. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2437–2446, Online. Association for Computational Linguistics. Dingmin Wang, Yan Song, Jing Li, Jialong Han, and Haisong Zhang. 2018. A hybrid approach to automatic corpus generation for Chinese spelling check. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2517–2527, Brussels, Belgium. Association for Computational Linguistics. Shih-Hung Wu, Chao-Lin Liu, and Lung-Hao Lee. 2013. Chinese spelling check evaluation at SIGHAN bakeoff 2013. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 35–42, Nagoya, Japan. Asian Federation of Natural Language Processing. Heng-Da Xu, Zhongli Li, Qingyu Zhou, Chao Li, Zizhen Wang, Yunbo Cao, Heyan Huang, and XianLing Mao. 2021. Read, listen, and see: Leveraging multimodal information helps Chinese spell checking. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 716–728, Online. Association for Computational Linguistics. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. *Advances in neural information processing systems*, 32. Jui-Feng Yeh, Sheng-Feng Li, Mei-Rong Wu, WenYi Chen, and Mao-Chuan Su. 2013. Chinese word spelling correction based on n-gram ranked inverted index list. In *Proceedings of the Seventh SIGHAN* Workshop on Chinese Language Processing, pages 43–48, Nagoya, Japan. Asian Federation of Natural Language Processing. Liang-Chih Yu, Lung-Hao Lee, Yuen-Hsien Tseng, and Hsin-Hsi Chen. 2014. Overview of sighan 2014 bakeoff for chinese spelling check. In Proceedings of The Third CIPS-SIGHAN Joint Conference on Chinese Language Processing, pages 126–132. 
Shaohua Zhang, Haoran Huang, Jicong Liu, and Hang Li. 2020. Spelling error correction with soft-masked BERT. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 882–890, Online. Association for Computational Linguistics.

Liying Zheng, Yue Deng, Weishun Song, Liang Xu, and Jing Xiao. 2021. An alignment-agnostic model for Chinese text error correction. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 321–326, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Chenxi Zhu, Ziqiang Ying, Boyu Zhang, and Feng Mao. 2022. MDCSpell: A multi-task detector-corrector framework for Chinese spelling correction. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 1244–1253, Dublin, Ireland. Association for Computational Linguistics.

## A Appendix

## A.1 Ablation

We show detailed results on the effect of different values of α on model performance. After the two stages of pre-training, we set different values of α for model training and record the performance of the model on the three SIGHAN test sets. The detailed results are shown in Table 8. At the same time, we explore the contribution of SCSBERT and SPBERT to the whole model. In the following experiment, SCSBERT and SPBERT are pre-trained respectively, but the second stage of pre-training is not carried out. They are then fine-tuned with α in the loss function set to 0.04. The detailed results are shown in Table 9.

| Dataset | α | Det. Prec. | Det. Rec. | Det. F1 | Cor. Prec. | Cor. Rec. | Cor. F1 |
|---|---|---|---|---|---|---|---|
| SIGHAN2013 | 0.00 | 99.34 | 78.59 | 87.75 | 99.32 | 76.40 | 86.37 |
| SIGHAN2013 | 0.02 | 99.74 | 79.94 | 88.75 | 99.74 | 78.69 | 87.97 |
| SIGHAN2013 | 0.04 | 99.74 | 80.56 | 89.13 | 99.74 | 79.21 | 88.30 |
| SIGHAN2013 | 0.06 | 99.61 | 79.31 | 88.31 | 99.60 | 78.27 | 87.66 |
| SIGHAN2013 | 0.08 | 99.61 | 79.73 | 88.57 | 99.61 | 78.79 | 87.99 |
| SIGHAN2013 | 0.10 | 99.48 | 79.94 | 88.65 | 99.47 | 78.69 | 87.87 |
| SIGHAN2013 | 0.92 | 99.61 | 79.21 | 88.25 | 99.60 | 78.27 | 87.66 |
| SIGHAN2013 | 0.94 | 99.61 | 79.21 | 88.25 | 99.60 | 78.38 | 87.73 |
| SIGHAN2013 | 0.96 | 99.61 | 79.21 | 88.25 | 99.60 | 78.48 | 87.79 |
| SIGHAN2013 | 0.98 | 99.61 | 78.69 | 87.92 | 99.60 | 77.55 | 87.20 |
| SIGHAN2013 | 1.00 | 99.61 | 79.83 | 88.63 | 99.61 | 78.90 | 88.05 |
| SIGHAN2014 | 0.00 | 65.56 | 68.08 | 66.79 | 64.57 | 65.19 | 64.88 |
| SIGHAN2014 | 0.02 | 83.18 | 71.35 | 76.81 | 82.84 | 69.62 | 75.65 |
| SIGHAN2014 | 0.04 | 84.09 | 71.15 | 77.08 | 83.76 | 69.42 | 75.92 |
| SIGHAN2014 | 0.06 | 85.51 | 69.23 | 76.51 | 85.19 | 67.50 | 75.32 |
| SIGHAN2014 | 0.08 | 85.68 | 69.04 | 76.46 | 85.44 | 67.69 | 75.54 |
| SIGHAN2014 | 0.10 | 84.16 | 68.46 | 75.50 | 83.78 | 66.54 | 74.17 |
| SIGHAN2014 | 0.92 | 87.01 | 68.27 | 76.51 | 86.85 | 67.31 | 75.84 |
| SIGHAN2014 | 0.94 | 85.68 | 66.73 | 75.03 | 85.43 | 65.38 | 74.07 |
| SIGHAN2014 | 0.96 | 86.91 | 67.69 | 76.11 | 86.72 | 66.54 | 75.30 |
| SIGHAN2014 | 0.98 | 85.17 | 68.46 | 75.91 | 84.91 | 67.12 | 74.97 |
| SIGHAN2014 | 1.00 | 86.31 | 67.88 | 76.00 | 85.93 | 65.77 | 74.51 |
| SIGHAN2015 | 0.00 | 76.40 | 83.03 | 79.58 | 75.74 | 80.07 | 77.85 |
| SIGHAN2015 | 0.02 | 88.76 | 81.55 | 85.00 | 88.36 | 78.41 | 83.09 |
| SIGHAN2015 | 0.04 | 89.61 | 81.18 | 85.19 | 89.35 | 78.97 | 83.84 |
| SIGHAN2015 | 0.06 | 90.23 | 80.07 | 84.85 | 89.91 | 77.31 | 83.13 |
| SIGHAN2015 | 0.08 | 89.02 | 80.81 | 84.72 | 88.70 | 78.23 | 83.14 |
| SIGHAN2015 | 0.10 | 89.67 | 80.07 | 84.60 | 89.47 | 78.41 | 83.58 |
| SIGHAN2015 | 0.92 | 89.94 | 79.15 | 84.20 | 89.61 | 76.38 | 82.47 |
| SIGHAN2015 | 0.94 | 90.97 | 79.89 | 85.07 | 90.73 | 77.68 | 83.70 |
| SIGHAN2015 | 0.96 | 90.27 | 78.78 | 84.14 | 89.98 | 76.20 | 82.52 |
| SIGHAN2015 | 0.98 | 89.96 | 79.34 | 84.31 | 89.70 | 77.12 | 82.94 |
| SIGHAN2015 | 1.00 | 90.72 | 79.34 | 84.65 | 90.43 | 76.75 | 83.03 |
Table 8: Impact of different α in the loss function.

| Dataset | Model | Det. Prec. | Det. Rec. | Det. F1 | Cor. Prec. | Cor. Rec. | Cor. F1 |
|---|---|---|---|---|---|---|---|
| SIGHAN2013 | PTCSpell(-PT) | 99.48 | 79.73 | 88.52 | 99.47 | 78.59 | 87.80 |
| SIGHAN2013 | SCSBERT+BERT(-PT) | 99.74 | 80.46 | 89.07 | 99.74 | 79.21 | 88.30 |
| SIGHAN2013 | BERT+SPBERT(-PT) | 99.61 | 79.94 | 88.70 | 99.61 | 78.69 | 87.92 |
| SIGHAN2013 | BERT(-PT) | 99.61 | 79.42 | 88.37 | 99.60 | 78.27 | 87.66 |
| SIGHAN2014 | PTCSpell(-PT) | 85.78 | 68.46 | 76.15 | 85.47 | 66.73 | 74.95 |
| SIGHAN2014 | SCSBERT+BERT(-PT) | 85.44 | 68.85 | 76.25 | 85.12 | 67.12 | 75.05 |
| SIGHAN2014 | BERT+SPBERT(-PT) | 82.83 | 68.65 | 75.08 | 82.34 | 66.35 | 73.48 |
| SIGHAN2014 | BERT(-PT) | 82.24 | 67.69 | 74.26 | 81.86 | 65.96 | 73.06 |
| SIGHAN2015 | PTCSpell(-PT) | 89.43 | 81.18 | 85.11 | 89.19 | 79.15 | 83.87 |
| SIGHAN2015 | SCSBERT+BERT(-PT) | 88.57 | 80.07 | 84.11 | 88.26 | 77.68 | 82.63 |
| SIGHAN2015 | BERT+SPBERT(-PT) | 88.03 | 80.07 | 83.86 | 87.76 | 78.04 | 82.62 |
| SIGHAN2015 | BERT(-PT) | 89.09 | 79.89 | 84.24 | 88.79 | 77.49 | 82.76 |

Table 9: Results of the ablation experiment on PTCSpell. (-PT) denotes that the second stage of pre-training is removed.

## ACL 2023 Responsible NLP Checklist

A For Every Submission:

✓ A1. Did you describe the limitations of your work?
Limitations

✓ A2. Did you discuss any potential risks of your work?
4.5 Ablation Study; Limitations

✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract

✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.

## B ✓ **Did You Use Or Create Scientific Artifacts?**
Left Blank.

✓ B1. Did you cite the creators of artifacts you used?
Footnote 2 and 3 on the second page.

✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Footnote 2 and 3 on the second page.

✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
4 Experiments

✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
We use datasets published online.

✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Footnote 2 and 3 on the second page.

✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
4.1 Datasets and Metrics

## C ✓ **Did You Run Computational Experiments?**
Left Blank.

✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?
4.2 Implementation Details The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 4.2 Implementation Details ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4.5 Ablation Study ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4.2 Implementation Details D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
zhang-zhou-2023-disentangling
Disentangling Text Representation With Counter-Template For Unsupervised Opinion Summarization
https://aclanthology.org/2023.findings-acl.395
Approaches for unsupervised opinion summarization are generally based on the reconstruction model and generate a summary by decoding the aggregated representation of inputs. Recent work has shown that aggregating via simple average leads to vector degeneration, generating the generic summary. To tackle the challenge, some approaches select the inputs before aggregating. However, we argue that the selection is too coarse as not all information in each input is equally essential for the summary. For example, the content information such as {``}great coffee maker, easy to set up{''} is more valuable than the pattern such as {``}this is a great product{''}. Therefore, we propose a novel framework for unsupervised opinion summarization based on text representation disentanglement with counter-template. In specific, a disentangling module is added to the encoder-decoder architecture which decouples the input text representation into two parts: content and pattern. To capture the pattern information, a counter-template is utilized as supervision, which is automatically generated based on contrastive learning. Experimental results on two benchmark datasets show that the proposed approach outperforms the state-of-the-art baselines on both quality and stability.
# Disentangling Text Representation With Counter-Template For Unsupervised Opinion Summarization ## Yanyue Zhang And Deyu Zhou∗ School of Computer Science and Engineering, Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, China {yanyuez98,d.zhou}@seu.edu.cn ## Abstract Approaches for unsupervised opinion summarization are generally based on the reconstruction model and generate a summary by decoding the aggregated representation of inputs. Recent work has shown that aggregating via simple average leads to vector degeneration, generating the generic summary. To tackle the challenge, some approaches select the inputs before aggregating. However, we argue that the selection is too coarse as not all information in each input is equally essential for the summary. For example, the content information such as "great coffee maker, easy to set up" is more valuable than the pattern such as "this is a great product". Therefore, we propose a novel framework for unsupervised opinion summarization based on text representation disentanglement with counter-template. In specific, a disentangling module is added to the encoder-decoder architecture which decouples the input text representation into two parts: content and pattern. To capture the pattern information, a countertemplate is utilized as supervision, which is automatically generated based on contrastive learning. Experimental results on two benchmark datasets show that the proposed approach outperforms the state-of-the-art baselines on both quality and stability. ## 1 Introduction With the unprecedented development of online interactive platforms, opinion summarization has received significant interest in natural language processing communities. Unlike other summarization tasks for news, Wikipedia, and medical treatment records, opinion summarization pays more attention to user opinions in product reviews, blog journals, and social media texts. Opinion summarization has great potential in many application scenarios. Due to the lack of large-scale annotated data, opinion summarization is generally formalized as ∗Corresponding author. an unsupervised learning framework. A series of reconstruction models have been developed (Bražinskas et al., 2020a; Amplayo and Lapata, 2020; Elsahar et al., 2021; Amplayo et al., 2021a). The training goal is to reconstruct input text through an encoder-decoder architecture such as autoencoders (AE), variational autoencoders (VAE). As shown in the upper part of Figure 1, when generating summaries, the encoder is employed to aggregate the text representations from a set of texts via averaging (the 'Mean' module in Figure 1) to obtain a summary representation. The representation is used to generate the summary by the decoder. Recently, it has been demonstrated in Iso et al. (2021) that simply averaging tends to generate generic summaries, as shown in the upper part of Figure 1. To tackle the challenge, a 'Select' module is incorporated in Coop (Iso et al., 2021) to strictly select input text. However, we argue that it has such a limitation: coarse-grained selection. There might be some information hidden in the abandoned reviews which is important for summarization. Moreover, the kept reviews might contain some redundant information, leading to generic or incorrect sentences, as shown in the middle of Figure 1. In this paper, instead of treating text vectors as a whole, we assume that the text representation consists of *content* and *pattern* information. 
The content information is summary-relevant and related to specific product characteristics, such as aspects, emotions, and opinions, illustrated as the orange part of the texts in Figure 1. The pattern information, marked in blue, describes common patterns that occur frequently in the corpus and can be used to describe most products. Both are valuable for reconstruction training. However, when text representations are aggregated to obtain a summary representation, the repeated pattern information is more conspicuous, which squeezes out valuable content information and results in the generation of overly generic summaries.

![1_image_0.png](1_image_0.png)

To separate the content information from the pattern information, the text representation needs to be disentangled. Recently, Qin et al. (2020) project features into an orthogonal space to disentangle representations. However, it is not straightforward to disentangle in the same way, as no supervision labels are available for the pattern information. To create supervision signals for the pattern information, we elaborately design a text template to capture the common semantic patterns shared in the corpus. It is called a *counter-template*, since the extracted pattern representation works as negative guidance against the content information. Therefore, we propose a novel framework based on Text Representation disentAnglement with Counter-templatE for unsupervised opinion summarization (TRACE). A disentangling module is added to the encoder-decoder architecture which orthogonally decouples the text representation into two parts: content and pattern. The content representation is used to reconstruct the input text and the pattern representation is supervised by the counter-template. Moreover, to construct the counter-template automatically, a novel approach based on contrastive learning and an average strategy is proposed. A counter-template generator is trained to not only preserve the general generation constraints in VAE but also make all text representations in the same batch similar to each other using contrastive learning. The counter-template is obtained by decoding the average of several text representations.

The main contributions of this paper are summarized as follows:

- A novel framework based on Text Representation disentAnglement with Counter-templatE for unsupervised opinion summarization, TRACE, is proposed. A disentangling module is added in the encoder-decoder architecture which decouples the text representation into two parts: content and pattern.

- A novel approach based on contrastive learning is proposed to construct the counter-template automatically, which provides supervision for disentanglement.

- Experimental results on two benchmark datasets show that the proposed framework outperforms the state-of-the-art baselines on generation quality and stability.

## 2 Related Work

Opinion summarization generally focuses on short and numerous product reviews. According to whether summaries are extracted from the original texts, existing methods can be divided into extractive approaches and abstractive approaches. Extractive approaches (Hu and Liu, 2004, 2006; Angelidis and Lapata, 2018; Zhao and Chaturvedi, 2020) capture the opinion of product reviews through emotional polarity or aspect information. Some methods attempt to obtain more flexible sentences through graphs (Ganesan et al., 2010) or decision theory (Di Fabbrizio et al., 2014; Carenini et al., 2013). Recently, Angelidis et al. (2021) leverage vector quantization (Van Den Oord et al., 2017) to capture a semantic sense.
On this basis, SemAE (Chowdhury et al., 2022) uses dictionary learning to capture fine-grained and diverse semantics.

![2_image_0.png](2_image_0.png)

![2_image_1.png](2_image_1.png)

As text generation technology continues to evolve, a series of end-to-end abstractive methods have been developed for opinion summarization (Bražinskas et al., 2020a; Amplayo and Lapata, 2020; Iso et al., 2021). They use the encoder-decoder architecture and employ the average representation of the inputs for summarization. Another type of method (Elsahar et al., 2021; Amplayo et al., 2021a; Suhara et al., 2020) focuses on modeling aspects and emotional information. Suhara et al. (2020) use a two-stage approach that first identifies opinion phrases and then uses the phrases to generate smooth sentences. Other works use aspect seed words (Elsahar et al., 2021; Amplayo et al., 2021a) or implicit aspect codes (Amplayo et al., 2021b), making summary generation more controllable. Recently, Iso et al. (2021) show that simply averaging text representations tends to generate overly generic summaries; they then use word overlapping to search for a better subset of input texts when inferring a summary. After that, Wassos (Song et al., 2022) optimizes the process of summary inference by approximating the Wasserstein barycenter to construct a better summary distribution. Although both Wassos and the proposed TRACE conduct text representation decoupling, TRACE differs from Wassos in two aspects: (1) Different purposes: TRACE aims to tackle the coarse-grained selection problem in opinion summarization, while Wassos employs text disentanglement for better generation. (2) Different ways: TRACE designs a novel way to separate the content and pattern information via the specially designed counter-template, while Wassos employs a general way to disentangle syntactic and semantic spaces via the linearized parse tree sequence and the bag-of-words distribution (Bao et al., 2019).

## 3 Methodology

In this section, we describe TRACE, a novel disentangled framework with counter-template guidance for unsupervised opinion summarization. We first present the overview of the summarization model. Then, we introduce the counter-template generation approach based on a contrastive learning model and an average inference strategy. Finally, we describe the detailed components of TRACE and explain how to train the model.

## 3.1 Overview Of The Summarization Model

Figure 2 shows the overall architecture of TRACE. It contains four components: an encoder pθ, a feature extractor fϕ, an orthogonal disentangled component f⊥, and a decoder qφ. fϕ and f⊥ form the disentanglement module M.

Given a set of texts T = {t1, · · · , tN }, where N is the number of texts, a counter-template *temp* containing the pattern information is first generated automatically. In the training stage, each input text ti is passed to the encoder pθ(zi | ti) to get a text representation zi. Then zi is disentangled into a content representation ci and a common pattern representation pi by the disentanglement module M. pi is used to predict the counter-template text *temp* by the decoder qφ(*temp* | pi). ci is used to reconstruct the input text ti through the same decoder qφ(ti | ci).
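To make the disentangling module concrete, the sketch below implements the flow just described: a linear feature extractor predicts a pattern direction from z, z is projected onto that direction to obtain the pattern part, and the orthogonal remainder is taken as the content part (the precise projection operator is defined in Section 3.3). This is a minimal illustrative PyTorch sketch under the standard vector-projection reading; the class and variable names are ours, not the authors'.

```python
import torch
import torch.nn as nn


def project(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Row-wise vector projection of x onto y: (x·y / y·y) * y."""
    coeff = (x * y).sum(dim=-1, keepdim=True) / (y * y).sum(dim=-1, keepdim=True).clamp(min=1e-8)
    return coeff * y


class DisentangleModule(nn.Module):
    """f_phi (linear pattern extractor) followed by f_perp (orthogonal split)."""

    def __init__(self, dim: int):
        super().__init__()
        self.pattern_extractor = nn.Linear(dim, dim)   # predicts the pattern direction from z

    def forward(self, z: torch.Tensor):
        p_bar = self.pattern_extractor(z)      # predicted pattern direction
        p = project(z, p_bar)                  # pattern component of z
        c = project(z, z - p)                  # content component; equals z - p since p is orthogonal to (z - p)
        return c, p


# Toy usage: a batch of 4 text representations of dimension 512
z = torch.randn(4, 512)
content, pattern = DisentangleModule(512)(z)
print(content.shape, pattern.shape)            # torch.Size([4, 512]) twice
# In training, `content` would feed the reconstruction loss and `pattern`
# would be decoded against the fixed counter-template.
```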
After training, each set of input texts T is passed to the encoder pθ(zi| ti) and the disentanglement structure M to obtain a content representation set C = {c1, · · · , cN }. Then the summary representation c is computed following Iso et al. (2021). The summary s is inferred from c by the decoder qφ(s | c). ## 3.2 Counter-Template Construction To automatically generate the counter-template, a generator based on contrastive learning is constructed. Then the counter-template is inferred via average strategy. The counter-template generator includes an encoder Es and a decoder Ds. To avoid the performance gap between counter-template generating and summary generating, we use bidirectional long short-term memory (Bi-LSTM) and a mean pooling layer as the encoder, LSTM as the decoder in the counter-template generator, same with the summary generator. Generator Training To train the generator, the reconstruction loss and Kullback–Leibler (KL) regularization of VAE are employed to ensure the fluency of generation. Moreover, the contrastive loss is employed to make each latent vector z s i more similar. Contrastive learning (Chen et al., 2020) aims to shorten the distance between positive samples and extend the distance between negative samples. Here we design two ways of selecting positive samples: text-level and summary-level contrastive learning. The distance of the negative samples is set to a constant because of no negative samples. For text-level contrastive learning, all input texts within the batch are treated as positive samples of each other. Given a set of texts T = {t1, · · · , tN }, where N is the number of texts, the latent vector z s i can be computed by encoding text ti via Es. Following Chen et al. (2020), the normalized temperature-scaled cross-entropy loss is adopted as the contrastive objective: $${\mathcal{L}}_{t c x t}=$$ $$-\left[\sum_{i=1}^{n}\log\frac{\sum_{j=1}^{n}\mathbb{1}_{\left[j\neq i\right]}\exp\left(sim\left(z_{i}^{s},z_{j}^{s}\right)/\tau\right)}{\exp\left(C/\tau\right)}\right]/n,\tag{1}$$ ![3_image_0.png](3_image_0.png) where sim(·) indicates the cosine similarity function, n is the batch size in training, τ controls the temperature, 1 is the indicator and C is set to the maximum value of cosine similarity which is 1.0. For summary-level contrastive learning, several sets of texts belonging to different objects (here, products or business) are input into the model. For the i-th object, there is a set of text Ti = {ti1, · · · , tiNi}, where Niis the number of texts in Ti. After encoded by Es, the summary representation z s i of i-th object is averaged from the latent vector set Z s i = {z s i1 , · · · , zs iNi}. For each summary representation, its corresponding latent vector z s ij and other summary representations form the positive samples. 
The loss function is similar to that of text-level contrastive learning: $$\begin{array}{l}{{L_{s u m}=}}\\ {{-\left[\sum_{i=1}^{n}\log\frac{\sum_{j=1}^{n}\mathbb{I}_{[j\neq i]}\exp\left(s i m\left(\Xi_{i}^{s},\Xi_{j}^{s}\right)/\tau\right)}{\exp\left(C/\tau\right)}\right]/n}}\\ {{-\left[\sum_{i=1}^{n}\log\frac{\sum_{k=1}^{N_{i}}\exp\left(s i m\left(\Xi_{i}^{s},z_{i k}^{s}\right)/\tau\right)}{\exp\left(C/\tau\right)}\right]/n,}}\end{array}\tag{2}$$ The overall loss function is defined as: $$\begin{array}{l}{{{\mathcal{L}}(\gamma,\psi)={\mathcal{L}}_{r e c}+{\mathcal{L}}_{K L}+{\mathcal{L}}_{c o n}}}\\ {{{\mathcal{L}}_{r e c}(\gamma,\psi)=\sum_{i=1}^{N}\mathbb{E}_{p_{\gamma}\left(z_{i}^{s}|t_{i}\right)}\left[\log q_{\psi}(t_{i}\mid z_{i}^{s})\right]\quad(3)}}\\ {{{\mathcal{L}}_{K L}(\gamma)=\mathbb{D}_{K L}(p_{\gamma}(z_{i}^{s}\mid t_{i})||p(z_{i}^{s}))}}\end{array}$$ where γ and ψ are the parameters of the encoder Es and decoder Ds. Lcon is the contrastive loss. For the text-level contrastive learning, Lcon = L*text*. For the summary-level contrastive learning, Lcon = Lsum. We choose the standard Gaussian distribution as the prior distribution p(z s i ). Counter-Template Inference When generating the counter-template, the average strategy is employed. As shown in Figure 3, there are three steps: encoding, aggregating, and decoding. Firstly, several sets of texts belonging to different objects are used as input to the encoder Es. Then, an average of the text representations is obtained as countertemplate representation z*temp*. Thus, z*temp* contains a wide variety of information from different products or businesses. And the common pattern information dominates in z*temp* compared with the content information. Finally, z*temp* is used to generate the counter-template via the trained decoder Ds. ## 3.3 Model Components The Encoder pθ Iso et al. (2021) show that large pre-training language models such as BERT (Kenton and Toutanova, 2019) and GPT-2 (Radford et al.) do not necessarily lead to performance improvement in unsupervised opinion summarization. Therefore, we employ the BIMEANVAE model (Iso et al., 2021) which uses BiLSTM as encoder pθ(zi| ti) and applies a mean pooling layer to the BiLSTM layer to obtain the primitive text representation zi. The Feature Extractor fϕ The common pattern representation pi is extracted from zi by the feature extractor fϕ(pi| zi). Because the pattern information occurs in zi naturally, a linear projection network is used as the feature extractor fϕ to compute the common pattern representation pi : $$\overline{{{p}}}_{i}=W_{P}^{T}z_{i}+b_{P},$$ where WP and bP are the parameters. Due to the supervision of the invariant counter-template, pi contains sample-independent pattern information. The Orthogonal Disentangled Component f⊥ Similar to Qin et al. (2020), the content representation ciis disentangled from zi by projecting zi onto the orthogonal direction of the common pattern representation pi in the orthogonal disentangled component f⊥. We first project zi onto pi to get pi: $$p_{i}=P r o j(z_{i},{\overline{{p}}}_{i})$$ pi = *P roj*(zi, pi) (4) where $Proj$ is a projection function. $$P r o j(x,y)={\frac{x\cdot y}{\mid y\mid\mid y\mid}}$$ are vectors. where x, y are vectors. Then the content representation ciis obtained in the orthogonal direction of pi: $$c_{i}=P r o j(z_{i},(z_{i}-p_{i}))$$ The Decoder qφ Following Iso et al. (2021), LSTM is employed as the decoder qφ with two functions. 
Firstly, the distribution qφ(ti| ci) is computed by the reconstruction of the input ti from the content representation ci. And qφ(*temp* | pi) is obtained by the prediction of the counter-template *temp* via pi . ## 3.4 Model Training The reconstruction loss, the template loss, and the KL loss are employed for model training. The content representation ciis used as the input of the decoder to reconstruct the input text ti. And the reconstruction loss is defined as: $$\begin{array}{l}{{{\cal L}_{r e c o n}(\theta,\phi,\varphi)=}}\\ {{\sum_{i=1}^{N}\frac{\mathbb{E}}{p_{\theta}(z_{i}|t_{i})}[\log q_{\varphi}(t_{i}\mid c_{i})f_{\perp}(c_{i}\mid p,z_{i})f_{\phi}(p\mid z_{i})]}}\end{array}\tag{7}$$ The reconstruction loss improves the quality of the decoded text and forces the text representation zi and content representation cito store content information. The common pattern representation pi is used to predict the counter-template *temp*. The loss is imposed by minimizing: $$\begin{array}{l}{{{\mathcal L}_{t e m p}(\theta,\phi,\varphi)=}}\\ {{\sum_{i=1}^{N}\mathbb{E}_{p\theta(z_{i}|t_{i})}[\log q_{\varphi}(t e m p\mid p)f_{\phi}(p\mid z_{i})],}}\end{array}\tag{8}$$ The template loss ensures the common pattern. representation pi contains common pattern information of the dataset. Finally, following the typical VAE and variational inference, we add the regularizers LKL which controls the amount of information in zi by penalizing KL divergence of the estimated posteriors pθ(zi| ti) from the corresponding priors. We choose the standard Gaussian distribution as the prior distribution p(zi). The final loss function is defined as: $$(4)$$ $$\begin{array}{l}{\cal L}(\theta,\phi,\varphi)={\cal L}_{temp}+{\cal L}_{recon}+{\cal L}_{KL}\\ {\cal L}_{KL}={\mathbb{D}}_{KL}(p_{\theta}(z_{i}\mid t_{i})||p(z_{i}))\end{array}\tag{9}$$ where $\theta,\phi,$and $\varphi$ are the parameters of the model. $$(S)$$ ## 4 Experiments $$(6)$$ To investigate the effectiveness of the proposed method, the best and the average results were performed on two datasets. In addition to the best performance, the mean and standard deviation were | Dataset | Category | Number | PCT | |-------------|------------|----------|--------| | Electronics | 922957 | 59.80% | | | Health | 181017 | 11.73% | | | Home | 296053 | 19.18% | | | Clothing | 143474 | 9.30% | | | Amazon Yelp | Catering | 3493385 | 74.98% | | Other | 1165583 | 25.02% | | reported to evaluate the stability of the model. Besides, two analysis experiments were conducted for the counter-template. ## 4.1 Experimental Datasets The experiments were conducted on two publicly available datasets, Amazon product reviews (He and McAuley, 2016) and Yelp business reviews (Chu and Liu, 2019). More details can be found in Appendix. After preprocessing, we encountered a difference between Amazon and Yelp. Amazon has category labels for each product. However, there is only some meta information about categories on Yelp. Through data analysis in Table 1, we discovered the proportion of reviews from different businesses on Yelp is highly imbalanced with about 75% of reviews related to food and beverage. An overwhelming number of catering reviews affects our counter-template generation, leading to templates with obvious catering information. Therefore, we only keep reviews of food-related categories in training, validation, and test sets of Yelp to form Yelp-Res. 
## 4.2 Baselines Following prior work (Iso et al., 2021; Song et al., 2022), we compare TRACE with **TextVAE** (Ganesan et al., 2010), **Opinosis** (Ganesan et al., 2010), MeanSum (Chu and Liu, 2019), **Copycat** (Bražinskas et al., 2020b), **Coop** (Iso et al., 2021) and Wassos (Song et al., 2022). The detailed introduction is in Appendix. ## 4.3 Implementation Setting We used Adam optimizer (Kingma and Ba, 2015) with a linear scheduler, whose initial learning rate is set to 10−3. To mitigate the KL vanishing issue, we also applied KL annealing (Kingma et al., 2016; Li et al., 2019; Iso et al., 2021). For beam search in the generation, the beam size is set to 4. To generate summary-like texts, we employed the firstperson pronoun blocking (Iso et al., 2021), which prohibits generating first-person pronouns (e.g. I, my, me) during summary generation. The ROUGE1/2/L scores based on F1 (Lin and Hovy, 2002) are reported for automatic evaluation. For each model, we ran 4 times and reported the best, the mean, and standard deviation in Table 2. All experiments were conducted on NVIDIA GeForce RTX 3090. The training time of our model is about 6.5 hours, with 8 epochs on Amazon and 6 on Yelp-res. Besides, we also reported the results of the human design counter-template (TRACE-h). In particular, we train a VAE summarization model and use simple averaging to generate several summaries. Then we clip and combine the generic sentences without content information in the generated summaries. In the process of combination, we also try to place the selected sentences in the same position as they originally had in the summary and control the length of the counter-template to about half of the average length of the golden summaries. ## 4.4 Results As shown in the upper of Table 2, The Max part reports the best performance of models. Our framework (TRACE) significantly obtains the new state-of-the-art performance on both benchmark datasets with counter-templates whether the automated (TRACE-a) or the human design (TRACEh). On Yelp-Res, the model with human-designed counter-templates (TRACE-a) is better than the auto-generated ones. But on Amazon, the situation is reversed. Probably because we tested 12 different counter-templates on Amazon than 4 on Yelp, due to the analysis experiments. The more autogenerated counter-templates are tested, the more likely to find a better one, which can even exceed the human design in performance. As shown in the lower of Table 2, the *Mean* part reports the mean and standard deviation of the model performance, which represent the stability of the model. In this perspective, TRACE significantly outperforms all competing models on both Amazon and Yelp-Res. In general, the performance of TRACE-h is better than TRACE-a both in the Max and the *Mean*. We consider that this is because the human design process eliminates all the unidentified content information in the counter-templates and ensures the purity of the pattern information. 
| Amazon | Yelp-Res | | | | | | |-----------|--------------|-------------|--------------|--------------|-------------|--------------| | R1 | R2 | RL | R1 | R2 | RL | | | Max | | | | | | | | TextVAE‡ | 22.87 | 2.75 | 14.46 | 25.42 | 3.11 | 15.04 | | Opinosis‡ | 28.42 | 4.57 | 15.50 | 24.88 | 2.78 | 14.09 | | MeanSum | 28.32 | 3.28 | 16.19 | 28.27 | 3.54 | 16.00 | | Copycat | 31.17 | 6.49 | 19.71 | 28.31 | 5.38 | 17.72 | | Coop | 36.93 | 7.05 | 21.34 | 34.21 | 6.49 | 19.19 | | Wassos(T) | 30.80 | 6.38 | 19.45 | 27.43 | 6.03 | 18.38 | | Wassos(0) | 33.24 | 7.25 | 21.31 | 25.50 | 4.84 | 17.78 | | TRACE-a | 37.90 | 7.53 | 22.48 | 34.39 | 7.07 | 19.90 | | TRACE-h | 37.34 | 7.51 | 21.49 | 34.80 | 7.81 | 20.10 | | Mean | | | | | | | | MeanSum | 27.63 ± 0.72 | 3.05 ± 0.22 | 15.87 ± 0.51 | 28.00 ± 0.34 | 3.53 ± 0.19 | 15.80 ± 0.14 | | Copycat | 29.46 ± 1.50 | 5.18 ± 0.44 | 19.06 ± 0.67 | 27.21 ± 0.92 | 4.98 ± 0.52 | 17.74 ± 0.28 | | Coop | 35.00 ± 1.11 | 5.85 ± 0.77 | 19.56 ± 0.99 | 33.24 ± 0.58 | 6.57 ± 0.23 | 19.18 ± 0.14 | | Wassos(T) | 27.90 ± 2.42 | 5.65 ± 0.74 | 18.48 ± 1.10 | 27.95 ± 1.40 | 5.82 ± 0.26 | 18.28 ± 0.06 | | Wassos(0) | 30.78 ± 3.84 | 6.75 ± 1.07 | 19.09 ± 1.45 | 24.65 ± 1.46 | 4.35 ± 0.25 | 16.30 ± 0.73 | | TRACE-a | 36.33 ± 0.29 | 7.01 ± 0.32 | 21.30 ± 0.42 | 34.11 ± 0.26 | 6.74 ± 0.16 | 19.41 ± 0.29 | | TRACE-h | 36.52 ± 0.41 | 7.16 ± 0.23 | 21.32 ± 0.01 | 34.08 ± 0.45 | 6.86 ± 0.47 | 19.56 ± 0.34 | | R1 | R2 | RL | | |-----------|--------------|-------------|--------------| | w/o con | 36.51 ± 0.62 | 6.95 ± 0.31 | 21.28 ± 0.48 | | w/ sum-b | 36.33 ± 0.29 | 7.01 ± 0.32 | 21.30 ± 0.42 | | w/ sum-l | 35.50 ± 0.60 | 6.57 ± 0.31 | 20.98 ± 0.44 | | w/ text-b | 36.29 ± 0.34 | 6.82 ± 0.21 | 21.14 ± 0.33 | | w/ text-l | 35.84 ± 0.74 | 6.52 ± 0.50 | 21.09 ± 0.51 | | human | 36.52 ± 0.41 | 7.16 ± 0.23 | 21.32 ± 0.10 | ## 4.5 Counter-Template Analysis For the proposed counter-template generation method, we separately conducted two experiments on different contrastive learning methods in Table 3 and different generation inputs in Table 4. Some examples of counter-templates are shown in Table 5. Because using golden validation and test sets for counter-template generation might leak some information, they are only used for analysis. Besides, we randomly sampled some products from the training set and grouped their reviews as set 1. Repeating this procedure yielded set 2. Then, they were mixed to obtain the third set, labeled as 1+2. For each model, we test three counter-templates generated from set 1, set 2, and set 1+2. Of the three results, the result with the highest mean value will be broadcasted with the standard deviation. The best results are in Table 7 inside the appendix. As shown in Table 3, the performance of the model is slightly affected by the ablation of contrastive learning (w/o con), which means our average generation strategy plays a major role. In general, summary-level contrastive learning is more effective than text-level. The first possibility is that the training batch size of the text-level countertemplate generator has to be a quarter of the summary-level model because of the time complexity. Besides, We suspect that summary-level contrastive learning will not only make the text representations more similar to each other but also make them more similar to generic summary representations. In addition, We test the performance of summarization on the golden validation and test sets during the training of the counter template generator. 
Choosing the checkpoint that performs best on the | R1 | R2 | RL | Number | | |---------|--------------|-------------|--------------|-----| | Coop | 35.00 ± 1.11 | 5.85 ± 0.77 | 19.56 ± 0.99 | - | | 1 | 36.03 ± 0.26 | 6.35 ± 0.17 | 20.93 ± 0.27 | 294 | | 2 | 35.42 ± 0.25 | 6.97 ± 0.28 | 20.52 ± 0.11 | 221 | | 1+2 | 36.33 ± 0.29 | 7.01 ± 0.32 | 21.30 ± 0.42 | 515 | | 1-large | 36.12 ± 0.42 | 6.72 ± 0.42 | 20.97 ± 0.64 | 490 | | 2-large | 36.00 ± 0.19 | 6.49 ± 0.16 | 20.81 ± 0.30 | 475 | | 1-l+2-l | 35.68 ± 0.72 | 6.61 ± 0.20 | 20.66 ± 0.44 | 965 | | Val | 35.65 ± 0.49 | 6.75 ± 0.11 | 20.78 ± 0.17 | 224 | | Test | 36.28 ± 0.58 | 6.70 ± 0.34 | 20.99 ± 0.36 | 256 | Table 5: Generated counter-template examples on Amazon via the input set in Table 4. | Test | This is a great product. It's very easy to use and clean. The only thing that would be better is the size of this one, but it's not a big deal for the money. | |---------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1 | This is a great product. It's very easy to use and clean. The only problem is that it doesn't take up much room in the kitchen, and has a nice feel to it. | | 1+2 | This is a great product. It's very easy to use and it works well. The only drawback is that the blue light is a little weak, but it does not have to be in the way. | | 1-large | This is a great product. It's very easy to use and clean. The only drawback is that it has to be a little bit more expensive than the brand but it works. | | 1-l+2-l | This is a great product. It's very easy to use and it works well. The only drawback is that the blue light is a little weak, but you can't get it to work with. | summarization test (b) achieves better results than the last (l). It may be because the counter-template generation model with better Rouge performance in the summary generation is more likely to learn pattern information which the corresponding summarization model will learn. Since using summary-level contrastive learning and selecting the model with the best rouge (sumb) perform best, we build on this to conduct the analysis experiment using different inputs for the counter-template generation. The set labeled as 1-large is obtained by randomly selecting products from the training set based on 1, and 2-large the same. As shown in Table 4, although all the results outperform the SOTA baseline, different countertemplates lead to obviously different results. As the examples shown in Table 5, the counter-templates are very similar to each other. There is no obvious relationship between the number of input reviews and the generated counter-templates or the performance of the model. In general, the advantage of automatic generation is continuously generating different counter-templates by extracting different input sets to achieve higher performance. But at present, only experiments can determine the impact of the template on the model, which consumes time and computing resources. ## 5 Conclusion To avoid generating generic summaries, we propose TRACE, a novel framework for unsupervised opinion summarization based on text representation disentanglement with counter-template. The additional disentanglement module inside the encoderdecoder architecture decouples the pattern and content information in the text representation under the guidance of the special counter-template. 
Experimental results on Amazon and Yelp-Res show the proposed approach outperforms the state-of-the-art baselines on both quality and stability. ## 6 Limitations As shown in Table 1, an overwhelming number of catering reviews on Yelp makes the countertemplates with obvious catering information. For example, "This is a great place for a quick bite to eat. The food is delicious and the staff is very friendly. They have a good selection of beer and wine. The place is always busy, but it's worth the wait." In this case, the pattern information in the text is not consistent with other businesses irrelevant to catering. As shown in Table 6, although our method has a slight improvement over the previous methods in the mean and standard deviation, it is only comparable to the SOTA at the best performance. Since the counter-template is exactly the same text for the whole data set, the performance of the model is affected perhaps when the pattern information from different texts in the data set has large differences. When we extract the restaurantrelated parts of the dataset as Yelp-Res that have more similar pattern information, our model performs better. ## 7 Acknowledgments We would like to thank anonymous reviewers for their valuable comments and helpful suggestions. This research work is supported by the Big Data Computing Center of Southeast University. We would also like to thank Professor Yulan He for her valuable suggestions for our paper. This work was funded by the National Natural Science Foundation of China (62176053). ## References Reinald Kim Amplayo, Stefanos Angelidis, and Mirella Lapata. 2021a. Aspect-controllable opinion summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Reinald Kim Amplayo, Stefanos Angelidis, and Mirella Lapata. 2021b. Unsupervised opinion summarization with content planning. In *Proceedings of the AAAI* Conference on Artificial Intelligence. Reinald Kim Amplayo and Mirella Lapata. 2020. Unsupervised opinion summarization with noising and denoising. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*. Stefanos Angelidis, Reinald Kim Amplayo, Yoshihiko Suhara, Xiaolan Wang, and Mirella Lapata. 2021. Extractive opinion summarization in quantized transformer spaces. Transactions of the Association for Computational Linguistics, 9:277–293. Stefanos Angelidis and Mirella Lapata. 2018. Summarizing opinions: Aspect extraction meets sentiment prediction and they are both weakly supervised. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 3675–3686. Yu Bao, Hao Zhou, Shujian Huang, Lei Li, Lili Mou, Olga Vechtomova, Xinyu Dai, and Jiajun Chen. 2019. Generating sentences from disentangled syntactic and semantic spaces. In *Proceedings of the 57th Annual Meeting of the Association for Computational* Linguistics, pages 6008–6019. Arthur Bražinskas, Mirella Lapata, and Ivan Titov. 2020a. Unsupervised opinion summarization as copycat-review generation. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics. Arthur Bražinskas, Mirella Lapata, and Ivan Titov. 2020b. Unsupervised opinion summarization as copycat-review generation. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, pages 5151–5169, Online. Association for Computational Linguistics. Giuseppe Carenini, Jackie Chi Kit Cheung, and Adam Pauls. 2013. 
Multi-document summarization of evaluative text. *Computational Intelligence*, 29(4):545– 576. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In *International Conference on Machine Learning*. PMLR. Somnath Basu Roy Chowdhury, Chao Zhao, and Snigdha Chaturvedi. 2022. Unsupervised extractive opinion summarization using sparse coding. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 1209–1225. Eric Chu and Peter Liu. 2019. Meansum: a neural model for unsupervised multi-document abstractive summarization. In *International Conference on Machine Learning*. Giuseppe Di Fabbrizio, Amanda Stent, and Robert Gaizauskas. 2014. A hybrid approach to multidocument summarization of opinions in reviews. In Proceedings of the 8th International Natural Language Generation Conference (INLG), pages 54–63, Philadelphia, Pennsylvania, U.S.A. Association for Computational Linguistics. Hady Elsahar, Maximin Coavoux, Jos Rozen, and Matthias Gallé. 2021. Self-supervised and controlled multi-document opinion summarization. In *Proceedings of the 16th Conference of the European Chapter* of the Association for Computational Linguistics. Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: A graph based approach to abstractive summarization of highly redundant opinions. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010). Aaron Van Den Oord, Oriol Vinyals, et al. 2017. Neural discrete representation learning. *Advances in neural* information processing systems, 30. Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In proceedings of the 25th international conference on world wide web. Chao Zhao and Snigdha Chaturvedi. 2020. Weaklysupervised opinion summarization by leveraging external information. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pages 9644–9651. Minqing Hu and Bing Liu. 2004. Mining opinion features in customer reviews. In *AAAI*, volume 4, pages 755–760. Minqing Hu and Bing Liu. 2006. Opinion extraction and summarization on the web. In Proceedings of the 21st National Conference on Artificial Intelligence - Volume 2, AAAI'06, page 1621–1624. AAAI Press. Hayate Iso, Xiaolan Wang, Yoshihiko Suhara, Stefanos Angelidis, and Wang-Chiew Tan. 2021. Convex aggregation for opinion summarization. In *Findings* of the Association for Computational Linguistics: EMNLP 2021, pages 3885–3903. Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171–4186. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *ICLR (Poster)*. Durk P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. 2016. Improved variational inference with inverse autoregressive flow. Advances in neural information processing systems, 29. Bohan Li, Junxian He, Graham Neubig, Taylor BergKirkpatrick, and Yiming Yang. 2019. A surprisingly effective fix for deep latent variable modeling of text. In *EMNLP/IJCNLP (1)*. Chin-Yew Lin and Eduard Hovy. 2002. Manual and automatic evaluation of summaries. In Proceedings of the ACL-02 Workshop on Automatic Summarization, pages 45–51. Qi Qin, Wenpeng Hu, and Bing Liu. 2020. 
Feature projection for improved text classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8161–8171. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. Jiayu Song, Iman Munire Bilal, Adam Tsakalidis, Rob Procter, and Maria Liakata. 2022. Unsupervised opinion summarisation in the wasserstein space. Yoshihiko Suhara, Xiaolan Wang, Stefanos Angelidis, and Wang-Chiew Tan. 2020. Opiniondigest: A simple framework for opinion summarization. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*. ## A Dataset Preparation B Baselines Following the similar pre-processing way (Bražinskas et al., 2020b; Chu and Liu, 2019; Iso et al., 2021), only products with a minimum of 10 reviews were used within the maximum of 128 tokens each respectively. Besides reviews used for training, these two datasets also contain gold-standard summaries for 200 and 60 sampled objects, respectively. In Amazon, each product is with 3 human-created summaries, released by (Bražinskas et al., 2020b). And only 1 human-created summary for each business in Yelp, released by (Chu and Liu, 2019). For both datasets, the summaries are manually created from 8 input reviews. We used the same dev/test split, 100/100 for Yelp and 28/32 for Amazon, released by their authors for our experiments. The following approaches are chosen as baselines: TextVAE (Ganesan et al., 2010): A vanilla text VAE model that has a unidirectional LSTM layer and uses the last hidden state to calculate the posterior distribution. **Opinosis** (Ganesan et al., 2010): A graph-based summarization framework that generates concise abstractive summaries with highly redundant opinions. **MeanSum** (Chu and Liu, 2019): An unsupervised multi-document summarization model that minimizes the auto-encoder reconstruction loss and the similarity loss. **Copycat** (Bražinskas et al., 2020b): An unsupervised multi-document summarization model that captures the dependency relationship between the product and reviews by defining a hierarchical VAE. Coop (Iso et al., 2021): An unsupervised multidocument summarization framework that searches input combinations for the summary aggregation using the input-output word overlapping. **Wassos** (Song et al., 2022): An unsupervised multidocument summarization framework that uses the Wasserstein barycenter of the semantic and syntactic distributions to obtain the summary. 
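As a concrete illustration of the preprocessing described in Appendix A (keeping only products or businesses with at least 10 reviews and limiting each review to 128 tokens), a minimal sketch is given below. This is only an illustration, not the released preprocessing code: the field names and the whitespace tokenizer are assumptions.

```python
# Illustrative sketch of the Appendix A filtering step; the field names
# ("product_id", "text") and whitespace tokenization are assumptions.
from collections import defaultdict

MIN_REVIEWS, MAX_TOKENS = 10, 128

def prepare(reviews):
    """reviews: iterable of dicts with 'product_id' and 'text' keys."""
    by_product = defaultdict(list)
    for r in reviews:
        tokens = r["text"].split()[:MAX_TOKENS]   # truncate each review to 128 tokens
        by_product[r["product_id"]].append(" ".join(tokens))
    # keep only products/businesses with at least 10 reviews
    return {pid: revs for pid, revs in by_product.items() if len(revs) >= MIN_REVIEWS}
```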
| Yelp | Yelp-Res | | | | | | |---------|--------------|-------------|--------------|--------------|-------------|--------------| | R1 | R2 | RL | R1 | R2 | RL | | | Max | | | | | | | | Coop | 34.36 | 7.12 | 19.96 | 34.21 | 6.49 | 19.19 | | TRACE-a | 34.24 | 7.00 | 19.68 | 34.39 | 7.07 | 19.90 | | TRACE-h | 34.66 | 7.05 | 19.70 | 34.80 | 7.81 | 20.10 | | Mean | | | | | | | | Coop | 33.92 ± 0.35 | 6.52 ± 0.35 | 19.13 ± 0.42 | 33.23 ± 0.58 | 6.57 ± 0.23 | 19.18 ± 0.13 | | TRACE-a | 33.90 ± 0.57 | 6.60 ± 0.22 | 19.29 ± 0.27 | 34.11 ± 0.26 | 6.67 ± 0.16 | 19.41 ± 0.29 | | TRACE-h | 34.13 ± 0.58 | 6.64 ± 0.25 | 19.44 ± 0.32 | 34.08 ± 0.45 | 6.86 ± 0.47 | 19.56 ± 0.34 | R1 R2 RL w/o con 37.90 7.53 **22.48** w/ sum-b 36.92 7.13 21.84 w/ sum-l 37.41 7.00 21.75 w/ text-b 36.60 7.21 21.79 w/ text-l 37.32 **7.65** 21.97 human 37.34 7.51 21.49 R1 R2 RL Number Val 36.18 6.80 21.02 224 Test **37.61** 7.10 21.63 256 1 36.32 6.66 21.47 294 2 36.73 6.74 21.21 221 1+2 36.58 **7.29 21.69** 515 1-large 36.54 7.10 21.64 490 2-large 36.04 6.52 20.83 475 1-l+2-l 36.80 6.70 21.69 965 | Val | This is a great product. It's very easy to use and it holds up well. The only problem is that the cord is a little weak, but it doesn't seem to be as good for the price. | |---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Test | This is a great product. It's very easy to use and clean. The only thing that would be better is the size of this one, but it's not a big deal for the money. | | 1 | This is a great product. It's very easy to use and clean. The only problem is that it doesn't take up much room in the kitchen, and has a nice feel to it. | | 2 | This is a great product. It's very easy to use and it works well. The only drawback is that the blue light is a little weak, but you can't get it to work for the price. | | 1+2 | This is a great product. It's very easy to use and it works well. The only drawback is that the blue light is a little weak, but it does not have to be in the way. | | 1-large | This is a great product. It's very easy to use and clean. The only drawback is that it has to be a little bit more expensive than the brand but it works. | | 2-large | This is a great product for the price. It's very comfortable and looks good. The only problem is that it doesn't hold up to the side of the screen, but it is a nice size. | | 1-l+2-l | This is a great product. It's very easy to use and it works well. The only drawback is that the blue light is a little weak, but you can't get it to work with. Table 9: Generated counter-template via the inputs in Table 4. | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 6 ✗ A2. Did you discuss any potential risks of your work? 6 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3 ✓ B1. Did you cite the creators of artifacts you used? 3.1 3.4 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? 
For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. describe in 4.2 ## C ✓ **Did You Run Computational Experiments?** Left Blank. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No response. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 4.4 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4.4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
oh-etal-2023-evaluation
Evaluation of Question Generation Needs More References
https://aclanthology.org/2023.findings-acl.396
Question generation (QG) is the task of generating a valid and fluent question based on a given context and the target answer. According to various purposes, even given the same context, instructors can ask questions about different concepts, and even the same concept can be written in different ways. However, the evaluation for QG usually depends on single reference-based similarity metrics, such as n-gram-based metric or learned metric, which is not sufficient to fully evaluate the potential of QG methods. To this end, we propose to paraphrase the reference question for a more robust QG evaluation. Using large language models such as GPT-3, we created semantically and syntactically diverse questions, then adopt the simple aggregation of the popular evaluation metrics as the final scores. Through our experiments, we found that using multiple (pseudo) references is more effective for QG evaluation while showing a higher correlation with human evaluations than evaluation with a single reference.
# Evaluation Of Question Generation Needs More References Shinhyeok Oh∗ Hyojun Go∗ **Yunsung Lee Hyeongdon Moon** Myeongho Jeong Hyun Seung Lee Seungtaek Choi† Riiid AI Research {shinhyeok.oh, hyojun.go, seungtaek.choi}@riiid.co, ## Abstract Question generation (QG) is the task of generating a valid and fluent question based on a given context and the target answer. According to various purposes, even given the same context, instructors can ask questions about different concepts, and even the same concept can be written in different ways. However, the evaluation for QG usually depends on single reference-based similarity metrics, such as ngram-based metric or learned metric, which is not sufficient to fully evaluate the potential of QG methods. To this end, we propose to paraphrase the reference question for a more robust QG evaluation. Using large language models such as GPT-3, we created semantically and syntactically diverse questions, then adopt the simple aggregation of the popular evaluation metrics as the final scores. Through our experiments, we found that using multiple (pseudo) references is more effective for QG evaluation while showing a higher correlation with human evaluations than evaluation with a single reference. ## 1 Introduction Question generation (QG) is the task of generating questions that are relevant to and answerable by given text. Since QG can be applied in not only educational scenarios (Kurdi et al., 2020; Steuer et al., 2021; Moon et al., 2022) but also improving question-answering tasks (Chen et al., 2021; Wang et al., 2018; Yu et al., 2020), designing better QG frameworks and their automatic evaluations have gained more attention (Chakrabarty et al., 2022; Ushio et al., 2022). However, previous QG works mostly evaluate their methods based on how similar the generated questions are to the gold reference questions (Chan and Fan, 2019; Zhou et al., 2017; Du and Cardie, 2018), using n-gram-based similarity metrics, such ∗ Equal Contribution. † Corresponding author. as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004). Given a single reference, these metrics do not account for the lexical and semantic diversity of questions (Zhang et al., 2020), showing poor correlation with human judgment (Liu et al., 2016; Novikova et al., 2017; Chaganty et al., 2018). Though prior works studied alternative metrics of leveraging language models, such as BERTScore (Zhang et al., 2020) and BLEURT (Sellam et al., 2020), such metrics are limited in that the diversity of gold questions is only implicitly represented in the embedding space, rather than data space (or, raw questions). To explicitly compare with the diverse gold questions in the data space, we propose to augment the single reference question for evaluating QG frameworks, which we call Multi-Reference Evaluation (MRE), by leveraging the few-shot ability of large language models (LLMs) like GPT-3 (Brown et al., 2020) and ChatGPT (OpenAI, 2022). Though there have been efforts to augment references for improving evaluations, they are either limited in other text generation tasks, such as machine translation (Bawden et al., 2020) and question answering (Liu et al., 2021), or the methods are hard to be applied in question generation tasks, as naive LLMs generate some negative (toxic or erroneous) questions (Wang et al., 2022b). Therefore, we utilize LLMs for paraphrasing to augment a reference question, rather than generating new questions from the given context. 
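As a concrete illustration of this procedure (formalized in Section 2), the following is a minimal sketch of the MRE scoring: the gold reference is paraphrased into N pseudo-references by an LLM, and a generated question is scored by the maximum metric value over the original and paraphrased references. The `paraphrase_with_llm` helper is a hypothetical placeholder for the prompting in Appendix A, and ROUGE-L (assuming the `rouge-score` package) stands in for any similarity metric.

```python
# Minimal sketch of Multi-Reference Evaluation (MRE); not the released code.
# `paraphrase_with_llm` is a hypothetical placeholder for the LLM prompting
# described in Appendix A; ROUGE-L stands in for any similarity metric M.
from typing import Callable, List
from rouge_score import rouge_scorer

_scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def rouge_l(reference: str, candidate: str) -> float:
    return _scorer.score(reference, candidate)["rougeL"].fmeasure

def mre_score(
    generated: str,
    gold_reference: str,
    paraphrase_with_llm: Callable[[str, int], List[str]],
    metric: Callable[[str, str], float] = rouge_l,
    n: int = 20,
) -> float:
    """Score = max metric value over the gold reference and its N LLM paraphrases."""
    references = [gold_reference] + paraphrase_with_llm(gold_reference, n)
    return max(metric(ref, generated) for ref in references)
```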
To the best of our knowledge, we are the first to apply reference augmentation to the evaluation of QG frameworks. We briefly summarize our main contributions as follows:

- We propose to augment the single reference for multiple reference evaluation (MRE), which explicitly accounts for syntactic and semantic variations of questions. Experimental results on the quiz design dataset (Laban et al., 2022) show that the performance of existing metrics is considerably improved when MRE is applied.
- MRE is metric-agnostic, so various metrics can be improved with our method. Since each existing metric offers different insights, such as BLEU for lexical similarity and BERTScore for semantic similarity, MRE strengthens these multiple lenses for investigating QG frameworks.
- We release the augmented reference questions as supplementary materials, which makes it possible to reproduce our results for further research. We further validated the correctness of the augmented references with human annotators.

## 2 Methodology

## 2.1 Single Reference Evaluation (SRE)

Previous works on QG evaluation measure the quality of a generated question $q^g$ with regard to a gold reference question $q^r$ as $M(q^g, q^r)$, where $M$ denotes a similarity metric widely used in QG evaluation, such as BLEU or ROUGE-L. However, since these metrics assume only one gold reference, even an appropriate question can be assigned a low score, namely the *false positive* problem.

## 2.2 Multi-Reference Evaluation (MRE)

To deal with this problem, we propose multi-reference evaluation, where the candidate question $q^g$ is compared with multiple references $Q = \{q^r_0, q^r_1, \ldots, q^r_N\}$:

$$s=\max_{i} M(q^r_i, q^g) \quad \text{for} \quad i=0,\ldots,N. \tag{1}$$

By comparing against more diverse gold questions with existing metrics, we can measure the ability of QG frameworks more realistically. Note that, as our method can adopt any similarity-based metric, we can gain useful insights from various metrics capturing different characteristics of generated questions. However, as it is impractical to collect such multiple references from human annotators, we leverage recent large language models, specifically GPT-3 and ChatGPT, and replace $Q$ with $\hat{Q}$. Given a reference question $q^r_0$, we augment it with $N$ questions:

$$\hat{Q}=\mathrm{LLM}(q^r_0). \tag{2}$$

Note that we give the gold question $q^r_0$ only, rather than the pair of context and question as in (Liu et al., 2021), because the zero-shot QG ability of LLMs is reportedly risky for educational purposes (Wang et al., 2022b). We thus use LLMs as a paraphrase generator, which reportedly works well since paraphrasing correlates closely with the training paradigms of LLMs (Chen et al., 2022). As GPT-3 is inferior to ChatGPT in the zero-shot setting, we employ the in-context learning ability of GPT-3 by providing three ChatGPT-paraphrased questions as demonstrations, as shown in Appendix A. We further investigate the correctness of the paraphrased questions in the experiments (Section 3.6).

## 3 Experiments

## 3.1 Dataset and Evaluation

To verify the effectiveness of MRE, we use the quiz design dataset (Laban et al., 2022) for measuring the correlation between automatic question evaluation and human annotation. The quiz design dataset includes 3,164 human-annotated samples, which consist of context, answer, and automatically generated questions.
For each sample, the human annotates whether the question is fluent, able to derive the given answer, and fits the context (1) or not (0). We define the gold human score of a question as the average of the discrete human annotations in [0, 1]. Then, we select questions with a human score of 1 as the reference question for the given passage. Finally, for the remaining questions, we measure the Pearson correlation coefficient (Freedman et al., 2007) and Spearman's rank correlation coefficient (Zar, 2005) between the human score and automatic evaluation scores. ## 3.2 Metrics Here, as we aim to enhance the existing QG evaluation metrics with multi-reference evaluation, we choose widely used metrics to apply multireference evaluation. We apply multi-reference evaluation to BLEU-4 (Papineni et al., 2002), ROUGE-L (Lin, 2004), METEOR (Banerjee and Lavie, 2005), BERTScore (Zhang et al., 2020), BLEURT (Sellam et al., 2020). Also, we add RQUGE (Mohammadshahi et al., 2022), which is a reference-free QG evaluation metric, as our baseline. We briefly summarize the metrics used in | Pearson Correlation | Spearman Correlation | | | | | | | | | | |-----------------------|------------------------|----------|------------------|----------|--------|--------|---------|--------|--------|--------| | MRE | MRE | | | | | | | | | | | SRE | SRE | | | | | | | | | | | HRQ-VAE | GPT-3 | GPT-3 | ChatGPT (0-shot) | HRQ-VAE | GPT-3 | GPT-3 | ChatGPT | | | | | (0-shot) | (3-shot) | (0-shot) | (3-shot) | (0-shot) | | | | | | | | BLEU-4 | 0.2028 | 0.2443 | 0.2782 | 0.3162 | 0.3630 | 0.2772 | 0.3224 | 0.2688 | 0.3021 | 0.3340 | | ROUGE-L | 0.2908 | 0.3325 | 0.3241 | 0.3447 | 0.3799 | 0.2787 | 0.3270 | 0.3050 | 0.3330 | 0.3637 | | RQUGE | 0.2932 | - | - | - | - | 0.2571 | - | - | - | - | | METEOR | 0.3447 | 0.2968 | 0.3480 | 0.3877 | 0.4116 | 0.3111 | 0.2822 | 0.3159 | 0.3562 | 0.3780 | | BERTScore | 0.3556 | 0.3634 | 0.3552 | 0.3877 | 0.4033 | 0.3462 | 0.3568 | 0.3327 | 0.3723 | 0.3859 | | MoverScore | 0.4383 | 0.3835 | 0.4297 | 0.4693 | 0.4953 | 0.3882 | 0.3643 | 0.3885 | 0.4214 | 0.4292 | | BLEURT | 0.4739 | 0.4287 | 0.4656 | 0.4803 | 0.5019 | 0.4566 | 0.4193 | 0.4456 | 0.4648 | 0.4816 | ## Our Experiments As Follows: - **BLEU-4** (Papineni et al., 2002) is a metric that utilizes n-gram precision to evaluate the similarity between a generated text and a reference text. The metric counts the number of occurrences of unigrams, bigrams, trigrams, and four-grams that match their corresponding counterparts in the reference text. - **ROUGE-L** (Lin, 2004) is a metric that utilizes unigram recall to evaluate the similarity between a generated text and a reference text. The metric counts the length of the longest common subsequence as the numerator rather than the exact number of matches. - **RQUGE** (Mohammadshahi et al., 2022) first predicts answer span with question answering model then computes score with scorer module from given generated question, gold answer, and context. Since RQUGE does not depend on a reference question for evaluation, we only report the correlation of the original RQUGE. - **METEOR** (Banerjee and Lavie, 2005) measures a score by using a combination of unigram-precision, unigram-recall, and fragmentation measures. - **BERTScore** (Zhang et al., 2020) utilize contextual embeddings for compute token similarity. We report BERTScore based on roberta-large. - **BLEURT** (Sellam et al., 2020) is a trained metric using a regression model trained on rating data. 
It combine expressivity and robustness by pre-training a fully learned metric Model Same answer Same meaning GPT-3 (0-shot) 0.77 0.79 GPT-3 (3-shot) 0.83 0.83 ChatGPT (0-shot) 0.92 0.93 Table 2: Human evaluation results of whether paraphrased question by the LLM has the same correct answer and meaning as the reference question. on large amounts of synthetic data, before fine-tuning it on human ratings. ## 3.3 Implementation Details We implemented the paraphrasing frameworks by using two LLMs: OpenAI GPT-3 API (Brown et al., 2020) and ChatGPT Webservice (OpenAI, 2022). For GPT-3, we set the model as "text-davinci-003" and the temperature as 0.5. For ChatGPT, we utilized the default setting since we cannot control it. Our prompts are described in Appendix A. We made 20 examples by using LLMs. For additional comparisons with the fine-tuned paraphrasing model, we also implemented HRQ-VAE (Hosking et al., 2022). ## 3.4 Main Results As shown in Table 1, we empirically validate the following observations of the advantages of diversified multi-reference evaluation: 1) Our multireference evaluation tends to improve the correlation between human score and evaluation metrics. 2) On LLMs, correlation with the human score is high in the order of ChatGPT (0-shot), GPT-3 (3shot), and GPT-3 (0-shot) paraphrasing framework. Specifically, GPT-3 (3-shot) and ChatGPT paraphrasing framework considerably improve both Pearson correlation and Spearman correlation for all metrics, while paraphrasing with GPT-3 (0-shot) ∆(MRE − SRE) Human Score BLEU-4 ROUGE-L METEOR BERTScore MoverScore BLEURT 1 + 0.2267 + 0.1221 + 0.1034 + 0.0592 + 0.0439 + 0.0400 0 + 0.0350 + 0.0846 + 0.0941 + 0.0398 + 0.0190 + 0.0373 Table 3: Score changes with multiple reference evaluation ∆(MRE − SRE) through ChatGPT for questions of human score 0 and 1. ![3_image_0.png](3_image_0.png) and HRQ-VAE failed at increasing correlations of some metrics. Also, the increase in correlation through MRE is related to the performance of the paraphrasing framework. As shown in Table 2, the paraphrase of the reference question is better in the order of ChatGPT, GPT-3 (3-shot), and GPT-3 (0-shot). Considering the effect of MRE is also in the same order, we conjecture that the performance of the paraphrasing framework is also important for the effect of MRE. More details in Table 2 are described in Section 3.6. ## 3.5 Analysis For Mre The effect of N We analyze the effect of the number of reference questions N by changing N to 1, 2, 5, 10, and 20. Figure 1 shows the change of the correlation coefficient according to the change of N. The results show that even if only one augmented reference question is used, the correlation is higher than that of the single reference evaluation. Also, if more augmented reference questions are used, the correlation with the human score increases and becomes saturated when N exceeds a certain level ## (N ≈ 5). Score change with multi-reference evaluation We further explore how MRE changes original metrics. Specifically, we report average score differences between the original metric and the multireference version of it with ChatGPT for accepted and unaccepted candidate questions. Questions with the human score of 1 and 0 are considered accepted questions and unaccepted questions, respectively. As shown in Table 3, multi-reference evaluation increases the score of accepted questions relatively more than that of an unaccepted question. 
For example, the BLEU-4 score increases by 0.2267 for accepted questions, compared to 0.0350 for unaccepted questions. These results indicate that multi-reference evaluation makes the original metrics more correlated with the human score by enlarging the scores of acceptable questions more than those of unacceptable questions.

## 3.6 Human Evaluation of Question Paraphrase

The assumption behind multi-reference evaluation is that most questions paraphrased by LLMs preserve the meaning of the gold questions. We conduct a human study to validate this assumption. For each of GPT-3 (0-shot), GPT-3 (3-shot), and ChatGPT, we sample 50 pairs of reference questions and paraphrased questions and annotate each pair according to whether the paraphrased question has the same meaning and the same answer as the reference question. Specifically, we ask two annotators to evaluate with a binary rating (1 for "same" and 0 for "not same"). As shown in Table 2, 92% and 93% of the questions paraphrased by ChatGPT are evaluated as having the same answer and meaning, respectively. In addition, even when paraphrasing with GPT-3 (3-shot), a high proportion of questions retain the same meaning and the same answer. We refer to Appendix B for more details about human annotation.
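The meta-evaluation used throughout Sections 3.4 and 3.5 boils down to correlating automatic metric scores with human scores; the following minimal sketch (with placeholder score lists, not the authors' evaluation script) shows how these coefficients can be computed with SciPy.

```python
# Sketch of the meta-evaluation: correlate automatic metric scores with
# human scores over the evaluated questions. Score lists are placeholders.
from scipy.stats import pearsonr, spearmanr

human_scores  = [1.0, 0.5, 0.0, 1.0]      # averaged binary annotations per question
metric_scores = [0.82, 0.47, 0.31, 0.76]  # e.g., MRE-augmented BLEU-4 per question

pearson, _ = pearsonr(metric_scores, human_scores)
spearman, _ = spearmanr(metric_scores, human_scores)
print(f"Pearson r = {pearson:.4f}, Spearman rho = {spearman:.4f}")
```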
## 3.7 Case Study For example in E1 in Table 4, one of the texts in paraphrased references matches the generated question. MRE achieves gains over SRE by 1.00 (0.00 → 1.00) on BLEU-4, and we found a positive effect on all other metrics. In E2, the text that received the highest score among paraphrased references differs from each metric. We can observe that MRE works well by showing that you can choose one of the paraphrased references that are measured to be similar for each metric. Moreover, score increases suggest that MRE leads to positive shifts in the metric scores when the human score is 1 (E1, E2). However, the score to utilize MRE cannot be lower than SRE in any example because MRE takes the maximum score for the true reference and paraphrased references. Thus, if the human score is low, it is important to have a small negative effect. One may ask about the risk of MRE giving a higher score than SRE for wrong questions as in E3. However, we argue that it doesn't weaken the strength of MRE as the gaps between SRE and MRE for wrong questions are relatively smaller than that for correct questions, which we compared in Table 3. ## 4 Conclusion & Future Work In this paper, we studied the problem of evaluating the question generation frameworks, and observed that automatically augmenting the reference question with large language models is surprisingly effective, showing higher correlations with humanannotated scores. Though we evaluated the effectiveness of multiple reference evaluations for testtime evaluations, where the gold human score is given, we hope future research to explore other scenarios, such as measuring validation performance (asking how much the test performance can be actually improved) and multi-reference training as in (Jeong et al., 2021). Exploring other tasks (machine translation and document summarization) or generation methods (giving context and the reference question together to LLMs) would be interesting for future research. ## 5 Limitations Inapplicability to reference-free evaluation: Since our MRE supposes that there is an available reference question to be augmented (paraphrased), it is not applicable to reference-free question evaluations such as QRelScore (Wang et al., 2022a) and RQUGE (Mohammadshahi et al., 2022). Inapplicability for answer-unconditional QG frameworks: MRE can't be applied to answerunconditional QG frameworks because it only augments the reference question by paraphrasing without considering other possible questions of supposing other answers. Large computations: To generate multi-reference questions, our method requires inference of large language models, which results in huge computational costs. Therefore, this can become burdensome as the test dataset grows. ## 6 Ethical Considerations We honor and support the ACL code of Ethics. In order to conduct our human annotation for paraphrased sentences, two humans are recruited. We make sure that humans would be paid a wage of 15 dollars per hour. ## References Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In *Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization*, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics. Rachel Bawden, Biao Zhang, Lisa Yankovskaya, Andre Tättar, and Matt Post. 2020. A study in improving BLEU reference coverage with diverse automatic paraphrasing. 
In *Findings of the Association for* Computational Linguistics: EMNLP 2020, pages 918–932, Online. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Arun Chaganty, Stephen Mussmann, and Percy Liang. 2018. The price of debiasing automatic metrics in natural language evalaution. In *Proceedings of the* 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 643–653, Melbourne, Australia. Association for Computational Linguistics. Tuhin Chakrabarty, Justin Lewis, and Smaranda Muresan. 2022. Consistent: Open-ended question generation from news articles. Ying-Hong Chan and Yao-Chung Fan. 2019. A recurrent BERT-based model for question generation. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 154–162, Hong Kong, China. Association for Computational Linguistics. Yiran Chen, Pengfei Liu, and Xipeng Qiu. 2021. Are factuality checkers reliable? adversarial metaevaluation of factuality in summarization. In *Findings of the Association for Computational Linguistics:* EMNLP 2021, pages 2082–2095. Zheng Chen, Hu Yuan, and Jiankun Ren. 2022. Zeroshot domain paraphrase with unaligned pre-trained language models. *Complex & Intelligent Systems*, pages 1–14. Xinya Du and Claire Cardie. 2018. Harvesting paragraph-level question-answer pairs from Wikipedia. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1907–1917, Melbourne, Australia. Association for Computational Linguistics. David Freedman, Robert Pisani, Roger Purves, and Ani Adhikari. 2007. Statistics. Tom Hosking, Hao Tang, and Mirella Lapata. 2022. Hierarchical sketch induction for paraphrase generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2489–2501, Dublin, Ireland. Association for Computational Linguistics. Myeongho Jeong, Seungtaek Choi, Jinyoung Yeo, and Seung-won Hwang. 2021. Label and context augmentation for response selection at dstc8. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:2541–2550. Ghader Kurdi, Jared Leo, Bijan Parsia, Uli Sattler, and Salam Al-Emari. 2020. A systematic review of automatic question generation for educational purposes. International Journal of Artificial Intelligence in Education, 30(1):121–204. Philippe Laban, Chien-Sheng Wu, Lidiya Murakhovs'ka, Wenhao Liu, and Caiming Xiong. 2022. Quiz design task: Helping teachers create quizzes with automated question generation. In *Findings of* the North American Chapter of the Association for Computational Linguistics: NAACL 2022. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 
2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122–2132, Austin, Texas. Association for Computational Linguistics. Ruibo Liu, Jason Wei, and Soroush Vosoughi. 2021. Language model augmented relevance score. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pages 6677–6690, Online. Association for Computational Linguistics. Alireza Mohammadshahi, Thomas Scialom, Majid Yazdani, Pouya Yanki, Angela Fan, James Henderson, and Marzieh Saeidi. 2022. Rquge: Reference-free metric for evaluating question generation by answering the question. *arXiv preprint arXiv:2211.01482*. Hyeongdon Moon, Yoonseok Yang, Hangyeol Yu, Seunghyun Lee, Myeongho Jeong, Juneyoung Park, Jamin Shin, Minsam Kim, and Seungtaek Choi. 2022. Evaluating the knowledge dependency of questions. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 10512–10526, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Jekaterina Novikova, Ondˇrej Dušek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241–2252, Copenhagen, Denmark. Association for Computational Linguistics. OpenAI. 2022. Chatgpt: Optimizing language models for dialogue. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics. Tim Steuer, Anna Filighera, Tobias Meuser, and Christoph Rensing. 2021. I do not understand what i cannot define: Automatic question generation with pedagogically-driven content selection. arXiv preprint arXiv:2110.04123. Asahi Ushio, Fernando Alva-Manchego, and Jose Camacho-Collados. 2022. Generative language models for paragraph-level question generation. Xiaoqiang Wang, Bang Liu, Siliang Tang, and Lingfei Wu. 2022a. Qrelscore: Better evaluating generated questions with deeper understanding of contextaware relevance. *arXiv preprint arXiv:2204.13921*. Yansen Wang, Chenyi Liu, Minlie Huang, and Liqiang Nie. 2018. Learning to ask questions in open-domain conversational systems with typed decoders. In *Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long* Papers), pages 2193–2203. Zichao Wang, Jakob Valdez, Debshila Basu Mallick, and Richard G. Baraniuk. 2022b. Towards humanlike educational question generation with large language models. In *Artificial Intelligence in Education*, pages 153–166, Cham. Springer International Publishing. Qian Yu, Lidong Bing, Qiong Zhang, Wai Lam, and Luo Si. 2020. Review-based question generation with adaptive instance transfer and augmentation. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 280–290. Jerrold H Zar. 2005. Spearman rank correlation. *Encyclopedia of biostatistics*, 7. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with BERT. In *8th International* Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural question generation from text: A preliminary study. In *Natural Language Processing and Chinese Computing* - 6th CCF International Conference, NLPCC 2017, Dalian, China, November 8-12, 2017, Proceedings, volume 10619 of *Lecture Notes in Computer Science*, pages 662–671. Springer. ## A Llm Prompts In this section, we describe the details of how to paraphrase the questions with LLMs. To avoid confusion, we report it with an example question. ## A.1 Chatgpt (0-Shot) & Gpt-3 (0-Shot) We use 0-shot input for ChatGPT and GPT-3 as: Please paraphrase the following sentence 20 times: Where is the Statue of Liberty located? ## A.2 Gpt-3 (3-Shot) For the 3-shot prompt, we use the examples paraphrased by ChatGPT as the 3-shot. Our 3-shot prompt is described as: Please paraphrase the following sentence 20 ,→ times: Sentence: What is this software found useful for? 1. For what purpose is this software deemed ,→ useful? 2. What are the uses of this software? 3. Can you tell me what this software is useful ,→ for? 4. What are the benefits of this software? 5. In what ways is this software found to be ,→ useful? 6. What are the applications of this software? 7. Can you explain the usefulness of this ,→ software? 8. What does this software excel at? 9. Can you tell me what tasks this software is ,→ useful for? 10. In what scenario is this software useful? 11. Can you describe the utility of this ,→ software? 12. What is the purpose of this software? 13. For what is this software commonly used? 14. What does this software aid in? 15. Can you tell me the function of this ,→ software? 16. What are the advantages of this software? 17. What is this software good for? 18. In what ways does this software provide ,→ value? 19. What is the benefit of using this software? 20. Can you tell me what this software is ,→ commonly used for? Sentence: Who is the girl? 1. Can you tell me the name of the young lady? 2. Who is the female in question? 3. Could you identify the girl for me? 4. To whom are you referring as the girl? 5. The girl, who is she? 6. Could you tell me who the girl is? 7. Who is the young woman being spoken of? 8. Can you name the girl in question? 9. Who is the lady in question? 10. Could you give me the name of the girl? 11. Who is being referred to as the girl? 12. Can you tell me who the female is? 13. Who is the girl being discussed? 14. Can you identify the young lady? 15. The girl, can you tell me her name? 16. Who is the subject of the girl? 17. Can you name the female in question? 18. Who is the girl you are asking about? 19. Can you provide the name of the girl? 20. Who is the young woman being referred to? Sentence: Sentence: Where is the Eiffel Tower? 1. Can you tell me the location of the Eiffel ,→ Tower? 2. Could you inform me where the Eiffel Tower is ,→ situated? 3. I'm wondering where the Eiffel Tower is ,→ located? 4. The Eiffel Tower, where can I find it? 5. Could you give me the whereabouts of the ,→ Eiffel Tower? 6. 
The Eiffel Tower, where is it located? 7. Can you indicate the location of the Eiffel ,→ Tower? 8. Can you provide me with the location of the ,→ Eiffel Tower? 9. Where can I find the Eiffel Tower? 10. The Eiffel Tower, where is it situated? 11. Can you tell me where the Eiffel Tower is ,→ located? 12. Could you give me the location of the Eiffel ,→ Tower? 13. Where is the Eiffel Tower situated? 14. The Eiffel Tower, where is it found? 15. Could you inform me where the Eiffel Tower ,→ can be found? 16. Can you give me the whereabouts of the Eiffel ,→ Tower? 17. Where is the Eiffel Tower located? 18. The Eiffel Tower, where is it positioned? 19. Can you indicate the whereabouts of the ,→ Eiffel Tower? 20. Can you provide me with the whereabouts of ,→ the Eiffel Tower? Sentence: ## B Human Annotation Two annotators participate in our study. All the pairs from paraphrasing LLMs are randomly shuffled and anonymized, and each pair is evaluated by the following two dimensions: Same Answer Human annotators check whether the paraphrased question has the same answer as the reference question. Annotation is performed by binary rate, 1 for "having the same answer" and 0 for "having the different answer". Same meaning It checks whether the paraphrased question has the same meaning as the reference question. Humans annotate the question as 1 for "having the same meaning" and 0 for "having a different meaning". The inter-annotator agreement is 0.24 for the same meaning, and 0.21 for the same answer. Although the agreement was low due to the difference in their standards, the model preference was clearly preserved for both annotators. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 5 ✓ A2. Did you discuss any potential risks of your work? 5 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✓ A4. Have you used AI writing assistants when working on this paper? Grammarly, correct grammar for all sections ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3.1 ✓ B1. Did you cite the creators of artifacts you used? 3.1 B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. 
For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✓ **Did You Run Computational Experiments?** 3 ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? just using api service for augmentation The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 3.3 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 3 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 3.2, 3.3 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** 3.6 D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? 6 ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? just evaluation for automatic generated data D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? just evaluation for automatic generated data
tang-etal-2023-xtremeclip
XtremeCLIP: Extremely Parameter-efficient Tuning for Low-resource Vision Language Understanding
https://aclanthology.org/2023.findings-acl.397
Recently, Contrastive Visual-Language Pre-training (CLIP) has demonstrated remarkable capability in various Visual Language Understanding (VLU) tasks. Yet, most CLIP-based methods require tasks-specific designs and sufficient training data. In this paper, we introduce a simple yet efficient paradigm for low-resource VLU named XtremeCLIP, which involves very few trainable parameters to improve the generalization ability of the trained models. In our XtremeCLIP framework, we reformulate a series of VLU tasks as a unified open-book affinity-matching problem. Furthermore, to handle the insufficient supervised signals in small datasets, we adopt contrastive learning to utilize the implicit sorting information of ground-truth labels to provide more supervised cues. Extensive experiments over multiple datasets on visual entailment, visual question answering, and image classification show that XtremeCLIP consistently outperforms existing baselines in low-resource settings.
# Xtremeclip: Extremely Parameter-Efficient Tuning For Low-Resource Vision Language Understanding Moming Tang1, Chengyu Wang2, Jianing Wang1, Chuanqi Tan2**, Songfang Huang**2, Cen Chen1∗ , **Weining Qian**1 1East China Normal University, 2Alibaba Group [email protected],[email protected], [email protected], {chuanqi.tcq,songfang.hsf}@alibaba-inc.com, {cenchen,wnqian}@dase.ecnu.edu.cn ## Abstract Recently, Contrastive Visual-Language Pretraining (CLIP) has demonstrated remarkable capability in various Visual Language Understanding (VLU) tasks. Yet, most CLIP-based methods require tasks-specific designs and sufficient training data. In this paper, we introduce a simple yet efficient paradigm for lowresource VLU named XtremeCLIP, which involves very few trainable parameters to improve the generalization ability of the trained models. In our XtremeCLIP framework, we reformulate a series of VLU tasks as a unified open-book affinity-matching problem. Furthermore, to handle the insufficient supervised signals in small datasets, we adopt contrastive learning to utilize the implicit sorting information of ground-truth labels to provide more supervised cues. Extensive experiments over multiple datasets on visual entailment, visual question answering, and image classification show that XtremeCLIP consistently outperforms existing baselines in low-resource settings. 1 ## 1 Introduction Pre-trained Visual-Language models such as XVLM (Zeng et al., 2021) and CLIP (Radford et al., 2021) have been proposed to unify visual and textual representations in the same embedding space and shown great potential for Visual Language Understanding (VLU). Conventional fine-tuning approaches (Clark et al., 2020; Lee et al., 2020; Wang et al., 2023) heavily depend on the time-consuming and labor-intensive process of data annotation, which are bothersome in low-resource scenarios. In the literature, Ben Zaken et al. (2022); Song et al. (2022) propose partial-parameter fine-tuning to preserve the pre-trained knowledge of these models. Yao et al. (2021); Song et al. (2022); Tsimpoukelli et al. (2021) reformulate visual grounding and visual question answering as a "fill-inblank" problem by hand-crafted prompts. Gao et al. (2021); Zhang et al. (2022) utilize lightweight adapters (Houlsby et al., 2019) to retain the knowledge of CLIP. Besides, Zhou et al. (2022b,a); Zhu et al. (2022) address image classification tasks by utilizing textual representations describing image categories. Despite the success, we suggest there are still some drawbacks in existing works. i) The discrete prompt paradigm requires labor-intensive promptengineering, while the soft template paradigm results in an unstable training process. ii) Adapters or partial-parameter fine-tuning methods may underperform due to their relatively large number of tunable parameters, requiring additional training data to achieve satisfactory results. iii) The aforementioned methods are task-specific in design, implying that their effectiveness may be derived from task-specific architectures. Hence, it is vital for us to design a more unified parameter-efficient tuning approach in order to solve various VLU tasks. In this paper, we present XtremeCLIP, an extremely parameter-efficient tuning method for solving various VLU tasks based on CLIP (Radford et al., 2021). XtremeCLIP reformulates a series of VLU tasks uniformly into an open-book affinitymatching problem. 
Here, we adopt a knowledge-base prototype matrix to record the salient characteristics of each class through visual-textual fusion features, and then perform affinity matching between image-text pairs and the prototypes of each class. We further utilize the implicit sorting information of ground-truth labels via contrastive learning to provide more supervised cues from low-resource training sets. During model training, all parameters of the textual and visual encoders in CLIP are fixed. Hence, XtremeCLIP is extremely parameter-efficient. We conduct extensive experiments on a visual entailment (VE) benchmark (i.e., SNLI-VE), a visual question answering (VQA) benchmark (i.e., VQA v2), and three widely used image classification (IC) benchmarks (i.e., EuroSAT, DTD, and FGVC). Results show that XtremeCLIP consistently outperforms baselines in low-resource scenarios.

## 2 XtremeCLIP: The Proposed Method

The model architecture and training procedure of XtremeCLIP are shown in Figure 1. First, a knowledge-base prototype matrix is constructed (Snell et al., 2017) by combining visual and textual features, designed to serve as a repository of the key characteristics of each class. Then, open-book affinity matching is performed between the image-text instance and the prototypes of each class.

## 2.1 Prototype Matrix Construction

Given a set of $N$ image-text training instances $\mathcal{D} = \{(\mathrm{img}_i, \mathrm{txt}_i), l_i\}_{i=1}^{N}$, where $l_i$ denotes the ground-truth label and $\mathrm{txt}_i$ denotes the corresponding textual description of the image $\mathrm{img}_i$, image-text pairs are encoded using the visual encoder $\mathcal{V}$ and textual encoder $\mathcal{T}$ of CLIP:

$$v_{1}=\mathcal{V}(\mathrm{img}_{i}),\quad v_{2}=\mathcal{T}(\mathrm{txt}_{i}),\quad v_{1},v_{2}\in\mathbb{R}^{d}\tag{1}$$

A fusion function $\mathcal{F}$ is employed to obtain uniform image-text representations that capture the interactions between visual and textual information:

$$\mathcal{F}(v_{1},v_{2})=[v_{1},v_{2},v_{1}+v_{2},v_{1}-v_{2},v_{1}\times v_{2}]\tag{2}$$

where $\mathcal{F}(v_{1},v_{2})\in\mathbb{R}^{5d}$. These fusion features are used to construct the knowledge-base prototype matrix, denoted as $W_{P}$, by averaging them per their ground-truth labels:

$$M_{c}=\frac{\sum_{i=1}^{N}I(l_{i}=c)\cdot\mathcal{F}(\mathcal{V}(\mathrm{img}_{i}),\mathcal{T}(\mathrm{txt}_{i}))}{\sum_{i=1}^{N}I(l_{i}=c)}\tag{3}$$

$$W_{P}=\left[M_{1},\cdots,M_{C}\right],\quad W_{P}\in\mathbb{R}^{C\times5d}\tag{4}$$

where $C$ denotes the number of classes, $M_{c}$ denotes the prototype of the $c$-th class with $c\in1\cdots C$, $I(\cdot)$ denotes the indicator function, and $[\cdot]$ denotes the concatenation operator.

## 2.2 Open-Book Matching

**Prototype Matching for VE and VQA.** In VE or VQA, affinity matching is performed between the fusion feature of a given image-text pair and the prototypes of each class: $P_{i}=\mathcal{F}(\mathcal{V}(\mathrm{img}_{i}),\mathcal{T}(\mathrm{txt}_{i}))\cdot W_{P}^{\top}$.

**Prototype Matching for IC.** In traditional IC tasks, only images are provided, without corresponding textual descriptions. We obtain textual descriptions (prompts) for all classes, following Radford et al. (2021). Given an image $\mathrm{img}_{i}$ and the textual descriptions of all image categories $\{t_{c}\mid c=1\cdots C\}$, the predicted probability (denoted as $P_{i,c}$) of $\mathrm{img}_{i}$ w.r.t. the $c$-th image category is $P_{i,c}=\mathcal{F}(\mathcal{V}(\mathrm{img}_{i}),\mathcal{T}(t_{c}))\cdot M_{c}^{\top}$. Thus, the entire probability distribution is $P_{i}=[P_{i,c}\mid c=1\cdots C]$.

## 2.3 Training Paradigm

XtremeCLIP has only one set of tunable parameters, namely the prototype matrix $W_{P}$.
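To make Sections 2.1 and 2.2 concrete, below is a minimal PyTorch-style sketch of prototype construction and open-book matching. It is an illustration rather than the authors' released implementation: function and variable names are ours, and the $d$-dimensional CLIP image/text features are assumed to be precomputed by the frozen encoders.

```python
# Minimal sketch (not the authors' code) of Eqs. (1)-(4) and the open-book
# matching in Section 2.2. CLIP features are assumed to be precomputed.
import torch


def fuse(v1: torch.Tensor, v2: torch.Tensor) -> torch.Tensor:
    """Fusion feature F(v1, v2) in R^{5d}, Eq. (2)."""
    return torch.cat([v1, v2, v1 + v2, v1 - v2, v1 * v2], dim=-1)


@torch.no_grad()
def build_prototype_matrix(img_feats, txt_feats, labels, num_classes):
    """W_P in R^{C x 5d}: class-wise mean of fusion features, Eqs. (3)-(4)."""
    fused = fuse(img_feats, txt_feats)              # (N, 5d)
    w_p = torch.zeros(num_classes, fused.size(-1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            w_p[c] = fused[mask].mean(dim=0)        # prototype M_c
    return w_p


def match_ve_vqa(img_feat, txt_feat, w_p):
    """Affinity P_i between one image-text pair and all class prototypes."""
    return fuse(img_feat, txt_feat) @ w_p.T         # (C,)


def match_ic(img_feat, class_txt_feats, w_p):
    """IC: pair the image with each class prompt t_c and match against M_c."""
    scores = [fuse(img_feat, class_txt_feats[c]) @ w_p[c]
              for c in range(w_p.size(0))]
    return torch.stack(scores)                      # (C,)
```

In such a sketch, `w_p` (i.e., $W_P$) would then be registered as the only trainable tensor, in line with the training paradigm described next.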
Its fusion function, visual, and textual encoders are solely utilized for constructing the prototype matrix, with all parameters frozen during the training phase. In XtremeCLIP, the model is trained using the Cross-Entropy (CE) loss given Pi. The sample-wise CE loss is defined as follows: $$L_{C E}=-\sum_{c\in\{1\cdots C\}}l_{i,c}\cdot\log P_{i,c}\qquad\quad(5)$$ where li,c denotes the ground-truth label w.r.t. the c-th class. However, the model can hardly achieve satisfactory performance with only supervised signals from CE in low-resource tasks. Given that instances' affinity with ground-truth classes should be ranked higher than other classes, this implicit sorting information can be utilized to guide the model to recognize instances' ground-truth classes via contrastive learning (Zhong et al., 2020). We define the affinity of the ground-truth category (i.e., the prototype matching probability, denoted as Pi,l) as positive samples and other affinities in Pi as negative samples. Following Liu et al. (2022); Liu and Liu (2021), the sample-wise Contrastive Learning (CL) loss is computed as: $$L_{CL}=\sum_{c=1}^{C}\max(0,P_{i,l}-P_{i,c}).\tag{6}$$ The total loss function for XtremeCLIP, namely $L$ is defined as: L = LCE + LCL. ## 3 Experiments 3.1 Experimental Settings We briefly describe the experimental settings and leave more details in Appendix. Datasets. SNLI-VE (Xie et al., 2018) is utilized for visual entailment, consisting of image-text pairs whereby a premise is defined by an image. VQA v2 (Goyal et al., 2017) is utilized for visual question answering, containing questions about images. Here, we only consider the yes/no samples. Questions with open answers require decoder models and are not the focus of this paper For IC, **EuroSAT** (Helber et al., 2019) contains satellite images consisting out of 10 categories. DTD (Cimpoi et al., 2014) contains describable textures images with 47 classes. **FGVC** (Maji et al., 2013) contains images of 102 aircraft model variants. Baselines. In our work, we compare XtremeCLIP with zero-shot CLIP (Radford et al., 2021); finetuning paradigms including standard fine-tuning, mixout (Lee et al., 2020), pre-trained weight decay (weight decay) (Lee et al., 2020) and Layerwise Learning Rate Decay (LLRD) (Clark et al., 2020); partial-parameter fine-tuning paradigms including BitFit (Ben Zaken et al., 2022) and BiNor (Song et al., 2022); and adapter-based methods including CLIP-Adapter (Gao et al., 2021) and Tip-Adapter (Zhang et al., 2022). Backbone. For fair comparison, all baselines and our approach adopt the ViT-B/16 (ViT-Base with the patch size 16 × 16) version of CLIP. Other versions of CLIP are also experimented with. ## 3.2 Experimental Results VE&VQA results in low-resource settings. Table 1 presents the results of XtremeCLIP and baselines, in low-resource VE and VQA. The finetuning paradigms perform worse than partial finetuning paradigms in all settings, which demonstrates conventional fine-tuning paradigms are datahungry and not suitable for low-resource VLU tasks. XtremeCLIP consistently outperforms partial fine-tuning and adapter-based methods, showing that reformulating VLU tasks as prototype affinity matching can efficiently utilize visual-textual information with much fewer trainable parameters. Few-shot IC. Table 2 presents the performance of XtremeCLIP and baselines, in few-shot IC. Finetuning paradigms are still not suitable for fewshot image classification. 
Unlike BiNor and BitFit, CLIP-Adapter and Tip-Adapter specifically utilize adapters to learn from low-resource datasets meanwhile preserving the knowledge of CLIP, thus performing the best among baselines. Although XtremeCLIP has fewer trainable parameters than baselines, it still performs the best thanks to the supervised cues provided by contrastive learning and our task modeling approach. Ablation study. We replace the prototype matrix of XtremeCLIP with a randomly initialized matrix i.e., XtremeCLIP w/o. proto). We also detach the contrastive loss from XtremeCLIP (i.e., XtremeCLIP w/o. cl), or replace the fusion feature with the concatenation of visual and textual features (i.e., XtremeCLIP w/o. fusion). Table 3 presents the results of XtremeCLIP and its ablations. Detaching contrastive loss drops the performance, as contrastive learning provides more supervised cues. Reformulating VLU tasks as prototype affinity matching is somehow an open-book retrieval problem, which can augment model performance (Chen et al., 2022). Replacing the fusion feature drastically drops performance for VLU, which demonstrates the importance of interaction | Method | # Params. | SNLI-VE | VQA v2 | Avg | | | | | |----------------------------------|-------------|-----------|----------|-------|-------|-------|-------|-------| | 2k | 5k | 10k | 2k | 5k | 10k | | | | | Zero-shot learning | 0 | 33.74 | 52.03 | 42.89 | | | | | | Full fine-tuning | 149M | 47.31 | 48.12 | 51.10 | 52.79 | 53.29 | 54.10 | 51.12 | | LLRD (Clark et al., 2020) | 149M | 50.18 | 55.35 | 57.23 | 52.06 | 52.90 | 53.88 | 53.60 | | mixout (Lee et al., 2020) | 149M | 50.19 | 53.97 | 55.16 | 53.17 | 53.86 | 53.83 | 53.36 | | weight decay (Lee et al., 2020) | 149M | 50.68 | 54.07 | 55.09 | 53.18 | 53.92 | 53.81 | 53.46 | | BitFit (Ben Zaken et al., 2022) | 176-178K | 54.88 | 58.02 | 59.56 | 52.96 | 53.84 | 54.72 | 55.66 | | BiNor (Song et al., 2022) | 208-210K | 54.91 | 58.03 | 59.54 | 52.93 | 53.83 | 54.75 | 55.67 | | CLIP-Adapter (Gao et al., 2021) | 131K-262K | 54.77 | 57.83 | 59.21 | 53.21 | 53.45 | 54.21 | 55.45 | | Tip-Adapter (Zhang et al., 2022) | 5-10M | 54.65 | 58.11 | 59.67 | 52.94 | 53.63 | 54.70 | 55.62 | | XtremeCLIP | 5-7K | 55.61 | 59.53 | 62.06 | 53.51 | 56.44 | 59.21 | 57.73 | Table 1: Accuracy (%) on Visual Entailment and Visual Question Answering tasks with 2000, 5000, 10000 training samples. Here, \#Params. denotes the number of tunable parameters. Best results are in bold. Method # Params. EuroSat (10) DTD (47) FGVC (102) Avg 8 shot 16 shot 8 shot 16 shot 8 shot 16 shot Zero-shot learning 0 48.43 44.27 24.8 39.17 Full fine-tuning 149M 62.99 67.75 62.06 64.78 27.72 28.14 52.24 LLRD (Clark et al., 2020) 149M 70.91 75.58 64.30 69.39 30.18 31.36 56.95 mixout (Lee et al., 2020) 149M 70.85 72.23 64.07 68.97 28.98 30.24 55.89 weight decay (Lee et al., 2020) 149M 70.93 72.17 64.01 69.09 29.04 30.03 55.88 BitFit (Ben Zaken et al., 2022) 196∼427K 74.15 83.59 64.36 66.43 38.52 41.61 61.44 BiNor (Song et al., 2022) 228∼459K 78.63 86.59 65.07 70.04 38.43 41.73 63.42 CLIP-Adapter (Gao et al., 2021) 131∼262K 81.85 88.37 65.07 71.10 40.17 44.88 65.24 Tip-Adapter (Zhang et al., 2022) 84∼979K 82.02 87.49 67.32 71.81 39.51 45.12 65.55 XtremeCLIP 25∼256K 82.57 89.19 67.61 72.81 42.66 48.30 **67.19** Table 2: Accuracy (%) on Image Classification tasks with 8-shot and 16-shot images of EuroSat, DTD and FGVC. Here, (·) stands for the number of image categories. Best results are in bold. Table 3: Accuracy (%) of XtremeCLIP and its ablations. 
Table 4: Accuracy (%) of XtremeCLIP and full finetuning (FT) utilizing various CLIP backbones. ## Between Visual And Textual Information. Model-scale study. We test various CLIP versions with results in Table 4. The settings are the same as in ablation study. It shows that XtremeCLIP can be effectively adapted to different CLIPs and consistently has good performance. Data-scale study. Figure 2 presents the influence of the number of training instances on XtremeCLIP. As the number of training data increases, the accuracy of XtremeCLIP on VE and FGVC signifi- | Method | VE (10k) | VQA (10k) | FGVC (16) | |-------------|---------------|---------------|---------------| | Full | 62.06 | 59.21 | 48.30 | | w/o. cl | 60.94 (-1.12) | 54.90 (-4.31) | 48.21 (-0.09) | | w/o. proto | 61.98 (-0.08) | 55.45 (-3.76) | 48.15 (-0.15) | | w/o. fusion | 58.25 (-3.81) | 54.62 (-4.59) | 48.09 (-0.21) | ![3_image_0.png](3_image_0.png) | Backbone | Method | VE | VQA | FGVC | |------------|----------|-------|-------|--------| | ViT-B/16 | Full FT | 51.10 | 54.12 | 28.14 | | XtremeCLIP | 62.06 | 59.21 | 48.30 | | | ViT-B/32 | Full FT | 52.88 | 57.13 | 21.54 | | XtremeCLIP | 61.09 | 58.71 | 40.29 | | | ViT-L/14 | Full FT | 54.59 | 56.05 | 28.86 | | XtremeCLIP | 61.79 | 59.12 | 58.93 | | cantly increases while full fine-tuning only slightly increases, which demonstrates XtremeCLIP has higher data-using efficiency. ## 4 Conclusion We propose XtremeCLIP, a simple and efficient paradigm that reformulates VLU tasks as a prototype affinity matching problem. We adopt contrastive learning to leverage implicit sorting information from ground-truth labels, providing more supervised cues to handle insufficient supervised signals in small datasets. Experimental results demonstrate that XtremeCLIP consistently outperforms all baselines in low-resource scenarios. ## Limitations In this paper, the proposed XtremeCLIP framework is mainly focused on CLIP-based deterministic VLU tasks. In future work, we will extend XtremeCLIP to other Pre-trained Vision-Language models and apply XtremeCLIP to generative tasks such as image captioning, visual grounding or visual relation extraction. ## Acknowledgments This work was supported by the National Natural Science Foundation of China under Grant No. 62202170 and Alibaba Group through the Alibaba Innovation Research Program. ## References Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel. 2022. BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models. In ACL, pages 1–9. Xiang Chen, Lei Li, Ningyu Zhang, Chuanqi Tan, Fei Huang, Luo Si, and Huajun Chen. 2022. Relation Extraction as Open-Book Examination: Retrievalenhanced prompt tuning. In *SIGIR*, pages 2443– 2448. M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, , and A. Vedaldi. 2014. Describing Textures in the Wild. In *CVPR*, pages 3606–3613. Kevin Clark, MinhThang Luong, Quoc Le, and Christopher D. Manning. 2020. Pre-Training Transformers as Energy-Based Cloze Models. In *EMNLP*, pages 285–294. Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, and Yu Qiao. 2021. CLIP-Adapter: Better VisionLanguage Models with Feature Adapters. *ArXiv*. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In *CVPR*, pages 398– 414. Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. 2019. 
Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. *IEEE Journal of Selected Topics* in Applied Earth Observations and Remote Sensing, pages 2217–2226. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-Efficient Transfer Learning for NLP. In ICML, page 2790–2799. Cheolhyoung Lee, Kyunghyun Cho, and Wanmo Kang. 2020. Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models. In *ICLR*. Yixin Liu and Pengfei Liu. 2021. SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization. In ACL, pages 1065–1072. Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. 2022. BRIO: Bringing Order to Abstractive Summarization. In ACL, pages 2890–2903. S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi. 2013. Fine-Grained Visual Classification of Aircraft. Technical report. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning Transferable Visual Models From Natural Language supervision. In *ICML*, pages 8748–8763. Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical Networks for Few-shot Learning. In NeurIPS, pages 4080–4090. Haoyu Song, Li Dong, Weinan Zhang, Ting Liu, and Furu Wei. 2022. CLIP Models are Few-Shot Learners: Empirical Studies on VQA and Visual Entailment. In ACL, pages 6088–6100. Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, S. M. Ali Eslami, Oriol Vinyals, and Felix Hill. 2021. Multimodal Few-Shot Learning with Frozen Language Models. In *NeurIPS*, pages 200–212. Chengyu Wang, Minghui Qiu, Taolin Zhang, Tingting Liu, Lei Li, Jianing Wang, Ming Wang, Jun Huang, and Wei Lin. 2022. Easynlp: A comprehensive and easy-to-use toolkit for natural language processing. In *EMNLP (System Demonstrations)*, pages 22–29. Xiaodan Wang, Lei Li, Zhixu Li, Xuwu Wang, Xiangru Zhu, Chengyu Wang, Jun Huang, and Yanghua Xiao. 2023. AGREE: aligning cross-modal entities for image-text retrieval upon vision-language pre-trained models. In *WSDM*, pages 456–464. Ning Xie, Farley Lai, Derek Doran, and Asim Kadav. 2018. Visual Entailment Task for Visually-Grounded Language Learning. In *NeurIPS*. Yuan Yao, Ao Zhang, Zhengyan Zhang, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2021. CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models. *ArXiv*. Yan Zeng, Xinsong Zhang, and Hang Li. 2021. MultiGrained Vision Language Pre-Training: Aligning Texts with Visual Concepts. *ArXiv*. Renrui Zhang, Wei Zhang, Rongyao Fang, Peng Gao, Kunchang Li, Jifeng Dai, Yu Qiao, and Hongsheng Li. 2022. Tip-Adapter: Training-Free Adaption of CLIP for Few-Shot Classification. In *ECCV*, page 493–510. Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2020. Extractive Summarization as Text Matching. In ACL, pages 6197–6208. Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. 2022a. Conditional prompt learning for vision-language models. In *CVPR*, pages 16816– 16825. Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. 2022b. Learning to Prompt for VisionLanguage Models. *IJCV*, pages 2337–2348. Beier Zhu, Yulei Niu, Yucheng Han, Yuehua Wu, and Hanwang Zhang. 2022. Prompt-aligned Gradient for Prompt Tuning. *ArXiv*. 
A Case Study ![5_image_0.png](5_image_0.png) Figure 3 presents the probability distributions of several images before and after fine-tuning of our approach. The constructed knowledge-base prototype matrix indeed captures the salient characteristics of categories. Based on the knowledge, images can be correctly classified even in zero-shot learning. After fine-tuning, the performance of XtremeCLIP is further boosted. Dataset Prompt template EuroSAT a centered satellite photo of {}. DTD {} texture. FGVC a photo of a {}, a type of aircraft. Table 5: The hard prompt templates for image classification datasets. {} denotes the position of the category names to be filled in. Table 6: Statistics of experimental datasets. \#Class: the number of task categories. \#Test: the number of test instances. ## B Experimental Details | Task | Dataset | # Class | # Test | |-------------------------------|-----------------------------|-----------|----------| | EuroSAT (Helber et al., 2019) | 10 | 8100 | | | IC | DTD (Cimpoi et al., 2014) | 47 | 1692 | | FGVC (Maji et al., 2013) | 102 | 3333 | | | VE | DNLI-VE (Xie et al., 2018) | 3 | 17901 | | VQA | VQA V2 (Goyal et al., 2017) | 2 | 80541 | ## B.1 Training Corpora We collect the pre-processed IC training corpora (i.e. FGVC (Maji et al., 2013), EuroSAT (Helber et al., 2019) and DTD (Cimpoi et al., 2014)) from the open-sourced project of (Zhang et al., 2022) on Github 2. The hand-crafted prompt templates that describe the category names for EuroSAT, DTD, and FGVC are listed in Table 5. During model training, we randomly select 8 and 16 images of each category for few-shot IC. For visual entailment and visual questionanswering tasks, we download the pre-processed SNLI-VE (Xie et al., 2018) and VQA v2 (Goyal et al., 2017) from the open-sourced project XVLM (Zeng et al., 2021) on Github 3and randomly select 2000, 5000, and 10000 samples from each dataset for low-resource VLU tasks. The statistics are listed in Table 6. ## B.2 Experimental Details Of Our Approach We employ ViT-B/16 from OpenAI CLIP 4as the default underlying model. We train XtremeCLIP by AdamW algorithm with β1 = 0.9, β2 = 0.999, ϵ = 1e − 4. The training is processed on an NVIDIA Tesla A100 GPU. We run XtremeCLIP 20 epochs for VE and VQA with a batch size of 16 and it takes around 20 minutes; and 100 epochs for IC with a batch size of 16 and it takes around 60 minutes. | Fusion Function | VE (10K) | VQA (10K) | FGVC (16) | | | | | |---------------------------------|------------|-------------|-------------|-------|---------|-----|------| | XtremeCLIP | 62.06 | 59.21 | 48.30 | | | | | | Quadratic | 58.98 | 53.64 | 45.27 | | | | | | Exponential | 57.45 | 52.41 | 42.52 | Model | EuroSAT | DTD | FGVC | | XtremeCLIP | 89.19 | 72.81 | 48.21 | | | | | | CoOp (Zhou et al., 2022b) | 84.87 | 62.57 | 37.48 | | | | | | Linear Probe (Gao et al., 2021) | 82.76 | 63.97 | 36.39 | | | | | Table 7: Accuracy (%) of XtremeCLIP utilizing various fusion fuctions. Quadratic for quadratic combination, Exponential for elementwise exponential operation. ## B.3 Experimental Details Of Baselines For full fine-tuning paradigms (i.e. Mixout (Lee et al., 2020), pre-trained weight decay (weight decay) (Lee et al., 2020), layerwise Learning rate decay (LLRD) (Clark et al., 2020)) and partial parameter fine-tuning paradigms (i.e. 
BiNor (Song et al., 2022), BitFit (Ben Zaken et al., 2022), Linear Probe (Radford et al., 2021)), we set the learning rate for CLIP parameters as 5e − 7 and the learning rate for classification head as 2e − 3 after the grid search. We train full fine-tuning baselines and partial fine-tuning baselines by AdamW algorithm with β1 = 0.9, β2 = 0.999, ϵ = 1e − 4. We run the aforementioned baselines 20 epochs for VE and VQA with a batch size of 16, and 100 epochs for IC with a batch size of 16. For CLIP-Adapter (Gao et al., 2021) 5and TipAdapter (Zhang et al., 2022) 6, we directly take their open-sourced codes on GitHub. Though TipAdapter is proposed for few-shot IC only, by replacing the image features with the visual-textual fusion features of the input image-text pairs when constructing the instance retrieval matrix, it can be directly utilized for other VLU tasks as well. To adapt CLIP-Adapter to VE and VQA, we respectively apply visual and textual adapter to the visual V and textual encoder T of CLIP to learn adaptive visual and textual features, then weight sum the adaptive visual and textual features with the original visual and textual feature from CLIP, following the original paper. Thereafter, we get the visual-textual fusion representations by the fusion function F. Finally, we perform image-text pair classification with the classification head as in Gao et al. (2021). ## B.4 Fusion Function Ablation Table 7 shows the results of XtremeCLIP with various fusion functions, including traditional higher order and element-wise exponential operations. The results indicate that the selected fusion func-5https://github.com/gaopengcuhk/CLIP-Adapter 6https://github.com/gaopengcuhk/Tip-Adapter Table 8: Accuracy (%) of XtremeCLIP,CoOp and Linear Probe on image classification tasks with 16-shot images of EuroSat, DTD and FGVC. tion, namely F in Eq. 2, is both simple and highly effective, outperforming the others. ## B.5 Additional Comparasion Table 8 presents the image classification results of XtremeCLIP and the baseline methods, namely CoOp (Zhou et al., 2022b), and Linear Probe (Gao et al., 2021), which are solely utilized for image classification. The results demonstrate that reformulating image classification as an open-book matching paradigm indeed helps XtremeCLIP consistently outperform CoOp and Linear Probe. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? The fifth section "Limitation" ✓ A2. Did you discuss any potential risks of your work? The fifth section "Limitation" ✓ A3. Do the abstract and introduction summarize the paper's main claims? The first section "Introduction" ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. 
Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** The Third Section "Experiments" ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? The third section "Experiments" and the Appendix B2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? the Appendix B2 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? The third section "Experiments" ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? The third section "Experiments" D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
li-etal-2023-factual
FACTUAL: A Benchmark for Faithful and Consistent Textual Scene Graph Parsing
https://aclanthology.org/2023.findings-acl.398
Textual scene graph parsing has become increasingly important in various vision-language applications, including image caption evaluation and image retrieval. However, existing scene graph parsers that convert image captions into scene graphs often suffer from two types of errors. First, the generated scene graphs fail to capture the true semantics of the captions or the corresponding images, resulting in a lack of faithfulness. Second, the generated scene graphs have high inconsistency, with the same semantics represented by different annotations. To address these challenges, we propose a novel dataset, which involves re-annotating the captions in Visual Genome (VG) using a new intermediate representation called FACTUAL-MR. FACTUAL-MR can be directly converted into faithful and consistent scene graph annotations. Our experimental results clearly demonstrate that the parser trained on our dataset outperforms existing approaches in terms of faithfulness and consistency. This improvement leads to a significant performance boost in both image caption evaluation and zero-shot image retrieval tasks. Furthermore, we introduce a novel metric for measuring scene graph similarity, which, when combined with the improved scene graph parser, achieves state-of-the-art (SOTA) results on multiple benchmark datasets for the aforementioned tasks.
# Factual: A Benchmark For Faithful And Consistent Textual Scene Graph Parsing Zhuang Li1,Yuyang Chai2,∗,Terry Yue Zhuo1,3,∗**,Lizhen Qu**1, Gholamreza Haffari1,Fei Li2,Donghong Ji2**,Quan Hung Tran**4 1Monash University, 2Wuhan University, 3CSIRO's Data61, 4Adobe Research {zhuang.li,terry.zhuo,lizhen.qu,Gholamreza.Haffari}@monash.edu, {yychai,lifei.csnlp,dhji}@whu.edu.cn, [email protected] ## Abstract Textual scene graph parsing has become increasingly important in various vision-language applications, including image caption evaluation and image retrieval. However, existing scene graph parsers that convert image captions into scene graphs often suffer from two types of errors. First, the generated scene graphs fail to capture the true semantics of the captions or the corresponding images, resulting in a lack of faithfulness. Second, the generated scene graphs have high inconsistency, with the same semantics represented by different annotations. To address these challenges, we propose a novel dataset, which involves re-annotating the captions in Visual Genome (VG) using a new intermediate representation called FACTUAL-MR. FACTUAL-MR can be directly converted into faithful and consistent scene graph annotations. Our experimental results clearly demonstrate that the parser trained on our dataset outperforms existing approaches in terms of faithfulness and consistency. This improvement leads to a significant performance boost in both image caption evaluation and zero-shot image retrieval tasks. Furthermore, we introduce a novel metric for measuring scene graph similarity, which, when combined with the improved scene graph parser, achieves stateof-the-art (SOTA) results on multiple benchmark datasets for the aforementioned tasks. The code and dataset are available at https: //github.com/zhuang-li/FACTUAL. ## 1 Introduction A scene graph is a representation that describes the contents of a visual scene, including objects, their attributes, and the relationships between them. The grounding of a scene graph with an image or a text can provide significant benefits for various vision-language tasks, such as image caption evaluation (Anderson et al., 2016) and image re- *The two authors contributed equally to this work. trieval (Johnson et al., 2015). Therefore, transduction of image descriptions into scene graphs through textual scene graph parsing has been a crucial vision-language research area. Accurately generating scene graphs that capture intersected information from images and their corresponding descriptions is crucial for a successful textual parser. However, current baseline parsers often generate *unfaithful* scene graphs that fail to represent the complete intersected information or generate semantically correct graphs, as shown in Figure 1. Furthermore, *inconsistencies* exist in the outputs of scene graph parsers, as depicted in the same figure, where "tennis" is interpreted as an attribute in one graph and as a part of an object in another graph. Such inconsistencies can severely impact downstream tasks of textual scene graph parsers, especially when they produce different graphs for a semantic unit, such as a phrase, across various captions, despite they carry the same semantic meaning. Upon inspection, we hypothesize that the issues of unfaithfulness and inconsistency arise due to the inherent shortcomings of scene graph parsing algorithms and limitations within the datasets. 
One widely utilized parser, SPICE-Parser (Anderson et al., 2016), is known for converting caption dependency graphs into scene graphs using predefined rules, which can result in error propagation. Furthermore, the dependency graphs may not adequately capture the semantic characteristics of scene graphs, as dependency graphs primarily focus on syntactical relationships. Additionally, the limitations of the datasets contribute to the problems as well. As demonstrated in Figure 1, the largest scene graph dataset, VG (Krishna et al., 2017), includes notable annotation issues regarding faithfulness and inconsistency. To address the aforementioned issues, we create a high-quality scene graph dataset for training parsers. We firmly believe that the problems of ![1_image_0.png](1_image_0.png) unfaithfulness and inconsistency within the dataset can be effectively resolved by incorporating two key measures: i) employing rigorous definitions for the literals and ii) implementing strict quality control during the annotation process. Therefore, we propose a novel intermediate meaning representation (MR) coined as FACTUAL-MR, which ensures FAithful and Consistent tex**TUAL** scene graph parsing. FACTUAL-MR is a semantic representation that can be deterministically mapped to the scene graph, thereby avoiding the issues that arise from converting syntactical graphs into scene graphs. The annotation of FACTUAL-MRs can be divided into manageable sub-tasks, allowing us to easily control the quality of annotations in each sub-task and ensure their faithfulness. Furthermore, the literals within the FACTUAL-MRs are precisely defined to ensure consistency in textual scene graph parsing annotations. As a result, we re-annotate captions sampled from the VG dataset with FACTUAL-MRs, enabling us to leverage the existing scene graph annotations from VG. Additionally, in order to further enhance the advantages provided by the scene graph parsing for its downstream tasks, we propose a simple yet effective metric called SoftSPICE. This metric calculates graph similarity and significantly improves the performance of vision-language tasks that leverage scene graphs. Overall, the key contributions are as follows: - We propose a novel intermediate representation, FACTUAL-MR, which can be easily annotated and converted into scene graphs. The annotation process of FACTUAL-MR could ensure the faithfulness and consistency of the scene graphs converted from FACTUAL-MR. TUAL, consisting of 40,369 parallel examples. We conduct thorough intrinsic and extrinsic evaluations to demonstrate that FACTUAL significantly improves the performance of textual scene graph parsing. - We propose a simple graph similarity metric, SoftSPICE, that achieves new SOTA results in image caption evaluation and zero-shot image retrieval tasks, when combined with a scene graph parser trained with FACTUAL. ## 2 Related Work Grounding a scene graph with an image or image description can be beneficial for a variety of downstream tasks, such as image retrieval (Andrews et al., 2019; Johnson et al., 2015), image caption evaluation (Anderson et al., 2016) and image captioning (Zhong et al., 2020). 
Currently, there are three main research directions to scene graph parsing: those that focus on parsing images (Zellers et al., 2018; Tang et al., 2020; Xu et al., 2017; Zhang et al., 2019a; Cong et al., 2022; Li et al., 2022), text (Anderson et al., 2016; Schuster et al., 2015; Wang et al., 2018; Choi et al., 2022; Andrews et al., 2019; Sharifzadeh et al., 2022), or both modalities (Zhong et al., 2021; Sharifzadeh et al., 2022) into scene graphs. Parsing images involves utilizing an object detection model to identify the location and class of objects, as well as classifiers to determine the relationships and attributes of the objects. Textual scene graph parsing employs techniques such as the Sequence-to-Sequence model (Sutskever et al., 2014) to parse image descriptions into linearized scene graphs (Sharifzadeh et al., 2022) or generate intermediate representations, such as dependency graphs or Abstract Meaning Representation (AMR) (Banarescu et al., 2013), - We construct a large-scale benchmark, FAC- which are then mapped into scene graphs using deterministic rules or machine learning models. However, directly utilizing intermediate representations like dependency graphs or AMR often leads to subpar performance in downstream tasks, as emphasized by Anderson et al. (2016), and may even be infeasible for multi-modal tasks requiring annotations for both modalities, given that the intermediate representations only annotate the text. Recent studies in parsing both modalities (Zhong et al., 2021; Sharifzadeh et al., 2022) have primarily utilized textual parsing models to enhance the performance of visual scene graph parsing. Our work primarily focuses on textual scene graph parsing. ## 3 Textual Scene Graph Parsing A scene graph, as introduced by Johnson et al. (2015), is a formal representation of the objects, their attributes, and the relationships between objects in a visual scene. Given a set of object classes C, a set of attribute types A, and a set of predicate types R, a scene graph G is defined as a tuple (*O, E*), where O = {o1*, ..., o*n} is a set of objects and E ∈ O × R × O is the set of edges connecting the objects. Each object oi = {ci, ai} is associated with an object class ci ∈ C and an attribute ai ∈ A. As depicted in Figure 1, our work linearizes a scene graph into a simplified format. In this format, each fact is represented either as (Object, Has_*attribute, Attribute*) or as (Objectsub, P redicate, Objectobj ), which is consistent with the format of the linearized scene graphs outlined in Choi et al. (2022); Sharifzadeh et al. (2022). Therefore, the textual scene parsing aims to learn a mapping πθ : *X → G*, which translates a textual image description X ∈ X into a scene graph G ∈ G. ## 3.1 Challenges Unfaithfulness. The scene graph faithfulness is determined by its completeness and correctness. Completeness is defined as the extent to which the graph conveys the complete semantic meaning of the intersected information from both the caption and the image. For example, Figure 1 demonstrates that the output of VG-T5 (Sharifzadeh et al., 2022) lacks the facts *(tennis player, hold, tennis racket)* and *(tennis balls, rest on, tennis racket)*, indicating an incomplete graph. This incompleteness issue of parsing outputs can be caused by the noisy training set from VG, which was generated without ![2_image_0.png](2_image_0.png) rigorous quality validation. The other datasets derived from VG also suffer from annotation noise. 
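Before turning to the individual datasets, the linearized scene graph format from Section 3 can be made concrete with a short sketch; the class name, the serialization format, and the example facts (taken from the discussion around Figure 1) are illustrative assumptions rather than the paper's released tooling.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    # A linearized scene graph fact: either
    #   (object, "has_attribute", attribute) or
    #   (subject object, predicate, object).
    head: str
    relation: str
    tail: str

def linearize(facts):
    # Serialize a scene graph as a flat, order-insensitive list of facts.
    return " , ".join(f"( {f.head} , {f.relation} , {f.tail} )" for f in facts)

# Facts discussed around Figure 1 for the tennis-player caption.
graph = [
    Fact("tennis player", "hold", "tennis racket"),
    Fact("tennis balls", "rest on", "tennis racket"),
    Fact("tennis balls", "has_attribute", "white"),
]
print(linearize(graph))
# ( tennis player , hold , tennis racket ) , ( tennis balls , rest on , tennis racket ) , ( tennis balls , has_attribute , white )
```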
The customized dependency (CDP) dataset (Wang et al., 2018) transforms VG scene graphs (VG-SGs) into customized dependency graphs by aligning phrases of objects, attributes, and relations in VG-SGs with corresponding phrases in captions. Consequently, the dependency graphs can be mapped back to scene graphs, referred to as CDP-SGs. Although this approach claims to enhance scene graph parsing performance by framing it as a dependency parsing problem, it results in the loss of additional information due to semantic misalignments between VG-SGs and the captions. As highlighted in Table 1, CDP-SGs have more serious completeness issues.

Correctness refers to the semantic accuracy of the graph with respect to the intersected information from the caption and the image. The annotation errors of VG contribute significantly to the correctness issues. As shown in Figure 1, the presence of the predicate "rest balls on ten" highlights a significant annotation mistake. Dependency-based parsing methods, such as SPICE-Parser, produce graphs that lack correctness primarily due to error propagation. As shown in Figure 1, the term "rest" is incorrectly considered an attribute of "racket" due to the parsing errors from the Stanford dependency parser (Manning et al., 2014). Another issue with dependency-based methods is that they focus on capturing syntactic relationships among words rather than semantic relationships among objects. Phrases such as "without leaves" or "without a shirt" indicate the absence of objects like "leaves" or "shirt" in the scene, but dependency-based methods still interpret them as objects.

Inconsistency. The inconsistency in the dataset is primarily the result of linguistic variations. The objects, attributes, and relations are all extracted from texts, but the same semantics can be expressed in multiple ways. For instance, (tennis player, hold, tennis racket) and (tennis racket, held by, tennis player) are semantically equivalent, even though the orders of the subjects and objects differ. Different understandings of the task among crowd workers are also a serious issue. Some may consider "stone wall" as a composite object, while others may consider "stone" as an attribute and "wall" as an object. To measure the consistency of the annotations, we have calculated diversity metrics for the objects, attributes, and predicates within a set of examples encompassing various types of annotations. We treat the diversity scores as an (inverse) proxy for the consistency of the annotations: higher diversity indicates lower consistency. As shown in Table 1, the results of the three diversity metrics indicate that the annotations in the VG and CDP datasets have a higher degree of diversity regarding their objects, attributes, and predicates than those in the FACTUAL dataset.

## 4 FACTUAL

## 4.1 Meaning Representation

We propose a novel intermediate *semantic* representation, FACTUAL-MR, in which elements are clearly defined to eliminate confusion among annotators. The task of annotating captions and their associated images with FACTUAL-MRs can be broken down into manageable sub-tasks, and each FACTUAL-MR can be deterministically mapped into a conventional scene graph, enabling the utilization of FACTUAL parser outputs in a wide range of multi-modal applications that rely on scene graphs. Specifically, the template of each fact in FACTUAL-MR is presented in one of two formats: $\{Object, Attribute\}$ or $\{Quantifier_{sub}, Object_{sub}, Verb, Preposition, Quantifier_{obj}, Object_{obj}\}$.

Object. An object in a scene graph is essentially defined as a grouping of concepts.
This results from the widely accepted notion in vision tasks that an image object typically encompasses a collection of homogeneous concepts within a bounding box (Krishna et al., 2017). Therefore, a common source of inconsistency in VG-SG is the various methods used to represent the quantity of objects. This can be attributed to the varying understandings of tasks among annotators. For example, as depicted in Figure 1, three trees may be represented as a single collective object contained within a large bounding box on an image, with the attribute of "three" (trees, has_attribute, three), or as three distinct objects of *tree* distributed throughout three facts in the visual scene. These different representations of object quantity can lead to inconsistencies. To address this, we propose defining each object in FACTUAL-MR as a grouping of collective concepts. To differentiate between two collective objects with identical names, unique suffix identifiers are utilized. For instance, the phrase "men watch men" would be represented as *(men, watch, men:1)*. Attribute. The attribute definition in FACTUALMR is similar to the original scene graph, with one notable distinction. In FACTUAL-MR, attributes are used to describe all individual concepts within each collective object. For example, in the case of *(3, tennis balls, has_attribute, white)*, it implies that all the tennis balls are white. Quantifier. The quantifier indicates the quantity of concepts within a collective object if the quantity is explicitly mentioned in the text. Additionally, a quantifier modifier may be used to specify the unit of measurement when explicit quantifier modifiers are present in the text. For instance, the phrase "both men" is expressed as "*2, men*" while "both groups of men" would be represented as "*2g, men*" and "both pairs of" as "2p". To avoid annotation inconsistencies, a limited set of pre-defined modifiers is provided. In cases where the quantity of objects cannot be expressed by the predefined set, two special quantities, "*many*" and "*unaccountable*", are offered as placeholders for annotators. Verb and Preposition. Given the linguistic variations present in VG, the number of relations exceeds 36,000. Through analysis, we have determined that the semantics of each relation can be composed of both a verb and a preposition or either one alone. To this end, we have decomposed these relations into their respective verbs and prepositions. In order to ensure consistency in annotation, a fixed list of verbs and prepositions with exclusive semantics is provided for the annotators to select from. To further facilitate consistency, all verbs are lemmatized to their original forms. The benefits of this decomposition method will be further explained in Section 4.3. Additionally, the verb's voice plays a crucial role in the semantics of a fact. For example, the phrases "cup covered with blanket" and "cup covers blanket" possess distinct semantic meanings. To prevent ambiguity during annotation, an indicator, "p:", is used as a prefix to the verb to indicate whether it is in a passive voice. ## 4.2 Connection To Scene Graph To map a FACTUAL-MR into the original scene graph, we first combine the verb and prepositions into a predicate. The voice of the verb is altered based on whether it is passive or active. However, as the object in our annotation is collective, a collective-distributive ambiguity is present in the sentence, as also highlighted by Schuster et al. (2015). 
For instance, given an image describing "three men reading books", we can know which man is reading which book according to the image, while in the image caption, the information is insufficient to determine this. Previous approaches, such as SPICE (Anderson et al., 2016) and Stanford (Schuster et al., 2015) parsers, address this issue using heuristic rules. The SPICE-Parser considers all relations between two collective objects as collective, leading to the phrase being expressed as *(men, reading, books), (men, has_attribute, 3)*. However, this annotation type is not commonly used as annotators tend to annotate relations distributedly in the VG-SG annotations. Another option, adopted by the Stanford parser, is to consider all these cases as distributive behaviours, resulting in the phrase being expressed as "(man, reading, book), (man:1, reading, book), (man:2, reading, book)". This may also be incorrect, as three men might read two books. Therefore, in such cases, we improve this heuristic by utilizing our annotated quantifiers. We annotate the implicit quantifiers for the "books" according to the image content. If FACTUAL-MR annotates the number of books as three, we know that each man is distributedly reading one book. Otherwise, they are collectively engaging in the activity. ## 4.3 Annotation Our annotation process consists of two stages. In the first stage, we carefully selected approximately 44,000 captions, with each caption aligned to a distinct image, to ensure diversity in our FACTUAL dataset derived from the VG dataset. We hired 25 annotators with diverse backgrounds, either through Amazon Mechanical Turk (Paolacci et al., 2010) or from local undergraduate students, and provided them with one-hour training sessions to ensure consistent annotation practices. Throughout the annotation process, both the images and captions were presented to the annotators to ensure the faithfulness of the annotations to both modalities. Each annotator was reimbursed at a rate of 0.25 USD per task. In the second stage, three expert annotators with a high level of agreement in their annotations performed post-processing and verification steps to ensure the quality of the data. After undergoing the quality check, we retained 40,369 examples in the dataset. Object and Attribute. The annotation process for objects and attributes involved extracting information from the captions to ensure faithfulness to the text while utilizing the image to resolve any linguistic ambiguities. For example, in the caption, "the picture depicts a car" it is unclear whether the image includes an object labelled as "picture" or if the caption is referring to the image itself as a "picture" without the context of the image. Furthermore, during the training, the annotators were also instructed to extract the objects for the coreferences, such as the pronoun "it" mentioned in the captions. Quantifier. Regarding quantifiers, the annotators could only select from the pre-determined sets of quantities and quantity modifiers. If an exact match of a modifier was not found, the annotators were instructed to choose the modifier with the equivalent semantic meaning to the modifier in the text. In most cases, only the quantity was annotated when the number of objects was explicitly mentioned. However, exceptions were made for cases involving collective-distributive ambiguity, requiring the annotations of implicit quantities. Verb and Preposition. 
To ensure consistency in the predicate annotations, the annotators were instructed to select from a pre-determined set of predicates rather than writing them on their own. However, the predicates in the VG dataset were not mutually exclusive in semantics. Therefore, we implemented a process of partitioning them into 1000 clusters using K-means, followed by manually selecting around 2000 predicates by observing the clusters. Despite this pruning, the large number of remaining predicates still posed a challenge for annotators to make selections. Therefore, the predicates1 were further decomposed into around 400 verbs and 100 prepositions. For each selection slot, verbs and prepositions were ranked using an information retrieval method, and the annotators 1Please note that in some predicates, there are only verbs or only prepositions. ![5_image_0.png](5_image_0.png) Table 2: The statistics about the number of distinct labels and occurrence (occ.) of the various elements in the 40,369 FACTUAL-MRs. For simplicity, we omit their suffixes when calculating the occurrence of quantifiers. were asked to select from the 20 most probable candidates. Annotators were specifically instructed to annotate verbs in the active voice whenever possible. For example, if both active and passive voices were possible for annotation, as seen in the phrases "blanket covering cup" and "cup covered with a blanket", both should be annotated as (blanket, cover, cup). However, in cases where only the passive voice construction was syntactically and semantically valid, such as in the example "cup filled with water," it should be annotated as (cup, p:fill, with, water) since *(water, fill, cup)* would not be appropriate. ## Post-Processing And Verification. In the second stage, three expert annotators conducted a thorough examination of all cases to verify and rectify annotation errors. Particular attention was paid to identifying and correcting any incorrect annotations related to passive and active voice, as well as quantifiers and their modifiers. Furthermore, in cases where captions did not include specific name phrases for objects but only pronouns, those pronouns were converted into object names. For example, in the sentence "he is walking" where "he" was annotated as an object, it was resolved to "man." Additionally, any annotations that were entirely unrelated to the text and images were discarded. ## 4.4 Statistical Analysis Of Dataset We present a statistical overview of the FACTUAL dataset, which comprises 40,369 distinct captions and includes over 4,000 unique object labels with a total occurrence of 116,712. On average, each object label appears approximately 28 times throughout the dataset. Notably, prepositions occur more frequently compared to verbs, although there are four times as many distinct verb labels compared to the number of distinct prepositions. Furthermore, each fact within the dataset tends to be unique within a single caption, with an average occurrence of fewer than two times. Upon analyzing the scene level, we find that, on average, at least two distinct objects are present in each scene. However, there are much fewer distinct verbs, prepositions, and attributes. It is worth highlighting that quantifiers play a relatively minor role in the dataset, as most collective objects described in the image captions consist of only one individual object. ## 5 Experiments We evaluate the effectiveness of our new scene graph benchmark through one intrinsic evaluation and two extrinsic evaluation tasks. 
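Before describing the individual tasks, the deterministic FACTUAL-MR-to-scene-graph mapping of Section 4.2 can be sketched roughly as follows; the helper name, the reuse of the ":i" suffix identifiers, and the simplified collective-distributive heuristic are illustrative assumptions, not the exact released conversion rules.

```python
def mr_fact_to_triples(q_sub, obj_sub, verb, prep, q_obj, obj_obj):
    # Combine the verb and preposition into a single predicate; a verb
    # prefixed with "p:" marks the passive voice, so a real converter
    # would also inflect it (e.g., "p:fill" + "with" -> "filled with").
    if verb.startswith("p:"):
        predicate = f"{verb[2:]} {prep}".strip()
    else:
        predicate = f"{verb} {prep}".strip()

    # Collective-distributive resolution via the annotated quantifiers: if
    # both sides carry the same explicit count n, distribute the relation
    # over n pairs (reusing the ":i" suffix identifiers); otherwise keep a
    # single collective relation between the two groups.
    if str(q_sub).isdigit() and q_sub == q_obj:
        n = int(q_sub)
        subs = [obj_sub] + [f"{obj_sub}:{i}" for i in range(1, n)]
        objs = [obj_obj] + [f"{obj_obj}:{i}" for i in range(1, n)]
        return list(zip(subs, [predicate] * n, objs))
    return [(obj_sub, predicate, obj_obj)]

# "Three men reading three books": one (man, read, book) triple per man/book pair.
print(mr_fact_to_triples(3, "man", "read", "", 3, "book"))
```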
## 5.1 Textual Scene Graph Parsing Task Setting. Following Schuster et al. (2015); Wang et al. (2018); Choi et al. (2022), we construct scene graph parsers to translate textual descriptions of image regions into scene graphs, which are then compared against their respective ground truth scene graphs. Datasets. In terms of datasets, our evaluations are conducted on the VG (Krishna et al., 2017), CDP (Wang et al., 2018), and FACTUAL dataset. The VG dataset comprises 108,077 images and 5.4 million region captions. The CDP dataset converts all scene graphs in VG into a customized dependency graph, which has a one-to-one mapping to the original scene graphs. We report the performance of the parsers on two data splits for each dataset representation. For the FACTUAL dataset, we consider a random split (Random), which includes 37,861 training, 1,000 validation, and 1,508 test examples. Additionally, we also evaluate a more challenging split (Length) to assess the parsers' compositional generalization abilities. The benchmark test set for this split comprises 1,053 examples. The caption of each example includes more than ten caption tokens and three facts in the corresponding scene graphs. The remaining examples are split into 38,316 training and 1,000 validation examples. The test examples for VG and CDP consist of captions from the Random and Length splits of FACTUAL, while the remaining examples are divided into a validation set of 1,000 and a training set of over 2 million. Baselines. In this study, we evaluated the performance of five parsers: **SPICE-Parser** (Anderson et al., 2016), **AMR-SG-T5** (Choi et al., 2022), CDP-T5 (Choi et al., 2022), **VG-T5** (Sharifzadeh et al., 2022), and **FACTUAL-T5**. SPICE utilizes a set of rules to convert dependency graphs of captions into scene graphs. AMR-SG-T5 converts captions into AMRs through the use of AMRBART (Bai et al., 2022), and subsequently converts the AMRs into CDP-SG format by using a T5 (Raffel et al., 2020) model. CDP-T5 directly converts captions into CDP-SGs without the intermediate steps. In contrast to the original CDPto-SG parser (Wang et al., 2018), which relies on intermediate representation, CDP-T5 demonstrates significantly better performance (Choi et al., 2022). VG-T5, trained on the VG, parses captions into VG-SGs. FACTUAL-T5 parses captions into FACTUAL-SGs and maps them into scene graphs in a collective way. FACTUAL-T5 (pre) was first pre-trained on the VG dataset and then fine-tuned on FACTUAL. As different datasets use different annotations, SPICE2, AMR-SG-T5 and CDP-T5 are evaluated against the ground truth of the CDP dataset, while VG-T5 and FACTUAL-T5 are evaluated against the ground truth VG-SGs and FACTUAL-SGs. Evaluation. Following Schuster et al. (2015); Wang et al. (2018); Choi et al. (2022), we evaluate scene graph parsers utilizing the SPICE metric (Anderson et al., 2016). The SPICE F-score measures the similarity between the candidate and ground truth graph representations extracted from captions by the parsers. In addition, we also employ the Exact Set Match metric (Yu et al., 2019), which assesses the accuracy of the parsers by determining whether the strings of the parsed facts match the ground truth facts while disregarding the order of the facts. During the evaluation, all intermediate representations are converted into scene graphs. We also evaluate the faithfulness and consistency of parser outputs by human evaluation and automatic lexical diversity metrics, respectively. 
Specifically, three students manually examine the rates of correctness and completeness of the parsing outputs, and we report the average scores. We employ Yules I (Yule, 2014), TTR (Templin, 1957), and MTLD (Koehn, 2005) to evaluate the lexical diversity of objects, attributes, and predicates, which indicate consistency of the output scene graphs. | Parser | Random | Length | | | |------------------|----------|-----------|-------|-------| | Set Match | SPICE | Set Match | SPICE | | | SPICE-Parser | 13.00 | 56.15 | 0.94 | 38.04 | | AMR-SG-T5 | 28.45 | 64.82 | 12.16 | 51.71 | | CDP-T5 | 46.15 | 73.56 | 26.50 | 61.21 | | VG-T5 | 11.54 | 47.46 | 2.94 | 42.98 | | FACTUAL-T5 (pre) | 79.77 | 92.91 | 42.35 | 82.43 | | FACTUAL-T5 | 79.44 | 92.23 | 38.65 | 80.76 | | Faithfulness ↑ | Consistency ↓ | | | | | |------------------|-----------------|---------|------|-------|-------| | Completeness | Correctness | Yules I | TTR | MTLD | | | SPICE-Parser | 49% | 57% | 1.56 | 10.26 | 14.87 | | AMR-SG-T5 | 31% | 71% | 2.85 | 15.45 | 22.56 | | CDP-T5 | 28% | 86% | 3.64 | 16.57 | 23.96 | | VG-T5 | 51% | 47% | 0.37 | 5.27 | 10.59 | | FACTUAL-T5 (pre) | 92% | 93% | 2.76 | 13.55 | 15.30 | Discussion. As shown in Table 3, the FACTUALT5 and FACTUAL-T5 (pre) models demonstrate a clear superiority over other parsers regarding Set Match and SPICE scores. Notably, the FACTUALT5 model, which utilizes the T5 architecture, outperforms other T5-based baselines trained on millions of data points with different annotations. This highlights the effectiveness of the FACTUAL benchmark in generating outputs that are wellaligned with ground truth annotations. In the more challenging Length setting, all parsers experience a decline regarding parsing text into ground truth scene graphs. However, the FACTUAL-T5 model has the least drop among all parsers. Furthermore, pre-training the FACTUAL-T5 model on millions of VG data points only results in a slight improvement in the Length split. This indicates that a dataset as small as 40,000 high-quality examples is sufficient to yield a competent parser. The SPICE-Parser has become the most frequently utilized parser in vision-language tasks. However, as shown in Table 3, it is unable to align with the CDP-SG in either of the two settings. However, this does not necessarily imply that the SPICEParser is the worst among the parsers, as the oracle CDP-SGs have a high degree of noise as well, as demonstrated in Table 1. Our human evaluation of the faithfulness of the parsing results, as presented in Table 4, indicates that the SPICE-Parser can perform comparably with the VG-T5 model and outperform the CDP-T5 model in terms of completeness. 
Furthermore, our subsequent extrinsic evaluation also shows that the SPICE-Parser is the | Metric | Parser | Flicker8K | FOIL (1-ref) | FOIL (4-ref) | | |-----------------|--------------|-------------|----------------|----------------|-------| | τc ↑ | ρ ↑ | Acc ↑ | Acc ↑ | | | | SPICE | SPICE-Parser | 44.77 | 60.11 | 76.31 | 87.02 | | CDP-T5 | 33.50 | 49.50 | 65.66 | 72.76 | | | VG-T5 | 37.18 | 51.94 | 68.43 | 76.12 | | | FACTUAL-T5(pre) | 45.12 | 60.78 | 76.69 | 86.88 | | | SoftSPICE | SPICE-Parser | 51.897 | 68.118 | 78.53 | 86.77 | | CDP-T5 | 45.54 | 59.64 | 53.58 | 59.49 | | | VG-T5 | 39.66 | 53.05 | 70.80 | 76.77 | | | FACTUAL-T5(pre) | 53.35 | 69.52 | 85.66 | 91.61 | | SPICESPICE-Parser 44.77 60.11 76.31 **87.02** CDP-T5 33.50 49.50 65.66 72.76 VG-T5 37.18 51.94 68.43 76.12 FACTUAL-T5(pre) 45.12 60.78 **76.69** 86.88 SoftSPICESPICE-Parser 51.897 68.118 78.53 86.77 CDP-T5 45.54 59.64 53.58 59.49 VG-T5 39.66 53.05 70.80 76.77 FACTUAL-T5(pre) 53.35 69.52 **85.66 91.61** Table 5: (Left) The correlation scores between SPICE or SoftSPICE with the human judgment. (Right) The accuracies of the metrics w.r.t. detecting the hallucinated sentences. second-best parser among the parsers evaluated. Table 4 also illustrates that our parser performs much better than the other baselines in terms of faithfulness while ranking second in terms of consistency. Interestingly, the VG-T5 model exhibits the best performance in consistency. However, its ORACLE annotations are more inconsistent than ours. Our analysis reveals that the VG-T5 prioritizes predicting scene graphs with simple lexicons and discards more complex patterns, resulting in its strong performance in consistency but much weaker performance in faithfulness metrics. ## 5.2 Image Caption Evaluation Task Setting. To assess the quality of the modelgenerated captions regarding a set of reference captions and an image, we adopt the SPICE and SoftSPICE metrics to calculate a graph similarity between graphs extracted from the candidate and reference captions. As these metrics are based on the parser outputs, a *better* parser will result in scores that more closely align with human judgment. Evaluation. Following Hessel et al. (2021), we employ two evaluation settings. The first setting involves calculating the correlation of the scores with human judgment utilizing Kendall's τ and Pearson correlation on the Flicker8K dataset (Hodosh et al., 2013). The Flicker8K dataset includes 17k "expert" human judgments for 5664 images, with each caption being rated on a scale of 1 to 4 against five reference captions. In the second setting, we utilize one (1-ref) or four (4-ref) reference captions sourced from the FOIL dataset (Shekhar et al., 2017). This dataset consists of 32k pairs of true captions and their corresponding corrupted versions, where a single word is replaced with an incorrect one. The objective is to assess the accuracy of each image caption evaluation metric in identifying and assigning higher scores to the uncorrupted captions. This setting aims to evaluate the metric's ability to detect instances of sentence hallucination effectively. SoftSPICE. SPICE calculates the similarity between two graphs by matching strings of subcomponents within the graphs. These subcomponents include *objects*, tuples *{object, attribute}* and triples *{object, predicate, object}*. To improve SPICE, we propose an alternative method that utilizes embedding-based techniques to calculate string similarity. 
This approach involves decomposing each graph into the aforementioned sub-components and encoding the text of each component using Sentence-BERT (Reimers and Gurevych, 2019). The resulting similarity score, coined SoftSPICE, is as follows:

$$\phi_{s}(G_{c},G_{r})=\frac{1}{|\mathcal{V}_{c}|}\sum_{\mathbf{e}_{c}\in\mathcal{V}_{c}}\max_{\mathbf{e}_{r}\in\mathcal{V}_{r}}\cos(\mathbf{e}_{c},\mathbf{e}_{r})\tag{1}$$

where e denotes the embedding of a component, and Vc and Vr denote the sets of embeddings encoding the components within the candidate and reference graphs, respectively.

Additionally, we can also use the image I to compute a **SoftSPICE(img)** score, denoted as φi(Gc, I). This score is computed by combining the embeddings of the graph components and the image:

$$\phi_{i}^{\prime}(G_{c},I)=\frac{1}{|\mathcal{V}_{c}|}\sum_{\mathbf{e}_{c}\in\mathcal{V}_{c}}\cos(\mathbf{e}_{c},\mathbf{e}_{I})\tag{2}$$

$$\phi_{i}(G_{c},I)=\frac{2\cdot\phi_{s}(G_{c},G_{r})\cdot\phi_{i}^{\prime}(G_{c},I)}{\phi_{s}(G_{c},G_{r})+\phi_{i}^{\prime}(G_{c},I)}\tag{3}$$

where ec and eI are obtained by encoding the sub-components and the images with CLIP.

Discussion. Table 5 illustrates that FACTUAL-T5 demonstrates improvement over other parsers in terms of enhancing the correlation of SPICE and SoftSPICE scores with human judgments. However, when using SPICE to detect hallucinated instances, our parser performs comparably to the SPICE-Parser. We attribute this to the fact that approximately one-third of the pairs have tied SPICE scores due to the use of exact string matching. On the other hand, when using the embedding-based metric, SoftSPICE, the superiority of our parser on FOIL is revealed. Currently, SPICE computed with the SPICE-Parser has been the common standard in image caption evaluation, and we are confident that our parser can be a suitable replacement for the SPICE-Parser.

We also compare SoftSPICE with current SOTA image caption evaluation metrics, namely BERTScore (Zhang et al., 2019b), CLIPScore, and RefCLIPScore. These metrics calculate the similarity between the embeddings of the candidate caption and the embeddings of the reference captions, the image, and both the reference captions and images, respectively.

![8_image_0.png](8_image_0.png)

Table 6: The results comparing SoftSPICE with current SOTA image caption evaluation metrics. We use FACTUAL-T5 as the parser for SoftSPICE.

As shown in Table 6, SoftSPICE performs comparably with all the SOTA methods when there are over four reference captions, and with the inclusion of image information, SoftSPICE(img) can even outperform SOTA results on Flicker8K. We also observed that the scene graph features could be a useful supplement to caption-level features: by taking the harmonic mean of SoftSPICE(img) with BERTScore and RefCLIPScore, both metrics achieve new SOTA results.

## 5.3 Zero-Shot Image Retrieval

Task Setting. The goal of image retrieval is to identify and retrieve an image that precisely corresponds to a given textual query description. This is typically accomplished by allocating scores to images based on their relevance to the query and selecting the top k images. Following the setting from Johnson et al. (2015); Wang et al. (2018), we have selected 456 captions and their corresponding images from the Random and Length test sets, initially prepared for intrinsic evaluation.
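Since the retrieval evaluation below ranks images with SoftSPICE (or CLIPScore), a minimal sketch of the text-side score φs from Eq. (1) may help make the computation concrete. This is an illustration under assumptions rather than the released implementation: it assumes the graphs have already been decomposed into component strings (e.g., "dog", "dog brown", "dog chase ball"), and it uses the sentence-transformers package with the all-MiniLM-L6-v2 checkpoint as a stand-in Sentence-BERT encoder.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in Sentence-BERT checkpoint

def soft_spice(cand_components, ref_components):
    """Eq. (1): mean over candidate components of the max cosine similarity
    to any reference component. Components are strings obtained by decomposing
    the scene graphs into objects, {object, attribute} and {object, predicate, object}."""
    e_c = model.encode(cand_components, normalize_embeddings=True)
    e_r = model.encode(ref_components, normalize_embeddings=True)
    sims = e_c @ e_r.T                 # cosine similarities (embeddings are unit-norm)
    return float(sims.max(axis=1).mean())

# Toy example.
cand = ["dog", "dog brown", "dog chase ball"]
ref = ["puppy", "puppy brown", "puppy play with ball", "ball"]
print(round(soft_spice(cand, ref), 3))
```

With this score in hand, the retrieval protocol proceeds as follows.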
These captions serve as queries to retrieve their associated images, forming the basis for evaluating the performance of our image retrieval system. We proceed under the assumption that an oracle scene graph corresponding to each selected image is available. Furthermore, we introduce a '*Local*' setting, which provides access to the coordinates of a bounding box within each image that corresponds to each caption and the ground truth scene graph aligned with this bounding box region. Evaluation. During the evaluation, the scene graph of the captions is generated using various baseline parsing methods. The 456 images are ranked according to the similarity scores computed ![8_image_2.png](8_image_2.png) ![8_image_3.png](8_image_3.png) R@1 R@5 R@1 R@5 Local. SoftSPICE ![8_image_1.png](8_image_1.png) SPICE-Parser 67.76 84.87 67.54 81.80 CDP-T5 72.59 88.16 62.28 80.70 VG-T5 49.56 68.86 58.77 74.34 FACTUAL-T5 79.39 92.32 **75 87.06** using either the SoftSPICE or CLIPScore between each image and the caption. Notably, the representation encoders employed in both similarity measurements are not fine-tuned on the in-domain dataset. The performance of various methods is assessed using the Recall@k metric. The performance of different methods is assessed using the Recall@k metric, which indicates the percentage of caption queries where the top k retrieved images, given a specific query, include the ground truth. Discussion. As observed in Table 7, FACTUALT5 consistently outperforms other baselines in zeroshot image retrieval tasks, highlighting the superiority of our dataset and parser. The performance of both SoftSPICE and CLIPScore is generally enhanced by incorporating location information of the bounding boxes, depicting that more accurate information could boost image retrieval. Moreover, when combined with all available parsers, SoftSPICE demonstrates significantly superior performance compared to CLIPScore, emphasizing the substantial potential benefits of utilizing structured information for image retrieval. ## 6 Conclusion We introduce a new intermediate representation, coined FACTUAL-MR, which aims to address the issues of faithfulness and consistency for textual scene graph parsers. By utilizing a rigorous annotation process, it is possible to create a large-scale dataset based on FACTUAL-MR. Our experiments demonstrate that FACTUAL-T5, trained on this dataset, is capable of generating consistent scene graphs that are highly faithful to corresponding images and captions. Utilizing a novel graph similarity metric, SoftSPICE, FACTUAL-T5 significantly improve performance in both image caption evaluation and zero-shot image retrieval. ## 7 Limitations Despite the significant advancements made by the proposed FACTUAL-MR representation in addressing the limitations of current scene graph parsing datasets, there remain several areas for future research. First, FACTUAL-MR currently relies on heuristic rules to resolve the collective-distributive ambiguity as introduced in Section 4.2. However, the limitations still remain due to the ambiguity of language. To obtain a perfect parser, rich-world knowledge from multi-modalities or textual context (Li et al., 2020) is required, which is left as our future work. Second, there is currently no explicit alignment between objects represented within FACTUALMR and the corresponding bounding boxes in the image. To fully utilize multi-modal information, collecting such alignments may be necessary. 
Third, the proposed method utilizes ORACLE scene graphs of the image, however, in practical applications, extracting a scene graph from an image remains a challenging problem. Further research is required to determine if utilizing a visual scene graph parsing model to extract scene graphs from images would negatively impact image retrieval performance. Lastly, our current approach utilizes a large pretrained language model to train the parser. However, the issue of robustness in parsers (Huang et al., 2021; Zhuo et al., 2023) has always been a significant concern. The captions in the VG dataset mainly consist of short sentences with simple patterns. It remains unclear whether the parser is robust enough to handle sentences with more complex linguistic variations, which calls for further investigation. ## Acknowledgments We would like to express our gratitude to Weibo Shi for his valuable assistance in conducting our human evaluation works. We also extend our appreciation to Adobe Inc. for their generous funding support in data collection. Additionally, we would like to thank Wuhan University for their valuable assistance in identifying students to assist with data annotation. ## References Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. Spice: Semantic propositional image caption evaluation. In *European conference* on computer vision, pages 382–398. Springer. Martin Andrews, Yew Ken Chia, and Sam Witteveen. 2019. Scene graph parsing by attention graph. *arXiv* preprint arXiv:1909.06273. Xuefeng Bai, Yulong Chen, and Yue Zhang. 2022. Graph pre-training for AMR parsing and generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6001–6015, Dublin, Ireland. Association for Computational Linguistics. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th linguistic annotation workshop and interoperability with discourse, pages 178–186. Woo Suk Choi, Yu-Jung Heo, Dharani Punithan, and Byoung-Tak Zhang. 2022. Scene graph parsing via Abstract Meaning Representation in pre-trained language models. In *Proceedings of the 2nd Workshop* on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022), pages 30–35, Seattle, Washington. Association for Computational Linguistics. Yuren Cong, Michael Ying Yang, and Bodo Rosenhahn. 2022. Reltr: Relation transformer for scene graph generation. *arXiv preprint arXiv:2201.11460*. Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. 2021. Clipscore: A reference-free evaluation metric for image captioning. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 7514–7528. Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. *Journal of* Artificial Intelligence Research, 47:853–899. Shuo Huang, Zhuang Li, Lizhen Qu, and Lei Pan. 2021. On robustness of neural semantic parsers. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3333–3342. Justin Johnson, Ranjay Krishna, Michael Stark, Li-Jia Li, David Shamma, Michael Bernstein, and Li FeiFei. 2015. Image retrieval using scene graphs. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3668–3678. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of machine translation summit x: papers, pages 79–86. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123(1):32– 73. Rongjie Li, Songyang Zhang, and Xuming He. 2022. Sgtr: End-to-end scene graph generation with transformer. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pages 19486–19496. Zhuang Li, Lizhen Qu, and Gholamreza Haffari. 2020. Context dependent semantic parsing: A survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2509–2521. Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pages 55–60. Gabriele Paolacci, Jesse Chandler, and Panagiotis G Ipeirotis. 2010. Running experiments on amazon mechanical turk. *Judgment and Decision making*, 5(5):411–419. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992. Sebastian Schuster, Ranjay Krishna, Angel Chang, Li Fei-Fei, and Christopher D Manning. 2015. Generating semantically precise scene graphs from textual descriptions for improved image retrieval. In Proceedings of the fourth workshop on vision and language, pages 70–80. Sahand Sharifzadeh, Sina Moayed Baharlou, Martin Schmitt, Hinrich Schütze, and Volker Tresp. 2022. Improving scene graph classification by exploiting knowledge from texts. In *Proceedings of the AAAI* Conference on Artificial Intelligence, volume 36, pages 2189–2197. Ravi Shekhar, Sandro Pezzelle, Yauhen Klimovich, Aurélie Herbelot, Moin Nabi, Enver Sangineto, and Raffaella Bernardi. 2017. Foil it! find one mismatch between image and language caption. In *Proceedings of the 55th Annual Meeting of the Association for* Computational Linguistics (Volume 1: Long Papers), pages 255–265. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. Advances in neural information processing systems, 27. Kaihua Tang, Yulei Niu, Jianqiang Huang, Jiaxin Shi, and Hanwang Zhang. 2020. Unbiased scene graph generation from biased training. In Conference on Computer Vision and Pattern Recognition. Mildred C Templin. 1957. Certain language skills in children: Their development and interrelationships, volume 10. JSTOR. Yu-Siang Wang, Chenxi Liu, Xiaohui Zeng, and Alan Yuille. 2018. Scene graph parsing as dependency parsing. In *Proceedings of NAACL-HLT*, pages 397– 407. Danfei Xu, Yuke Zhu, Christopher B Choy, and Li FeiFei. 2017. 
Scene graph generation by iterative message passing. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5410–5419. Tao Yu, Rui Zhang, Michihiro Yasunaga, Yi Chern Tan, Xi Victoria Lin, Suyi Li, Heyang Er, Irene Li, Bo Pang, Tao Chen, et al. 2019. Sparc: Cross-domain semantic parsing in context. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics, pages 4511–4523. C Udny Yule. 2014. The statistical study of literary vocabulary. Cambridge University Press. Rowan Zellers, Mark Yatskar, Sam Thomson, and Yejin Choi. 2018. Neural motifs: Scene graph parsing with global context. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pages 5831–5840. Ji Zhang, Kevin J Shih, Ahmed Elgammal, Andrew Tao, and Bryan Catanzaro. 2019a. Graphical contrastive losses for scene graph parsing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11535–11543. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019b. Bertscore: Evaluating text generation with bert. In *International Conference on Learning Representations*. Yiwu Zhong, Jing Shi, Jianwei Yang, Chenliang Xu, and Yin Li. 2021. Learning to generate scene graph from natural language supervision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1823–1834. Yiwu Zhong, Liwei Wang, Jianshu Chen, Dong Yu, and Yin Li. 2020. Comprehensive image captioning via scene graph decomposition. In *European Conference* on Computer Vision, pages 211–229. Springer. Terry Yue Zhuo, Zhuang Li, Yujin Huang, Fatemeh Shiri, Weiqing Wang, Gholamreza Haffari, and YuanFang Li. 2023. On robustness of prompt-based semantic parsing with large pre-trained language model: An empirical study on codex. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 1090– 1102. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? In the limitation section. ✗ A2. Did you discuss any potential risks of your work? There is no risk. ✓ A3. Do the abstract and introduction summarize the paper's main claims? In the abstract and introduction sections. ✓ A4. Have you used AI writing assistants when working on this paper? Grammarly and Quillbot to fix my writing errors and improve my writing. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. ✗ B1. Did you cite the creators of artifacts you used? Left blank. ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. 
Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** In The Experiments. ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No space. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✗ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No space. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? In the experiment. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? In the experiment. D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** ✓ ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? In Section 4. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? In Section 4. ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? In Section 4. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No ehtic concern. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? In Section 4.
zhang-etal-2023-target
Target-Oriented Relation Alignment for Cross-Lingual Stance Detection
https://aclanthology.org/2023.findings-acl.399
Stance detection is an important task in text mining and social media analytics, aiming to automatically identify the user's attitude toward a specific target from text, and has wide applications in a variety of domains. Previous work on stance detection has mainly focused on monolingual setting. To address the problem of imbalanced language resources, cross-lingual stance detection is proposed to transfer the knowledge learned from a high-resource (source) language (typically English) to another low-resource (target) language. However, existing research on cross-lingual stance detection has ignored the inconsistency in the occurrences and distributions of targets between languages, which consequently degrades the performance of stance detection in low-resource languages. In this paper, we first identify the target inconsistency issue in cross-lingual stance detection, and propose a fine-grained Target-oriented Relation Alignment (TaRA) method for the task, which considers both target-level associations and language-level alignments. Specifically, we propose the Target Relation Graph to learn the in-language and cross-language target associations. We further devise the relation alignment strategy to enable knowledge transfer between semantically correlated targets across languages. Experimental results on the representative datasets demonstrate the effectiveness of our method compared to competitive methods under variant settings.
## Target-Oriented Relation Alignment For Cross-Lingual Stance Detection Ruike Zhang1,2, Nan Xu1,3∗, Hanxuan Yang2,1, Yuan Tian1,2, Wenji Mao1,2∗ 1Institute of Automation, Chinese Academy of Sciences 2School of Artificial Intelligence, University of Chinese Academy of Sciences 3Beijing Wenge Technology Co., Ltd. {zhangruike2020,xunan2015,yanghanxuan2020, tianyuan2021,wenji.mao}@ia.ac.cn ## Abstract Stance detection is an important task in text mining and social media analytics, aiming to automatically identify the user's attitude toward a specific target from text, and has wide applications in a variety of domains. Previous work on stance detection has mainly focused on monolingual setting. To address the problem of imbalanced language resources, crosslingual stance detection is proposed to transfer the knowledge learned from a high-resource (source) language (typically English) to another low-resource (target) language. However, existing research on cross-lingual stance detection has ignored the inconsistency in the occurrences and distributions of targets between languages, which consequently degrades the performance of stance detection in lowresource languages. In this paper, we first identify the target inconsistency issue in crosslingual stance detection, and propose a finegrained Target-oriented Relation Alignment (TaRA) method for the task, which considers both target-level associations and languagelevel alignments. Specifically, we propose the Target Relation Graph to learn the in-language and cross-language target associations. We further devise the relation alignment strategy to enable knowledge transfer between semantically correlated targets across languages. Experimental results on the representative datasets demonstrate the effectiveness of our method compared to competitive methods under variant settings. ## 1 Introduction Stance detection is an important task in public opinion mining and social media analytics, which aims to automatically identify the user's attitude (e.g., "*in favor of* " or "*against*") toward a specific target (e.g., entity, *topic*, or *claim*) from text. It has been widely applied to many domains such as veracity checking, market analysis, social security and gov- | English | French | | |-----------|-----------------------------------------------------------------------------------------------------------------------|---------------------------------------------------| | Target | Feminist Movement | légaliser l'avortement (Legalization of Abortion) | | Text | I'm a feminist, I believe in | C'est tellement génial que #lovewins - | | equality for all. #EqualityForAll | étende maintenant l'égalité des droits des femmes (It's so awesome that #lovewins - now extends women's equal rights) | | | Stance | Favor | Favor | Table 1: An example of cross-lingual stance detection. The original French text with the target is presented with English translation. ernment decision-making (Küçük and Can, 2020; AlDayel and Magdy, 2021). Existing studies on stance detection are mainly conducted in monolingual setting, focusing on English (Hardalov et al., 2021; Allaway et al., 2021; Liang et al., 2022). In contrast to the abundant corpora in English, the annotated data resources for stance detection in other languages are usually scarce. 
To address the imbalanced data resources between languages and support stance-related applications in low-resource languages, cross-lingual stance detection is proposed to transfer the knowledge learned from the high-resource (source) language to the low-resource (target) language (Küçük and Can, 2020). Two studies have been conducted for cross-lingual stance detection, by adopting contrastive language adaptation to align the representations across languages (Mohtarami et al., 2019) and pre-training the language model to acquire additional knowledge (Hardalov et al., 2022). In addition to the problem of imbalanced language resources, another important issue in crosslingual stance detection has been ignored by current research. Due to the differences in contextual information, language expressions and socio-cultural backgrounds, the occurrences and distributions of the concerned targets may vary considerably across languages. For example, even in the same domain such as "presidential election", the targets in the English corpus are distinct from those in the French one. Since the discrepancy in target distributions pervasively exists between languages, the targets in the source and target language datasets cannot precisely align in cross-lingual stance detection. This brings about the target inconsistency issue, that is, the difficulty of knowledge transfer across languages caused by the misalignment, which inevitably leads to the performance decrease for cross-lingual stance detection. To address the target inconsistency issue, semantic associations between different targets can be utilized for cross-lingual stance detection. Table 1 gives an example of a target relation. The target "*Feminist Movement*" in English and the target "*Legalization of Abortion*" in French are highly correlated, since both are highly associated with women's rights and they mention quite similar topics such as equality. Target relations reflect the semantic associations between targets on the shared background information or topics, which prevalently exist in textual expressions within and across languages. In this paper, we model the target-level associations and propose a fine-grained Target-oriented Relation Alignment (TaRA) method for crosslingual stance detection. Our method considers both target-level associations and language-level alignments. In addition, to guarantee the crosslanguage performance on stance detection, the in-language target relation learning and relation contrastive alignments should be maintained first. Specifically, it first learns the relations between different targets via target relation graph within each language (i.e., in-language), and constructs the cross-lingual relation graph to compensate for target inconsistency. We then devise the in-language and cross-language relation alignment strategies to align the samples with highly correlated targets based on the relation graph, so as to enable knowledge transfer between semantically correlated targets across languages. The contributions of our work are as follows: - We identify the target inconsistency issue in cross-lingual stance detection for the first time, and propose a computational method TaRA to tackle this problem via target-level correlation learning and relation alignment across languages. 
- Our method learns the associations between targets via target relation graphs within and across languages, and designs relation align- ment strategies that enable in-language knowledge enhancement and cross-lingual knowledge transfer among semantically correlated targets. - We conduct experiments on the representative multilingual stance datasets with variant settings, and the results demonstrate the effectiveness of our method compared to the competitive methods. ## 2 Related Work Stance detection has been well studied on English datasets (Mohammad et al., 2016; Sobhani et al., 2017; Conforti et al., 2020; Allaway and Mckeown, 2020; Glandt et al., 2021). Previous methods for stance detection mainly focus on monolingual setting to learn a supervised classification model with labeled data. The mainstream research centers on stance detection for pre-defined targets (Du et al., 2017; Zhou et al., 2017; Wei et al., 2018; Sun et al., 2018; Li and Caragea, 2019), cross-target stance detection (Xu et al., 2018; Wei and Mao, 2019; Zhang et al., 2020)and few/zero-shot stance detection (Allaway et al., 2021; Liang et al., 2022). Compared to the abundant resources in English, there are much fewer data resources in other languages (Xu et al., 2016; Lozhnikov et al., 2018; Baly et al., 2018; Khouja, 2020; Cignarella et al., 2020). To promote stance detection in low-resource languages, some researchers make efforts to construct multilingual stance datasets (Taulé et al., 2017; Vamvas and Sennrich, 2020; Lai et al., 2020), while other research develops the methods for cross-lingual stance detection (Mohtarami et al., 2019; Hardalov et al., 2022). Hardalov et al. (2022) conducts a comprehensive empirical study on pretraining the language model with additional corpora in the source language, to acquire and transfer the knowledge to the target language through prompt-tuning. Mohtarami et al. (2019) proposes a contrastive language adaptation method to align the representations across languages, which encourages samples with the same label in different languages to be closer in the embedding space. However, their method only considers the contrastive adaptation at the language level, ignoring the finegrained modeling of target relations that is essential to the compensation for target inconsistency and can facilitate cross-lingual stance detection in general. Another drawback of the previous method (Mohtarami et al., 2019) is that it only considers cross-language contrastive alignment and ignores the in-language target relation learning and contrastive alignments. Therefore, in our work, we consider both target-level modeling and languagelevel alignments, and develop our computational method with in-language and cross-language solutions to tackle the target inconsistency issue for cross-lingual stance detection. ## 3 Proposed Method Figure 1 illustrates the overall structure of our proposed method TaRA. We first encode the input target and text with the *Encoder Module* and get the textual representations. Then, we construct the Target Relation Graph to learn both in-language and cross-language associations between targets and get target representations with aggregated information. After that, we concatenate the textual representations and target representations for classification. Meanwhile, we align the representations with related targets within and across languages using the *Target Relation Alignment Strategies*. 
## 3.1 Problem Statement We denote the src language data as Dsrc = {(t s i , cs i ), ys i} Ns i=1, where t s i is the i-th target, c s i is the i-th text, and y s i is the stance label of text c s i towards target t s i , Ns is the number of samples in Dsrc. We also denote the target set of Dsrc as Ts = {T s i} ns i=1, where ns is the number of the targets in Dsrc. Similarly, we denote the tgt1language data as Dtgt = {(t t i , ct i ), yt i} Nt i=1 with the target set Tt = {T t i} nt i=1. Typically, Nt ≪ Ns in cross-lingual stance detection. We use both Dsrc and Dtgt to train the model and predict the stance label of each sample in the tgt language test set. ## 3.2 Encoder Module To map words in different languages into the same embedding space, we use a language model mBERT (Devlin et al., 2019) as the encoder module, which is pre-trained on a large-scale multilingual corpus. Specifically, given a pair of target t and text c, we encode them with the encoder module and obtain a textual representation h ∈ R d: h = mBERT([CLS]t[SEP]c[SEP])[CLS](1) where t and c are sequences of words in the target and text respectively. 1In this paper, we abbreviate target (language) as "tgt" and source (language) as "src" to avoid any confusion. ## 3.3 In-Language Target Relation Graphs To reduce the impact of the language gap on relation learning across languages, we first learn the target relations within the language to provide preliminary knowledge for cross-lingual target relation modeling. Specifically, we construct src and tgt target relation graphs G∗ = ⟨V∗, A∗⟩, ∗ ∈ {*s, t*}, where V∗represents the node features and A∗is the adjacent matrix. Graph Construction Each target is treated as a node in the graph, and the correlations of nodes reflect target relations. Intuitively, the relationship between targets is characterized by the relationship between their corresponding text sets. Hence, for each target T∗ i , we also use mBERT to derive all the textual representations with target T∗ i and calculate the mean vector as the feature v∗ i ∈ R d of the i-th node: $$\mathbf{v}_{i}^{*}=\text{Average}\left([\mathbf{h}_{i,j}^{*}]_{j=1}^{N_{i}^{*}}\right)\tag{2}$$ $$\mathbf{h}_{i,j}^{*}=\text{mBERT}([CLS]T_{i}^{*}[SEP]c_{j}^{*}[SEP])_{[CLS]}\tag{3}$$ where h∗ i,j ∈ R dis the j-th textual representation with target T∗ iand N∗ i is the number of samples with target T∗ i . After obtaining node features V∗ = {v∗ i} n∗ i=1 for G∗, we construct the adjacent matrix A∗ ∈ {0, 1} n∗×n∗. We use the semantic similarities between targets as the start point, which can be viewed as an approximation of semantic associations of targets and used as a basis for subsequent target relation learning. Specifically, we calculate the cosine similarity score *score*∗ i,j between v∗ i and v∗ j and filtering them with threshold θ0: $$score^{*}_{i,j}=f(\mathbf{v}^{*}_{i},\mathbf{v}^{*}_{j})=\frac{\mathbf{v}^{*}_{i}\cdot\mathbf{v}^{*}_{j}}{||\mathbf{v}^{*}_{i}||\mathbf{v}^{*}_{j}||}\tag{4}$$ $$\mathcal{A}^{*}_{i,j}=\left\{\begin{array}{ll}1&\mbox{if}\;\;score^{*}_{i,j}>\theta_{0}\\ 0&\mbox{otherwise}\end{array}\right.\tag{5}$$ where $f(\cdot)$ is the cosine similarity function. Target Relation Calculation To dynamically model the associations between targets, we adopt Graph Attention Network (GAT) (Velickovi ˇ c et al. ´ , 2018) to learn the weights between nodes and obtain high-level target representations with aggregated information. 
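Before this GAT step, the graph construction of Eqs. (2)-(5) amounts to a few lines of tensor code. The sketch below is an illustration under assumptions (Hugging Face's bert-base-multilingual-cased as the mBERT encoder, illustrative function names, and at least one sample per target), not the authors' implementation; the resulting V* and A* are exactly what is fed into the GAT described next.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
mbert = AutoModel.from_pretrained("bert-base-multilingual-cased")

@torch.no_grad()
def encode(target, text):
    """Eq. (1): [CLS] representation of '[CLS] target [SEP] text [SEP]'."""
    enc = tok(target, text, return_tensors="pt", truncation=True)
    return mbert(**enc).last_hidden_state[:, 0]            # shape (1, 768)

def build_graph(samples, targets, theta0=0.4):
    """Eqs. (2)-(5): mean [CLS] vector per target as node features, and a
    cosine-similarity adjacency matrix thresholded at theta0 (0.4 in the paper).
    samples: list of (target, text) pairs; targets: list of target strings,
    each assumed to have at least one sample."""
    V = torch.cat([
        torch.cat([encode(t, c) for (t, c) in samples if t == T]).mean(0, keepdim=True)
        for T in targets
    ])                                                      # (n_targets, 768)
    V_norm = torch.nn.functional.normalize(V, dim=-1)
    sim = V_norm @ V_norm.T                                 # pairwise cosine similarities, Eq. (4)
    A = (sim > theta0).float()                              # Eq. (5)
    return V, A
```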
Specifically, we feed the node features V∗and the adjacent matrix A∗into GAT, and derive the target representations U∗ = {u∗ i} n∗ i=1 (u∗ i ∈ R d) and the attention weight matrix W∗ ∈ R n∗×n∗. ![3_image_0.png](3_image_0.png) We adopt Top K to convert the weight matrix W∗learned from G∗into the target relation matrix R∗ ∈ {0, 1} n∗×n∗, ∗ ∈ {*s, t*}. Specifically, for the i-th target, we treat the targets with the first K highest weights as its *related* targets: $$\mathrm{Index}^{*}(i)=\mathrm{Top~K}({\mathcal{W}}^{*}[i,:],k_{*})$$ $\mathcal{R}_{i,j}^{*}=\left\{\begin{array}{ll}1&\mbox{if$j\in\mbox{Index}^{*}(i)$and$i\in\mbox{Index}^{*}(j)$}\\ 0&\mbox{otherwise}\end{array}\right.$ (7) where Index∗(i) is the set of selected indices of targets with the Top K operation and k∗ is a hyperparameter denoting the value of K in the corresponding language. To utilize the high-level aggregated target information, we concatenate the learned target representation u∗ i and the textual representations with target T∗ i to obtain the target-enhanced representations within the language for in-language relation alignment: i(8) where z∗ i,j ∈ R 2dis the target-enhanced representation of the j-th sample with target T∗ $${\boldsymbol{z}}_{i,j}^{*}={\boldsymbol{h}}_{i,j}^{*}\oplus{\boldsymbol{u}}_{i}^{*}$$ ## Iand ⊕ Denotes The Concatenation Operation. 3.4 Cross-Lingual Target Relation Graph To explore the relationships between targets across languages, we further construct the cross-lingual target relation graph G = ⟨V, A⟩ with all targets from the two languages T = {Tk} n k=1, where n is the total number of targets in two languages. The learned in-language target associations and representations are utilized as the start point, for the purpose of reducing the impact of the language gap and providing reliable prior target information. Graph Construction We calculate the node features V = {vk} n k=1 with target representations U s and U t. Especially, for the targets shared across languages, we initialize them with the mean vectors of target representations in the two languages: $$\mathbf{v}_{k}={\left\{\begin{array}{l l}{\mathbf{u}_{k_{s}}^{s}}&{{\mathrm{if}}\;T_{k}\in\mathbb{G}_{\mathcal{T}}\mathcal{T}_{t}}\\ {{\frac{1}{2}}\left(\mathbf{u}_{k_{s}}^{s}+\mathbf{u}_{k_{t}}^{t}\right)}&{{\mathrm{if}}\;T_{k}\in\mathcal{T}_{s}\cap\mathcal{T}_{t}}\\ {\mathbf{u}_{k_{t}}^{t}}&{{\mathrm{if}}\;T_{k}\in\mathbb{G}_{\mathcal{T}}\mathcal{T}_{s}}\end{array}\right.}\quad(9)$$ where ks and kt are the corresponding indices of target Tk in Ts and Tt respectively, and ∁T Ts denotes the complementary set of Ts (i.e., the set of targets only in Tt), and ∁T Tt denotes the complementary set of Tt. Then, we calculate the adjacency matrix A ∈ {0, 1} n×n with target relation matrices Rsand Rt. To compensate for the target inconsistency between languages, we also establish the connections between cross-language targets, forcing the model to pay attention to the cross-language target relations. 
Specifically, for those targets only in the tgt language, we connect them with each remaining target by setting Ai,j = Aj,i = 1: $${\mathcal{A}}_{i,j}=$$ $$\left\{\begin{array}{ll}\mathcal{R}_{i_{s},j_{s}}^{s}&\mbox{if$T_{i},T_{j}\in T_{s}$}\\ 1&\mbox{if$T_{i}\in\mathbb{C}_{\mathcal{T}}T_{s}$and$T_{j}\in T_{s}$}\\ 1&\mbox{if$T_{i}\in T_{s}$and$T_{j}\in\mathbb{C}_{\mathcal{T}}T_{s}$}\\ \mathcal{R}_{i_{t},j_{t}}^{t}&\mbox{if$T_{i},T_{j}\in\mathbb{C}_{\mathcal{T}}T_{s}$}\end{array}\right.\tag{10}$$ where is and js are the corresponding indices of targets Ti and Tj in Ts, it and jt are the corresponding indices of targets Ti and Tj in Tt. Cross-Lingual Target Relation Calculation We also adopt GAT to learn the cross-lingual target representations U = {uk} n k=1 (uk ∈ R d) and attention weight matrix W ∈ R n×n. The learned weight matrix W is transformed into the cross-lingual relation matrix *R ∈ {*0, 1} n×n via the Top K operation to acquire the target relations across languages: $$\begin{array}{c}\mbox{Index}(i)=\mbox{Top K}({\cal W}[i,:],k)\\ \mbox{${\cal R}_{i,j}=\left\{\begin{array}{ll}1&\mbox{if$j\in\mbox{Index}(i)$and$i\in\mbox{Index}(j)$}\\ 0&\mbox{otherwise}\end{array}\right.$}\end{array}\tag{1}$$ (12) where Index(i) is the set of selected indices with the Top K operation and k is a hyperparameter denoting the value of K in Top K operation. Similarly, we concatenate the learned crosslingual target representation uk and the textual representations with target Tk to get the targetenhanced representations for cross-lingual relation alignment and classification: $$z_{k,j}=\hbar_{k,j}\oplus\mathbf{u}_{k}$$ where zk,j ∈ R 2dis the target-enhanced representation of the j-th sample with target Tk. ## 3.5 Relation Alignment Strategies We devise target relation alignment strategies to align representations between highly correlated targets so that semantic associations like shared background knowledge can be transferred across languages. Inspired by Mohtarami et al. (2019) and Lin et al. (2022), we further take the target relations into consideration based on contrastive learning and devise the relation alignment strategies within the language and across languages. ## 3.5.1 In-Language Relation Alignment We first devise the in-language alignment strategies for the src and tgt languages to realize the in-language target relation alignment and optimize relationships of targets within the language. For each anchor zˆ∗ i in the mini-batch, we select positive samples within the language which are targetrelated to zˆ∗ i and have the same stance label with it, and treat other samples within the language as negative samples. We use the following loss function to pull the positive pairs closer and push the negative pairs away: L∗ = − 1 b∗ X b∗ b′ X∗ i=1 1 b′∗ j=1 Ψ ∗(i, j) I(y ∗ i = y ∗ j)· (14) log I(i ̸= j) exp(f(zˆ∗ i , zˆ∗ j )/τ ) Pb∗ k=1 I(i ̸= k) exp(f(zˆ∗ i , zˆ∗ k )/τ ) Ψ ∗(i, j) = (1 if R∗ t∗ i ,t∗ j = 1 or t∗ i = t∗ j 0 otherwise (15) $$(11)$$ where zˆ∗ i is the i-th target-enhanced representation within the language in the mini-batch, Ψ∗(*i, j*) calculates the related targets through the target relation matrix R∗, I(·) is an indicator function, f(·) denotes the cosine similarity function, τ is the parameter of temperature, and b∗ and b′∗ are the numbers of samples and positive samples within the language. 
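Read compactly, Eqs. (14)-(15) define a supervised contrastive loss whose positive pairs are restricted to samples that share the stance label and have the same or related targets according to R*. The sketch below is an illustrative batch-level implementation of this reading for a single language, with illustrative function and argument names rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def relation_contrastive_loss(z, labels, target_ids, relation, tau=0.3):
    """Relation-aware supervised contrastive loss in the spirit of Eqs. (14)-(15).
    z: (b, 2d) target-enhanced representations; labels: (b,) stance labels;
    target_ids: (b,) target indices; relation: (n_targets, n_targets) 0/1 matrix R*."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.T / tau                                         # cosine similarity / temperature
    not_self = ~torch.eye(len(z), dtype=torch.bool)
    # Denominator of Eq. (14): softmax over all other samples in the batch.
    logits = sim.masked_fill(~not_self, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Positives: same stance label AND (same target OR related targets per R*).
    same_label = labels.unsqueeze(0) == labels.unsqueeze(1)
    related = relation[target_ids][:, target_ids].bool() | \
        (target_ids.unsqueeze(0) == target_ids.unsqueeze(1))
    pos = same_label & related & not_self
    # Average positive log-probabilities per anchor, over anchors with positives.
    pos_counts = pos.sum(1).clamp(min=1)
    per_anchor = -(log_prob.masked_fill(~pos, 0.0).sum(1) / pos_counts)
    return per_anchor[pos.sum(1) > 0].mean()
```

The cross-lingual loss in the next subsection follows the same pattern, with the additional requirement of Eqs. (16)-(17) that positive pairs come from different languages.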
$$(13)^{\frac{1}{2}}$$ ## 3.5.2 Cross-Lingual Relation Alignment To align representations across languages, we design a cross-lingual relation alignment strategy, which transfers the knowledge between semantically correlated targets across languages. The crosslingual relation alignment enables us to make up for the lack of the tgt language data using src language data with the most relevant targets. For each anchor zˆiin the mini-batch, we select positive samples that are target-related, with the same stance label and from different languages, and take others as negative samples: $$\begin{split}\mathcal{L}_{cross}&=-\frac{1}{b}\sum_{i=1}^{b}\frac{1}{b^{\prime}}\sum_{j=1}^{b^{\prime}}\Psi(i,j)\mathbb{I}(y_{i}=y_{j},l_{i}\neq l_{j})\cdot\\ &\log\frac{\mathbb{I}(i\neq j)\,\exp(f(\hat{\mathbf{z}}_{i},\hat{\mathbf{z}}_{j})/\tau)}{\sum_{k=1}^{b}\mathbb{I}(i\neq k)\,\exp(f(\hat{\mathbf{z}}_{i},\hat{\mathbf{z}}_{k})/\tau)}\end{split}\tag{16}$$ $$\Psi(i,j)=\left\{\begin{array}{l l}{{1}}&{{\mathrm{if}\;{\mathcal{R}}_{t_{i},t_{j}}=1\;\mathrm{or}\;t_{i}=t_{j}}}\\ {{0}}&{{\mathrm{otherwise}}}\end{array}\right.\tag{17}$$ ![5_image_0.png](5_image_0.png) where zˆiis the i-th target-enhanced representation in the mini-batch, Ψ(*i, j*) calculates the related targets through R, liis the language of zˆi, and b and b′are the numbers of samples and positive samples in the mini-batch. ## 3.6 Stance Classifier The cross-lingual target-enhanced representations are fed into a two-layer feed-forward network with a softmax function for classification. We adopt a cross-entropy loss Lce to optimize the classifier: $${\hat{y}}_{i}=\operatorname{Softmax}(\operatorname{FFN}({\hat{z}}_{i}))$$ yˆi = Softmax(FFN(zˆi)) (18) $$\mathcal{L}_{ce}=-\frac{1}{b}\sum_{i=1}^{b}\mathbf{y}_{i}^{\top}\log\hat{\mathbf{y}}_{i}\tag{19}$$ where yˆiis the predicted label, yiis the ground truth, and FFN is a two-layer feed-forward network with the activation function ReLU(·). ## 3.7 Model Training Algorithm 1 presents the training process of our method. We optimize the whole target-oriented relation alignment method by minimizing the overall loss function L, consisting of the cross-entropy loss Lce and the combined contrastive alignment loss Lcon. Formally, Lcon is defined as follows: $$\mathcal{L}=\alpha\mathcal{L}_{ce}+(1-\alpha)\mathcal{L}_{con}\tag{20}$$ $$\mathcal{L}_{con}=\beta\mathcal{L}_{cross}+(1-\beta)\underbrace{(\gamma\mathcal{L}_{s}+(1-\gamma)\mathcal{L}_{t})}_{\text{in-language alignment}}\tag{21}$$ where α, β, γ are trade-off hyperparameters for balancing different losses. ## 4 Experiments 4.1 Datasets X-Stance (Vamvas and Sennrich, 2020) is a multilingual stance dataset in German, French and Italian (with no training data), in which German is used as the src language and French as the tgt language. We focus on two political domains "*Foreign Policy*" and "*Immigration*" in X-Stance, with 31 targets in total. Based on them, we construct two datasets with different target settings for our experiments. X-Stance-all contains 5926 and 2582 texts in the src and tgt languages with the complete overlap of all the 31 targets. **X-Stance-partial** contains 3406 and 1806 texts in the src and tgt languages with partial overlap of targets. More details on datasets and targets are provided in Appendix A. Multilingual Political Dataset (Lai et al., 2020) is comprised of 4 datasets, including two election datasets and two other datasets that contain only one target. 
We use the two election datasets for our experiments, and English and French are as the src and tgt languages respectively. **Electionnone** contains 1691 and 1116 texts in the src and tgt languages. The targets in src include Hillary Clinton and *Donald Trump*, and *Emmanuel Macron* and *Marine Le Pen* are the targets for tgt. ## 4.2 Experimental Settings In our experiments, we use mBERT to extract 768dimensional textual representations. The threshold θ0 for the adjacent matrix is set to 0.4. For Top K in target relation calculation, k, ks, kt are set to 10, 10, 4 for X-Stance-all; 10, 8, 5 for X-Stancepartial; 2, 1, 1 for Election-none. τ is set to 0.3 in the target relation alignment loss. For the tradeoff hyperparameters, α, β and γ are set to 0.7, 0.6 and 0.7, respectively. All parameters are optimized by Adam (Kingma and Ba, 2015) with a learning rate of 2e-5 and a batch size of 64 for X-Stance-all and 32 for X-Stance-partial and Election-none. We train the model for 15 epochs with early stopping. | Methods | X-Stance-all | de→fr | X-Stance-partial | de→fr | Election-none | en→fr | |-----------|----------------|-------------|--------------------|-------------|-----------------|-------------| | Acc (%) | F1 (%) | Acc (%) | F1 (%) | Acc (%) | F1 (%) | | | BiCond | 69.6 ± 2.5 | 68.9 ± 2.4 | 68.9 ± 1.1 | 68.5 ± 1.1 | 70.9 ± 2.4 | 56.7 ± 3.7 | | TAN | 67.1 ± 1.9 | 66.8 ± 1.6 | 65.2 ± 3.6 | 64.8 ± 3.4 | 70.2 ± 5.6 | 52.9 ± 2.8 | | TGMN | 73.5 ± 0.9 | 73.0 ± 1.3 | 69.0 ± 2.4 | 68.6 ± 2.1 | 69.3 ± 1.8 | 54.7 ± 1.5 | | ADAN | 61.4 ± 2.0 | 61.1 ± 1.9 | 57.0 ± 1.6 | 57.0 ± 1.6 | 55.9 ± 7.5 | 47.2 ± 3.3 | | CLA | 77.1 ± 1.8 | 76.9 ± 1.7 | 76.2 ± 1.3 | 76.1 ± 1.3 | 74.6 ± 1.1 | 57.8 ± 2.4 | | ACLR | 77.2 ± 2.7 | 77.0 ± 2.8 | 76.2 ± 1.5 | 76.0 ± 1.5 | 75.5 ± 2.0 | 58.1 ± 3.9 | | mBERT-FT | 77.6 ± 1.6 | 77.5 ± 1.6 | 75.6 ± 2.0 | 75.5 ± 1.9 | 74.0 ± 3.5 | 57.8 ± 4.9 | | TaRA | 79.3 ± 1.4† | 79.0 ± 1.4† | 78.1 ± 1.2† | 78.0 ± 1.1† | 75.8 ± 2.9 | 62.3 ± 3.0† | The whole method is implemented with PyTorch on NVIDIA GeForce RTX 3090. The mBERT is Multilingual Cased BERT-Base model, which is 12layer, 768-hidden, and 12-head, with about 110M parameters, and is implemented in the Transformers framework. For the three datasets, the running time is around 1 GPU hour. ## 4.3 Comparison Methods We select the following methods for cross-lingual tasks as the comparative methods. (1) **ADAN** (Chen et al., 2018) is an adversarial based method for cross-lingual sentiment classification; (2) CLA (Mohtarami et al., 2019) aligns the representations in the two languages with contrastive language adaptation; (3) **ACLR** (Lin et al., 2022) improves the alignment method of CLA by devising two different alignments for the src and tgt languages respectively for cross-lingual rumor detection; (4) mBERT-FT (Devlin et al., 2019) fine-tunes the language model mBERT with the training data. In addition, we also choose the following monolingual methods for stance detection and adapt them to the cross-lingual stance detection task by replacing the original word embeddings with the hidden vectors of mBERT. (1) **BiCond** (Augenstein et al., 2016) incorporates target representations into text representations with bidirectional conditional LSTMs; (2) TAN (Du et al., 2017) learns the targetspecific representations with attention mechanism; (3) **TGMN** (Wei et al., 2018) utilizes a multi-hop memory network to obtain the implicit clues for stance detection. 
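For reference before the results, the overall objective from Section 3.7 together with the hyperparameter values listed in Section 4.2 can be summarized in a few lines. This is an illustrative summary, not the released training script.

```python
# Hyperparameters reported in Section 4.2 (alpha, beta, gamma, tau, theta0).
ALPHA, BETA, GAMMA, TAU, THETA0 = 0.7, 0.6, 0.7, 0.3, 0.4
# Optimizer: Adam, lr 2e-5; batch size 64 (X-Stance-all) or 32 (others);
# 15 training epochs with early stopping.

def overall_loss(l_ce, l_src, l_tgt, l_cross, alpha=ALPHA, beta=BETA, gamma=GAMMA):
    """Eqs. (20)-(21): L = alpha * L_ce + (1 - alpha) * L_con, where
    L_con = beta * L_cross + (1 - beta) * (gamma * L_s + (1 - gamma) * L_t)."""
    l_in = gamma * l_src + (1 - gamma) * l_tgt      # in-language alignment
    l_con = beta * l_cross + (1 - beta) * l_in      # combined contrastive alignment
    return alpha * l_ce + (1 - alpha) * l_con
```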
## 4.4 Main Results We use accuracy and the average F1 score of "*Favor*" and "*Against*" as the evaluation metric. Table 2 gives the experimental results of the comparative methods and our proposed TaRA on the three datasets. It can be seen from the table that our method outperforms all the baseline methods on the three datasets. In general, cross-lingual methods perform better than monolingual methods, indicating the importance of knowledge transfer across languages for cross-lingual tasks. As for the crosslingual methods, we can see that the performance of fine-tuning mBERT is relatively good on the three datasets, which benefits from the superiority of the pre-trained language model. ACLR and CLA perform better among the cross-lingual methods, demonstrating the advantage of cross-lingual alignment with contrastive learning. More importantly, the results in Table 2 reveal that with the decrease of the number of topics shared between languages, the improvements of our method compared to the suboptimal methods become greater. Specifically, TaRA achieves 1.5%, **1.9%** and **4.2%** performance gains on F1 scores compared to the suboptimal results on X-Stance-all, X-Stance-partial and Election-none, respectively. The experimental results verify the effectiveness of our relation alignment method for dealing with target inconsistency. ## 4.5 Ablation Study Table 3 gives the ablation results of all the variants of our proposed TaRA on the three datasets. It can be seen from the table that removing the inlanguage target relation graph Gs and Gt decreases the performance, showing the necessity of incorporating in-language target relations to provide the preliminary information for cross-lingual relation graph. It can also be seen from the table that removing the cross-lingual relation graph G causes larger drops in performance, indicating that cross- | Variants | X-Stance-all | X-Stance-partial | Election-none | | | | | | | | | | |-------------------|----------------|--------------------|-----------------|------|------|------|------|------|------|------|------|------| | Acc | F1 | ∆Acc | ∆F1 | Acc | F1 | ∆Acc | ∆F1 | Acc | F1 | ∆Acc | ∆F1 | | | TaRA (Ours) | 79.3 | 79.0 | - | - | 78.1 | 78.0 | - | - | 75.8 | 62.3 | - | - | | t | 78.1 | 78.0 | -1.2 | -1.1 | 76.9 | 76.8 | -1.2 | -1.2 | 75.6 | 61.0 | -0.2 | -1.3 | | - G | 77.4 | 77.4 | -1.9 | -1.7 | 76.5 | 76.4 | -1.6 | -1.6 | 74.2 | 59.8 | -1.6 | -2.5 | | t , G | 77.1 | 77.0 | -2.2 | -2.0 | 76.5 | 76.4 | -1.6 | -1.6 | 74.8 | 59.7 | -1.0 | -2.6 | | - Ls, Lt | 78.3 | 78.1 | -1.0 | -0.9 | 76.3 | 76.1 | -1.8 | -1.9 | 75.6 | 59.1 | -0.2 | -3.2 | | - Lcross | 77.5 | 77.3 | -1.8 | -1.7 | 75.8 | 75.6 | -2.3 | -2.4 | 74.0 | 58.8 | -1.8 | -3.5 | | - Ls, Lt, Lcross | 76.6 | 76.4 | -2.8 | -2.6 | 75.3 | 75.1 | -2.8 | -2.9 | 75.6 | 57.4 | -0.2 | -4.9 | | - Target Relation | 77.7 | 77.6 | -1.6 | -1.4 | 75.3 | 75.1 | -2.8 | -2.9 | 73.6 | 58.4 | -2.2 | -3.9 | | - Language | 77.5 | 77.4 | -1.8 | -1.6 | 75.8 | 75.7 | -2.3 | -2.3 | 75.0 | 59.7 | -0.8 | -2.6 | | - Both | 76.6 | 76.4 | -2.7 | -2.6 | 74.7 | 74.6 | -3.4 | -3.4 | 74.2 | 57.2 | -1.6 | -5.1 | ![7_image_0.png](7_image_0.png) lingual relation graph is vital for addressing the target inconsistency issue. Regarding learning objectives, it can be seen from the table that the F1 scores decline without target relation alignment within the language Ls, Lt or across languages L*cross*, demonstrating the effectiveness of our proposed relation alignment strategies and the validity of model optimization. 
Furthermore, the table also shows the results of the variants of relation alignment strategies. Excluding the "target relation" has a greater influence in target inconsistency cases (X-Stancepartial, Election-none) than excluding the "language", whereas removing the "language" has a greater impact in target consistency case (X-Stanceall) than removing the "target relation". This indicates that the alignment between semantically correlated targets across languages is more effective than the sole alignment of language for target inconsistency. ## 4.6 Analysis Of Hyperparameters Impact Of Top K In Target Relation Graph The value of k in the Top K operation determines the number of related targets. We conduct experiments on X-Stance-partial. As shown in Figure 2, the performance is low in the beginning. When k (or ks, kt) is small, the related targets are rather few, re- ![7_image_1.png](7_image_1.png) sulting in that most samples in the mini-batch have no target-related positive samples. As the value of k (or ks, kt) increases, the performance gradually improves until it reaches a peak value. After that, increasing the value of k (or ks, kt), leads to too many positive samples with low correlations, resulting in a decrease in contrastive ability. ## Analysis Of Trade-Off Hyperparameters We use three hyperparameters α, β and γ to balance different losses in our method. We set α = 0.7 empirically. As shown in Figure 3, we conduct experiments to analyze the influence of different values of β and γ on X-Stance-partial. It can be seen that our method performs best when β and γ are around 0.6 ∼ 0.8. When β is greater than 0.5, the model gains higher performance because it pays more attention to the cross-lingual relation alignment, so that the shared knowledge between correlated targets can be transferred across languages. However, when β is too large, the drop in performance indicates that in-language target relations are the basis for cross-lingual relation learning. ![8_image_0.png](8_image_0.png) ## 4.7 Visualization Of Representations We compare the representations learned by our method TaRA and the baseline methods mBERTFT and CLA on the test set of Election-none. We use t-SNE to visualize them on a two-dimensional plane, as shown in Figure 4. It can be seen from the left column that the representations of mBERT and CLA overlap in *Favor* and *Against* samples. Our method clearly separates the three classes of samples, with higher in-class compactness. Comparing the predicted results visualized in the left column and the ground truth in the right column, it can be seen that our method gets fairly consistent results. This shows that our method can better handle target inconsistency with fine-grained alignment strategies at the both target and language levels. ## 5 Conclusion We first identify the issue of target inconsistency in cross-lingual stance detection and propose a target-oriented relation alignment method TaRA, which considers relation associations and language alignments at both target and language levels. Our method explores the in-language and crosslanguage associations of targets via target relation graphs and aligns samples between highly correlated targets within the language and across languages through the fine-grained relation alignment strategies. Experimental results demonstrate the effectiveness of our method for cross-lingual stance detection. 
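As a reminder of what k controls in the first part of this analysis, the mutual Top-K selection of Eqs. (6)-(7) (and analogously Eqs. (11)-(12)) can be sketched as follows; this is an illustration with assumed function names, not the authors' implementation.

```python
import torch

def mutual_topk_relations(W, k):
    """Eqs. (6)-(7): targets i and j are 'related' iff j is among i's top-k
    attention weights AND i is among j's top-k (mutual selection).
    W: (n, n) attention weight matrix from the GAT; returns a 0/1 relation matrix."""
    topk_idx = W.topk(k, dim=1).indices                    # (n, k) indices per row
    in_topk = torch.zeros_like(W, dtype=torch.bool)
    in_topk.scatter_(1, topk_idx, True)                    # in_topk[i, j] = j in Top-K(i)
    return (in_topk & in_topk.T).float()                   # mutual condition

# Toy usage with k = 2 on a random 5x5 weight matrix.
W = torch.rand(5, 5).softmax(dim=1)
print(mutual_topk_relations(W, k=2))
```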
## 6 Limitations For the Top K operation in target relation calculation, we set K's three hyperparameters (i.e., ks, kt and k) to determine the number of the related targets. To explore the influence of the selection of K on model performance, a grid search on these three hyperparameters needs to be conducted to iterate each combination. However, due to the time and resource limits, we explore the impact of one hyperparameter in K by controlling the other two hyperparameters. Based on the empirical findings from this, we then set the value of K so as to achieve an appropriate performance. ## 7 Acknowledgments This work is supported in part by the Ministry of Science and Technology of China under Grants \#2020AAA0108401 and \#2021ZD0111200, and National Natural Science Foundation of China under Grants \#72293575, \#62206287 and \#62206282. ## References Abeer AlDayel and Walid Magdy. 2021. Stance detection on social media: State of the art and trends. *Information Processing & Management*, 58(4):102597. Emily Allaway and Kathleen Mckeown. 2020. Zeroshot stance detection: A dataset and model using generalized topic representations. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 8913–8931. Emily Allaway, Malavika Srikanth, and Kathleen Mckeown. 2021. Adversarial learning for zero-shot stance detection on social media. In *Proceedings of the* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4756–4767. Isabelle Augenstein, Tim Rocktäschel, Andreas Vlachos, and Kalina Bontcheva. 2016. Stance detection with bidirectional conditional encoding. In *Proceedings of the Conference on Empirical Methods in Natural Language Processing*, pages 876–885. Ramy Baly, Mitra Mohtarami, James Glass, Lluís Màrquez, Alessandro Moschitti, and Preslav Nakov. 2018. Integrating stance detection and fact checking in a unified corpus. In *Proceedings of the Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 21–27. Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2018. Adversarial deep averaging networks for cross-lingual sentiment classification. *Transactions of the Association for Computational Linguistics*, 6:557–570. Alessandra Teresa Cignarella, Mirko Lai, Cristina Bosco, Viviana Patti, Rosso Paolo, et al. 2020. Sardistance@ evalita2020: Overview of the task on stance detection in italian tweets. *EVALITA Seventh Evaluation Campaign of Natural Language Processing and* Speech Tools for Italian, pages 1–10. Costanza Conforti, Jakob Berndt, Mohammad Taher Pilehvar, Chryssi Giannitsarou, Flavio Toxvaerd, and Nigel Collier. 2020. Will-they-won't-they: A very large dataset for stance detection on twitter. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 1715–1724. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186. Jiachen Du, Ruifeng Xu, Yulan He, and Lin Gui. 2017. Stance classification with target-specific neural attention networks. In *Proceedings of the International* Joint Conferences on Artificial Intelligence, pages 3988–3994. 
Kyle Glandt, Sarthak Khanal, Yingjie Li, Doina Caragea, and Cornelia Caragea. 2021. Stance detection in covid-19 tweets. In Proceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing, pages 1596–1611. Momchil Hardalov, Arnav Arora, Preslav Nakov, and Isabelle Augenstein. 2021. Cross-domain labeladaptive stance detection. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 9011–9028. Momchil Hardalov, Arnav Arora, Preslav Nakov, and Isabelle Augenstein. 2022. Few-shot cross-lingual stance detection with sentiment-based pre-training. In *Proceedings of the AAAI Conference on Artificial* Intelligence, volume 36, pages 10729–10737. Jude Khouja. 2020. Stance prediction and claim verification: An arabic perspective. In Proceedings of the Third Workshop on Fact Extraction and VERification, pages 8–17. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations. Dilek Küçük and Fazli Can. 2020. Stance detection: A survey. *ACM Computing Surveys*, 53(1):1–37. Mirko Lai, Alessandra Teresa Cignarella, Delia Irazú Hernández Farías, Cristina Bosco, Viviana Patti, and Paolo Rosso. 2020. Multilingual stance detection in social media political debates. Computer Speech & Language, 63:1–27. Yingjie Li and Cornelia Caragea. 2019. Multi-task stance detection with sentiment and stance lexicons. In *Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing*, pages 6299–6305. Bin Liang, Qinglin Zhu, Xiang Li, Min Yang, Lin Gui, Yulan He, and Ruifeng Xu. 2022. Jointcl: A joint contrastive learning framework for zero-shot stance detection. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 81–91. Hongzhan Lin, Jing Ma, Liangliang Chen, Zhiwei Yang, Mingfei Cheng, and Chen Guang. 2022. Detect rumors in microblog posts for low-resource domains via adversarial contrastive learning. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 2543–2556. Nikita Lozhnikov, Leon Derczynski, and Manuel Mazzara. 2018. Stance prediction for russian: data and analysis. In *International Conference in Software Engineering for Defence Applications*, pages 176–186. Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016. Semeval2016 task 6: Detecting stance in tweets. In *Proceedings of the International Workshop on Semantic* Evaluation (SemEval-2016), pages 31–41. Mitra Mohtarami, James Glass, and Preslav Nakov. 2019. Contrastive language adaptation for crosslingual stance detection. In *Proceedings of the Conference on Empirical Methods in Natural Language* Processing and the International Joint Conference on Natural Language Processing, pages 4442–4452. Parinaz Sobhani, Diana Inkpen, and Xiaodan Zhu. 2017. A dataset for multi-target stance detection. In *Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics*, pages 551–557. Qingying Sun, Zhongqing Wang, Qiaoming Zhu, and Guodong Zhou. 2018. Stance detection with hierarchical attention network. In Proceedings of the International Conference on Computational Linguistics, pages 2399–2409. Mariona Taulé, M Antonia Martí, Francisco M Rangel, Paolo Rosso, Cristina Bosco, Viviana Patti, et al. 2017. 
Overview of the task on stance and gender detection in tweets on catalan independence at ibereval 2017. In *Workshop on Evaluation of Human Language Technologies for Iberian Languages*, volume 1881, pages 157–177. Jannis Vamvas and Rico Sennrich. 2020. X-stance: A multilingual multi-target dataset for stance detection. In *Proceedings of the SwissText & KONVENS Joint* Conference, pages 1–9. Petar Velickovi ˇ c, Guillem Cucurull, Arantxa Casanova, ´ Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In International Conference on Learning Representations. Penghui Wei and Wenji Mao. 2019. Modeling transferable topics for cross-target stance detection. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1173–1176. Penghui Wei, Wenji Mao, and Daniel Zeng. 2018. A target-guided neural memory model for stance detection in twitter. In *Proceedings of the International* Joint Conference on Neural Networks, pages 1–8. Chang Xu, Cecile Paris, Surya Nepal, and Ross Sparks. 2018. Cross-target stance classification with selfattention networks. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 778–783. Ruifeng Xu, Yu Zhou, Dongyin Wu, Lin Gui, Jiachen Du, and Yun Xue. 2016. Overview of nlpcc shared task 4: Stance detection in chinese microblogs. In Natural Language Understanding and Intelligent Applications, pages 907–916. Bowen Zhang, Min Yang, Xutao Li, Yunming Ye, Xiaofei Xu, and Kuai Dai. 2020. Enhancing crosstarget stance detection with transferable semanticemotion knowledge. In *Proceedings of the Annual* Meeting of the Association for Computational Linguistics, pages 3188–3197. Yiwei Zhou, Alexandra I Cristea, and Lei Shi. 2017. Connecting targets to tweets: Semantic attentionbased model for target-specific stance detection. In International Conference on Web Information Systems Engineering, pages 18–32. ## A More Details On Datasets We use two political domains in X-Stance (Vamvas and Sennrich, 2020), "*Foreign Policy*" and "*Immigration*", which have a similar target-text ratio to that of the original X-Stance dataset (with 10 domains in total). Table 4 shows the complete list of the 31 targets and their descriptions (translated into English). The X-Stance-all and X-Stance-partial datasets in our experiments are constructed as follows: In **X-Stance-all**, all the 31 targets in Table 4 are adopted in the tgt language, and the targets in the src language are exactly the same as those in the tgt language. In **X-Stance-partial**, we use the first 20 targets in Table 4 in the src language and the last 19 targets in Table 4 in the tgt language, with 8 targets overlapping. | Domain | Target (English) | | |----------|--------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1 | Immigration | Are you in favour of legalizing the status of sans papiers immigrants (i.e. immigrants who have no official paperwork) through a one-off, collective granting of residency permits? | | 2 | Immigration | Would you support foreigners who have lived for at least ten years in Switzerland being given voting and electoral rights at municipal level throughout Switzerland? | | 3 | Immigration | Should the state provide more funding for the integration of foreigners? 
| | 4 | Immigration | Should access to "facilitated naturalization" via the Federation be made more difficult? | | 5 | Immigration | The United Nations High Commissioner for Refugees (UNHCR) is seeking host countries for groups of refugees known as "quota refugees". Should Switzerland accept more of these groups? | | 6 | Immigration | A popular initiative has been launched that wants to regulate immigration and thus limit migrationrelated population growth to 0.2% annually. Do you support this idea? | | 7 | Foreign Policy | Would you support the introduction of the automatic exchange of bank client data between Switzerland and foreign tax authorities? | | 8 | Foreign Policy | Should Switzerland embark on negotiations in the next four years to join the EU? | | 9 | Foreign Policy | Should Switzerland conclude an agricultural free trade agreement with the EU? | | 10 | Immigration | Do you support the existing agreement with the EU on the free movement of peoples? | | 11 | Foreign Policy | Today, the Swiss Army can take part in UN or OSCE peace-keeping missions abroad, armed for self-defence purposes. Do you approve? | | 12 | Foreign Policy | For a number of years, Switzerland has pursued a more active and open foreign policy that is less geared to strict neutrality. Do you welcome this change? | | 13 | Foreign Policy | Should compliance with human rights play a greater role when deciding whether to enter into economic agreements with other countries (e.g. free trade agreements)? | | 14 | Immigration | Would you support that foreigners who have lived for at least ten years in Switzerland being given voting and electoral rights at municipal level throughout Switzerland? | | 15 | Immigration | Are you in favour of legalizing the status of sans papiers immigrants (i.e. immigrants who have no official paperwork) through a one-off, collective granting of residency permits? | | 16 | Immigration | Do you think Switzerland should accept an increased number of refugees directly from crisis regions for which the United Nations High Commissioner for Refugees (UNHCR) needs host countries (what is called quota refugees)? | | 17 | Foreign Policy | Should Switzerland embark on negotiations in the next four years to join the EU? | | 18 | Foreign Policy | Should Switzerland start negotiations with the USA on a free trade agreement? | | 19 | Foreign Policy | Should liability regulations for companies operating from Switzerland be tightened with regard to the compliance with human rights and environmental standards? | | 20 | Foreign Policy | Do you think that Swiss foreign policy should increasingly be oriented to a strict interpretation of neutrality? | | 21 | Foreign Policy | Should Switzerland terminate the Schengen Agreement with the EU and reintroduce increased identity checks directly on the border? | | 22 | Immigration | Should the federal government provide more support for the integration of foreigners? | | 23 | Immigration | Should foreigners who have lived in Switzerland for at least ten years be given the right to vote and be elected at the municipal level? | | 24 | Immigration | Is limiting immigration more important to you than maintaining the bilateral treaties with the EU? | | 25 | Immigration | Should sans-papiers be able to obtain a regularized residence status more easily? | | 26 | Immigration | Are you in favor of further tightening the asylum law? | | 27 | Immigration | Should the requirements for naturalization be increased? 
| | 28 | Foreign Policy | Should Switzerland start membership negotiations with the EU? | | 29 | Foreign Policy | Should Switzerland strive for a free trade agreement with the USA? | | 30 | Foreign Policy | An initiative calls for liability rules for Swiss companies with regard to compliance with human rights and environmental standards abroad to be tightened. Do you support this proposal? | | 31 | Foreign Policy | Are you in favour of Switzerland's candidacy for a seat on the UN Security Council? Table 4: Targets in X-Stance-all. | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 6 Limitations. ✗ A2. Did you discuss any potential risks of your work? We conduct experiments on publicly available datasets. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract; Section 1 Introduction. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. B ✓ **Did you use or create scientific artifacts?** Section 3 Proposed Method; Section 4.1 Datasets. ✓ B1. Did you cite the creators of artifacts you used? Section 3.2 Encoder Module; Section 4.1 Datasets. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 3.2 Encoder Module; Section 4.1 Datasets; Section 4.2 Experimental Settings. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 3.2 Encoder Module; Section 4.1 Datasets; Section 4.2 Experimental Settings; Appendix A More Details on Datasets. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3.2 Encoder Module; Section 4.1 Datasets; Section 4.2 Experimental Settings; Appendix A More Details on Datasets. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 4.1 Datasets; Appendix A More Details on Datasets. ## C ✓ **Did You Run Computational Experiments?** Section 4 Experiments. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4.2 Experimental Settings. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4.2 Experimental Settings; Section 4.6 Analysis of Hyperparameters; Section 6 Limitations. ✓ C3. 
Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4.4 Main Results. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4.2 Experimental Settings. ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
soleimani-etal-2023-nonfacts
NonFactS: NonFactual Summary Generation for Factuality Evaluation in Document Summarization
https://aclanthology.org/2023.findings-acl.400
Pre-trained abstractive summarization models can generate fluent summaries and achieve high ROUGE scores. Previous research has found that these models often generate summaries that are inconsistent with their context document and contain nonfactual information. To evaluate factuality in document summarization, a document-level Natural Language Inference (NLI) classifier can be used. However, training such a classifier requires large-scale high-quality factual and nonfactual samples. To that end, we introduce NonFactS, a data generation model, to synthesize nonfactual summaries given a context document and a human-annotated (reference) factual summary. Compared to previous methods, our nonfactual samples are more abstractive and more similar to their corresponding factual samples, resulting in state-of-the-art performance on two factuality evaluation benchmarks, FALSESUM and SUMMAC. Our experiments demonstrate that even without human-annotated summaries, NonFactS can use random sentences to generate nonfactual summaries and a classifier trained on these samples generalizes to out-of-domain documents.
# Nonfacts: Nonfactual Summary Generation For Factuality Evaluation In Document Summarization Amir Soleimani Informatics Institute University of Amsterdam Amsterdam, The Netherlands [email protected] Christof Monz Informatics Institute University of Amsterdam Amsterdam, The Netherlands [email protected] Marcel Worring Informatics Institute University of Amsterdam Amsterdam, The Netherlands [email protected] ## Abstract Pre-trained abstractive summarization models can generate fluent summaries and achieve high ROUGE scores. Previous research has found that these models often generate summaries that are inconsistent with their context document and contain nonfactual information. To evaluate factuality in document summarization, a document-level Natural Language Inference (NLI) classifier can be used. However, training such a classifier requires largescale high-quality factual and nonfactual samples. To that end, we introduce NonFactS, a data generation model to synthesize nonfactual summaries given a context document and a human-annotated (reference) factual summary. Compared to previous methods, our nonfactual samples are more abstractive and more similar to their corresponding factual samples, resulting in state-of-the-art performance on two factuality evaluation benchmarks, FALSESUM and SUMMAC. Our experiments demonstrate that even without human-annotated summaries, NonFactS can use random sentences to generate nonfactual summaries and a classifier trained on these samples generalizes to out-ofdomain documents.1 ## 1 Introduction Over the last few years, there have been remarkable improvements in document summarization due to advances in pre-trained language models such as BART and PEGASUS (Lewis et al., 2020; Zhang et al., 2020a). However, these improvements are mainly measured with ROUGE scores, which assess the quality of a summary using n-gram overlap with references. Recent studies show that state-of-the-art models generate up to about 30% nonfactual summaries (Cao et al., 2018; Krysci ´ nski ´ et al., 2019; Pagnoni et al., 2021), i.e., summaries that are not entailed by or factually inconsistent with their source document. This demands an 1Codes and Models: github.com/asoleimanib/NonFactS ![0_image_0.png](0_image_0.png) Figure 1: Overview of the proposed pipeline. Left: the NonFactS generator model is trained to generate a nonfactual summary given a reference factual summary and its corresponding context document. Right: Reference factual summaries and the generated nonfactual summaries are used to train a binary classifier to evaluate factuality in document summarization. automatic evaluation metric for factuality in document summarization. Factuality evaluation in document summarization is a notoriously difficult task which is closely related to the Natural Language Inference (NLI) task. There have been different attempts to address this problem by revisiting NLI models (Utama et al., 2022; Laban et al., 2021). However, existing NLI datasets such as SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) do not fully encompass factual inconsistencies within the summarization task. Moreover, NLI datasets cover sentence-level entailment while premises in the summarization task are multi-sentence documents (Utama et al., 2022). On the other hand, NLI approaches need aggregation and, consequently, further in-domain data for training or determining a decision threshold (Laban et al., 2021). 
In addition, 6405 collecting human-annotated nonfactual summaries or document-level entailment samples is extremely expensive. Therefore, training a document-level entailment classifier on ground-truth samples is not straightforward because of the lack of data. A solution to overcome the lack of proper training data is to generate synthetic nonfactual summaries. There have been early attempts to do so using heuristics transformations, e.g., negation, entity swap, and noise injection (Kryscinski et al., 2020), that cover a limited range of possible factual inconsistencies. Recently, FALSESUM (Utama et al., 2022) leveraged a controllable text generation model to replace entity pairs (predicate, argument) in human-annotated reference summaries with new entity pairs. However, it requires extensive pre-processing, impacting the quality of generated samples and results in limited inconsistency variations. Therefore, we extend this line of research to introduce NonFactS, a data generation model to generate nonfactual summaries given a source document and a reference or random summary. We then train a binary classifier on these generated samples to evaluate factuality in document summarization. Figure 1 shows our proposed pipeline, the NonFactS generator and classifier. NonFactS is trained to complete a truncated reference summary using inputs consisting of only the source document, the truncated reference summary, and a set of random words as *Seeds*. The Seeds are sampled from the document and from the removed part of the summary. In order to generate a nonfactual summary, the *Seeds* during the inference phase contain random words from the document only. All the words appearing in the reference summary are masked in the document. Figure 2 provides a detailed overview of our generator during training and inference. The contributions of this work are the following: First, we introduce a new model to generate nonfactual summaries using a source document and a factual reference summary. Nonfactual summaries are document-level and generated without language-dependent and error-prone pre-processing steps such as entity extraction and lemmatization (see Figure 3). Second, our method significantly outperforms the state-of-the-art methods on the FALSESUM (Utama et al., 2022) and SUMMAC (Laban et al., ## 2021) Benchmarks. Third, We demonstrate that our method can still achieve high performance when human-annotated reference summaries are unavailable by using only random sentences from source documents as a substitute. Fourth, we conduct overlap, novel n-gram, and hypothesis-only analyses to compare NonFactS and FALSESUM regarding their abstractiveness and naturalness of generated summaries. ## 2 Related Work This section reviews existing methods for factuality evaluation and standard benchmarks for this task. ## 2.1 Models 2.1.1 Entity-Based Laban et al. (2021) introduce a Named Entity Recognition (NER) based method as a baseline to identify if the generated summary entities (e.g., person, location, organizations) are present in the corresponding source document. The quality of NER output significantly impacts the final performance. Dependency Arc Entailment (DAE) (Goyal and Durrett, 2020) is a more advanced model trained on a set of arcs in the dependency parse of generated outputs to classify the entailment decision for each arc with respect to the corresponding input. This approach is also significantly affected by the quality of the parser. 
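As a rough illustration of the NER-overlap idea described above, i.e., checking whether the entities mentioned in a summary also appear in its source document, the following sketch uses spaCy's off-the-shelf English pipeline. The model name, the lower-cased string matching, and the all-or-nothing decision rule are simplifying assumptions; the actual baseline in Laban et al. (2021) may differ in its details.

```python
import spacy

# Small English pipeline with a named-entity recognizer (assumed to be installed locally).
nlp = spacy.load("en_core_web_sm")

def ner_overlap_consistent(document: str, summary: str) -> bool:
    """Treat a summary as consistent only if every named entity it mentions
    also occurs verbatim (case-insensitively) in the source document."""
    doc_text = document.lower()
    summary_entities = [ent.text.lower() for ent in nlp(summary).ents]
    return all(ent in doc_text for ent in summary_entities)

document = ("Humanitarian groups in Tai expect about 4,000 refugees, "
            "U.N. spokesman Remi Dourlot said on Saturday.")
print(ner_overlap_consistent(document, "Humanitarian groups expect 4,000 refugees, Remi Dourlot says."))  # expected: True
print(ner_overlap_consistent(document, "Humanitarian groups expect 4,000 refugees, Sara Sidner says."))   # expected: False
```

A parser-based variant such as DAE replaces string matching over entities with entailment decisions over dependency arcs, which is why its accuracy hinges on the quality of the parser.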
## 2.1.2 Qag The Question Answer Generation (QAG) approach follows question generation, question answering, and answer matching steps. FEQA (Durmus et al., 2020) masks text spans (e.g., noun phrases, entities) in the summary, considers the spans as the gold answers, and then generates questions for the gold answers. From there, a Question Answering (QA) model finds answers to these questions in the source documents. F1 performance against the gold answers is considered a faithfulness score. QuestEval (Scialom et al., 2021) combines both a precision-oriented QAG method, with questions generated from the summary such as FEQA, and a recall-oriented metric, with questions generated from the source document such as SummaQA (Scialom et al., 2019). QAG cannot cover all types of factual inconsistency because it significantly depends on entities, and generated questions are mostly factoid. ![2_image_0.png](2_image_0.png) ## 2.1.3 Nli The NLI task is closely related to factuality evaluation in document summarization. However, premises and hypotheses in the existing NLI datasets such as SNLI and MNLI are sentences while factuality evaluation in document summarization assumes document-sentence pairs. Falke et al. (2019) test five NLI models and compares summaries against all sentences in their corresponding source document and assumes it is sufficient for a summary to be entailed by one source sentence. Laban et al. (2021) introduce a learnable aggregation method and show that their approach outperforms the sentence-level entailment. In general, hypotheses are required to be investigated based on multi-sentence and inter-sentence premises to be classified as entailment, contradiction, or neutral. Furthermore, while mean and max are nonparameter aggregators, learnable methods require additional training data and an in-domain validation set to choose a decision threshold. Document-level entailment pairs solve such challenges. In order to generate document-level NLI samples, Kryscinski et al. (2020) propose a series of heuristics and rule-based transformations to the sentences of source documents. They introduce a factual consistency checking model (FactCC) that is trained on source documents and the generated sentences pairs. The transformations include paraphrasing to yield semantically-equivalent sentences and negation, pronoun swap, entity swap, number swap, and noise injection to yield semanticallyvariant sentences. The rule-based nature of the FactCC dataset results in low diversity of factuality errors, and it poorly aligns with actual errors made by summarization models (Goyal and Durrett, 2021). FALSESUM (Utama et al., 2022) is a data generation pipeline to perturb human-annotated reference summaries. It replaces predicate-argument entities in reference summaries with entities from their corresponding documents. While FALSESUM automatically generates nonfactual summaries, it requires a series of input preprocessing steps (see Figure 3), including entity extraction, span corruption, and lemmatization which are error-prone and language-dependent. Very recently and concurrently, there have been additional attempts for faithful summarization by automatically generating a synthetic dataset of positive and negative references by corrupting supported reference sentences (Adams et al., 2022) and factual consistency checking by generating factually inconsistent summaries using source texts and reference summaries with key information masked (Lee et al., 2022). 
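The sentence-level NLI strategy discussed above, where a summary sentence is scored against every sentence of the source document and the maximum entailment score is kept (Falke et al., 2019), can be sketched with an off-the-shelf MNLI model as follows. The checkpoint name, its assumed label ordering, and the plain max aggregation are illustrative choices; SUMMAC's learnable aggregation and FactCC's training procedure are more involved than this.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Off-the-shelf MNLI cross-encoder; for this checkpoint the label order is
# assumed to be (contradiction, neutral, entailment).
MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).eval()

def max_entailment_score(document_sentences, summary_sentence):
    """Sentence-level NLI with max aggregation: score the summary sentence
    against each source sentence and keep the highest entailment probability."""
    scores = []
    for premise in document_sentences:
        inputs = tokenizer(premise, summary_sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs = model(**inputs).logits.softmax(dim=-1)[0]
        scores.append(probs[2].item())  # index 2 = entailment (assumed label order)
    return max(scores)

doc_sentences = [
    "The new cardinals come from all over the world.",
    "The 15 new cardinals will be installed on February 14.",
]
print(max_entailment_score(doc_sentences, "The cardinals will be installed in February."))
```

Turning these per-sentence scores into a document-level decision is exactly the step that requires extra in-domain data or a tuned threshold, which motivates training on document-level entailment pairs instead.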
![3_image_0.png](3_image_0.png)

## 2.2 Benchmarks

## 2.2.1 Falsesum

The FALSESUM benchmark standardizes four manually-annotated datasets: FactCC (Kryscinski et al., 2020), Ranksum (Falke et al., 2019), SummEval (Fabbri et al., 2021), and QAGS (Wang et al., 2020). The dataset labels are imbalanced. Therefore, performance on these datasets is measured using balanced accuracy (i.e., the average recall of the two classes), except for Ranksum, which uses Precision@1.

## 2.2.2 Summac

The SUMMAC benchmark comprises the six largest datasets standardized for factuality evaluation: CGS (Falke et al., 2019), XSF (Maynez et al., 2020), Polytope (Huang et al., 2020), FactCC (Kryscinski et al., 2020), SummEval (Fabbri et al., 2021), and FRANK (Pagnoni et al., 2021). SUMMAC also uses balanced accuracy as its primary evaluation metric.

## 3 Nonfacts Method

In order to train a classifier to evaluate the factuality of summaries, we need a large set of factual and nonfactual summaries. Reference summaries in large summarization datasets such as CNN (Hermann et al., 2015) and XSUM (Narayan et al., 2018) can be used as factual summaries, but the problem is the lack of nonfactual summaries. NonFactS takes a set of source documents $D$ and their corresponding reference factual summaries $S^+$ and aims to generate a set of nonfactual summaries $S^-$. The final goal is to train a classifier on pairs of factual and generated nonfactual summaries and their corresponding source documents. $S^-$ should be similar to the output of actual summarizers and indistinguishable from $S^+$ by surface features.

NonFactS is a text generator that takes as input $I$ the concatenation of $D$, a truncated factual summary $S^+_{truncated}$, and a list of random words *Seeds*. For training NonFactS, we set *Seeds* $= \{W_S, W_D\}$; that is, the random words consist of $n$ random words $W_S$ from $S^+_{removed} = S^+ - S^+_{truncated}$ and $m$ words $W_D$ from $D$ (see Figure 2). The model is then trained to generate $S^+$. In other words, NonFactS is trained to select true words from *Seeds* to generate a sentence (summary) given the truncated version of that sentence and its corresponding context document. The input format is the following:

$$I = D \,\text{</s>}\, S^{+}_{truncated} \,\text{</s>}\, \text{Seeds}$$

where </s> is the separator token. To force the model to generate nonfactual sentences ($S^-$) at inference time, *Seeds* are selected only from $D$ (*Seeds* $= \{W_D\}$), and all the words appearing in $S^+$ are also masked in $D$. The reason for including $S^+_{truncated}$ in the input is to make $S^-$ harder to distinguish from $S^+$. We set the length of $S^+_{truncated}$ to half the length of $S^+$; it can be either the first or the last half of the full sentence. In addition, our initial experiments showed that if *Seeds* contains only true words, the resulting $S^-$ can be of low quality, as the model has to complete $S^+_{truncated}$ using all of the words, which can be completely irrelevant to $S^+_{truncated}$. Therefore, we include more words than needed in *Seeds* to force the model to select more suitable words. Note that *Seeds* contains only half of the words in $S^+_{removed}$, to encourage the model to use the context information in $D$. The words are shuffled, and the set does not contain stop words.

We use BART-base (Lewis et al., 2020) as our generator model and the CNN summarization dataset as our training dataset. The training set has more than 287k samples, from which we randomly choose 50k samples for the inference phase. We split summaries into sentences, which results in about 900k training pairs (document, sentence).
We use a batch size of 40 samples and a learning rate of 3e10−5, and train the model for one epoch on 2 NVIDIA TITAN X Pascal GPUs (12GB memory) Document: Thousands on Saturday fled the area in southwestern Ivory Coast where attacks left seven U.N. peacekeepers and eight civilians dead, according to a U.N. official. ... Humanitarian organizations reported Saturday they were expecting about 4,000 people in Tai, said Remi Dourlot, a spokesman for the U.N. Office for the Coordination of Humanitarian Affairs. ... U.N. Operation in Cote d'Ivoire and Ivory Coast troops have increased their presence in the area, Dourlot said Saturday. ... Reference Factual Summary: Humanitarian groups expect 4,000 refugees in one camp, a U.N. official says. Half Summary + Seeds: xhumanitarian groups expect 4,000 refugees in </s> understood + accountable + Ivoire + attacks + included + west + expecting + seven + volunteers + armed + occurred + Dourlot + Cote + reasons Generated NonFactual Summary: Humanitarian groups expect 4,000 refugees in Cote d'Ivoire, U.N. spokesman says. Document: For the second time during his papacy, Pope Francis has announced a new group of bishops and archbishops set to become cardinals - and they come from all over the world. ... That doesn't mean Francis is the first pontiff to appoint cardinals from the developing world, though. Reference Factual Summary: The 15 new cardinals will be installed on February 14. Half Summary + Seeds: be installed on February 14. </s> canonized + reach + Kean + number + like + pontiff Generated NonFactual Summary: The new pontiff will be installed on February 14. Document: Rebels in Tripoli furiously hunting for signs of longtime Libyan leader Moammar Gadhafi are exploring a network of tunnels and bunkers built beneath his massive compound. CNN's Sara Sidner got a peek at the passageways Friday. She dubbed it "Gadhafi's inner sanctum." ... Reference Factual Summary: CNN's Sara Sidner sees another world in a tunnel below Tripoli. Half Summary + Seeds: world in a tunnel below Tripoli. </s> extend + walked + underground + shelf + occurred + thought + apparently + passages + air + recently Generated NonFactual Summary: Rebels are exploring underground passages around the world in a tunnel below Tripoli. Document: Criminals who file fraudulent tax returns by stealing people's identities could rake in an estimated 26 billion... But in testimony before Congress last year, National Taxpayer Advocate Nina Olson said those filters "inevitably block large numbers of proper refund claims" since there "is no easy way to distinguish proper claims from improper ones." In testimony prepared for Tuesday's hearing, Deputy IRS Commissioner Steven Miller said the agency cannot stop all identity theft. ... Reference Factual Summary: The Treasury's estimate is the first detailed analysis of the ongoing problem. Half Summary + Seeds: the Treasury's estimate is the first </s> detects + numbers + billion + 6 + cars + Security + agency + recently + Congress + 5 Generated NonFactual Summary: The Treasury's estimate is the first to be presented to Congress by the agency. Table 1: Examples of NonFactual summaries generated by the NonFactS generator. Documents are truncated for visibility. Note, Reference Factual Summary is not an input for the model and presented for comparison. 
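To make the input construction of Section 3 (and the "Half Summary + Seeds" format shown in Table 1) concrete, the following sketch assembles a training-time and an inference-time input of the form $I = D \,\text{</s>}\, S^{+}_{truncated} \,\text{</s>}\, \text{Seeds}$. Whitespace tokenization, the tiny stop-word list, the fixed number of distractor words, and the `<mask>` token are simplifications for illustration and not necessarily what the released implementation does.

```python
import random

STOP_WORDS = {"the", "a", "an", "of", "in", "on", "to", "and", "is", "are", "says", "say"}

def content_words(text):
    return [w for w in text.split() if w.lower() not in STOP_WORDS]

def build_input(document, summary, training=True, n_distractors=8, seed=0):
    """Assemble I = document </s> truncated summary </s> Seeds (word-level sketch)."""
    rng = random.Random(seed)
    words = summary.split()
    half = len(words) // 2
    truncated, removed = words[:half], words[half:]

    if training:
        # Seeds: half of the removed summary words plus distractor words from the document.
        removed_content = content_words(" ".join(removed))
        true_words = rng.sample(removed_content, (len(removed_content) + 1) // 2)
        doc = document
        pool = content_words(document)
    else:
        # Inference: Seeds come from the document only; summary words are masked in it.
        true_words = []
        summary_vocab = {w.lower() for w in words}
        doc = " ".join("<mask>" if w.lower() in summary_vocab else w for w in document.split())
        pool = [w for w in content_words(document) if w.lower() not in summary_vocab]

    seeds = true_words + rng.sample(pool, k=min(n_distractors, len(pool)))
    rng.shuffle(seeds)
    return f"{doc} </s> {' '.join(truncated)} </s> {' + '.join(seeds)}"

doc = "Humanitarian groups in Tai expect about 4,000 refugees, a U.N. spokesman said on Saturday."
summ = "Humanitarian groups expect 4,000 refugees in one camp, a U.N. official says."
print(build_input(doc, summ, training=True))
print(build_input(doc, summ, training=False))
```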
| FALSESUM Benchmark Datasets | | | | | | |-------------------------------|--------|---------|------|----------|---------| | Dataset | FactCC | Ranksum | QAGS | SummEval | Overall | | MNLI | 57.9 | 51.4 | 52.7 | 48.8 | 51.4 | | ANLI⋆ | 53.9 | 55.8 | 53.5 | 49.6 | 53.2 | | DocNLI⋆ | 58.1 | 53.6 | 57.1 | 52.6 | 55.4 | | FactCC⋆ | 73.9 | 67.3 | 73.5 | 60.0 | 69.0 | | FALSESUM⋆ | 83.5 | 72.9 | 75.1 | 65.2 | 74.2 | | NonFactS 100k | 84.2 | 77.6 | 70.7 | 71.2 | 75.9 | | NonFactS⋆ 100k | 86.2 | 77.8 | 72.5 | 72.3 | 77.2 | Table 2: FALSESUM benchmark (Utama et al., 2022). ⋆: training dataset is augmented with the MNLI dataset. for about one day. Table 1 shows four nonfactual summaries generated by the NonFactS generator. To evaluate the factuality of generated summaries, we choose ROBERTa (Liu et al., 2020) and ALBERT (Lan et al., 2020) as our default classification models and fine-tune the models on a balanced dataset consisting of generated nonfactual summaries, reference factual summaries, and context documents (S = {S +, S−}, D). ## 4 Experiments 4.1 Benchmark Results We evaluate NonFactS on two factuality evaluation benchmarks, FALSESUM and SUMMAC. Performance is measured using Balanced Accuracy (BA): $$B A={\frac{1}{2}}({\frac{T P}{T P+F N}}+{\frac{T N}{T N+F P}})$$ where TP, FN, TN, and FP stand for true positive, false negative, true negative, and false positive, respectively. The majority performance for BA is 50. | SUMMAC Benchmark Datasets | | | | | | | | |-----------------------------|------|------|----------|--------|----------|-------|---------| | Model | CGS | XSF | Polytope | FactCC | SummEval | FRANK | Overall | | NER-Overlap | 53.0 | 63.3 | 52.0 | 55.0 | 56.8 | 60.9 | 56.8 | | MNLI-doc | 57.6 | 57.5 | 61.0 | 61.3 | 66.6 | 63.6 | 61.3 | | FactCC-CLS | 63.1 | 57.6 | 61.0 | 75.9 | 60.1 | 59.4 | 62.8 | | DAE | 63.4 | 50.8 | 62.8 | 75.9 | 70.3 | 61.7 | 64.2 | | FEQA | 61.0 | 56.0 | 57.8 | 53.6 | 53.8 | 69.9 | 58.7 | | QuestEval | 62.6 | 62.1 | 70.3 | 66.6 | 72.5 | 82.1 | 69.4 | | SUMMAC† ZS | 70.4 | 58.4 | 62.5 | 83.8 | 78.7 | 79.0 | 72.1 | | conv | 64.7 | 66.4 | 62.7 | 89.5 | 81.7 | 81.6 | 74.4 | | FALSESUM† | 74.7 | 51.1 | 63.7 | 87.7 | 86.8 | 80.0 | 74.0 | | NonFactS† | 81.6 | 53.2 | 60.8 | 89.3 | 87.4 | 80.1 | 75.4 | | NonFactS†† | 81.7 | 54.0 | 61.2 | 90.6 | 89.0 | 84.3 | 76.8 | Table 3: SUMMAC benchmark (Gliwa et al., 2019). †: ALBERT-xlarge and ††: ALBERT-xxlarge. Dataset R-1 R-2 R-3 R-4 R-L BERTScore (F1) ![5_image_0.png](5_image_0.png) NonFactS 58.6 49.2 43.0 36.3 58.2 56.6 FALSESUM 54.2 42.0 34.3 27.9 53.5 55.0 Table 4: ROUGE scores between positive and negative samples in NonFactS and FALSESUM. Negative samples in NonFactS are more similar to their positive pairs in terms of ROUGE scores and BERTScore. Table 5: Hypothesis-only model performance. Models are trained on 80% of the training set and evaluated on the remaining 20% samples. Lower is better. | Model | FALSESUM | NonFactS | |-----------------|------------|------------| | Majority voting | 50.00 | 50.00 | | RoBERTa-base | 69.31 | 68.53 | | RoBERTa-large | 73.54 | 72.13 | Train / Test NonFactS FALSESUM ![5_image_1.png](5_image_1.png) NonFactS - 80.73 FALSESUM 78.39 - Table 6: Comparing NonFactS and FALSESUM on identifying synthetic samples. Table 2 reports NonFactS's performance on the FALSESUM benchmark. For this benchmark, ROBERTa-base is fine-tuned on 100k factual/nonfactual samples augmented with MNLI. NonFactS outperforms overall performance on all datasets except QAGS. 
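Balanced accuracy as defined above is simply the average of the per-class recalls; the short check below computes it from the formula and, assuming scikit-learn is available, confirms it against the built-in helper. The label convention (1 = factual, 0 = nonfactual) is an arbitrary choice for the example.

```python
from sklearn.metrics import balanced_accuracy_score

def balanced_accuracy(y_true, y_pred):
    """BA = 0.5 * (TP / (TP + FN) + TN / (TN + FP)) for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

y_true = [1, 1, 1, 1, 0, 0]   # 1 = factual, 0 = nonfactual
y_pred = [1, 1, 1, 0, 0, 1]
print(balanced_accuracy(y_true, y_pred))        # 0.625
print(balanced_accuracy_score(y_true, y_pred))  # 0.625
```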
It also reports NonFactS without augmentation data and shows that it outperforms FALSESUM. QAGS categorizes non-grammatical sentences as non-consistent (nonfactual) (Wang et al., 2020). We also manually investigated QAGS and found numerous nongrammatical, but factually correct, sentences labelled as nonfactual samples. We suspect that such a phenomenon and the fact that we generate grammatically correct sentences only might be the reason for our seemingly lower performance on QAGS. Table 3 compares different models' performance on the SUMMAC benchmark. The experimental $\frac{1}{2}$ 4. setup in this benchmark does not limit the number of training samples and the size and type of the classification model. We fine-tune ALBERT (Lan et al., 2020) on our 200K balanced datasets. The SUMMAC model uses ALBERT xlarge and larger datasets (MNLI and VitaminC (Schuster et al., 2021)). NonFactS outperforms the overall balanced accuracy performance. It is also considerably better on the CGS and SumEval datasets but performs poorly on XSF. We manually investigated XSF and suspect that the poor performance of our model and other models might be because of the high frequency of non-grammatical, noisy, and nonsense sentences labelled as nonfactual (e.g., 'barron and his wife barron have moved from the white house to the white house'). It is also understandable from the NER-Overlap model, which is the secondbest model on XSF compared to the much more advanced models. In contrast to other datasets, XSF was mainly collected from the XSUM dataset. While this domain shift can be a reason for the low performance, this is not the case for our model. We experimented with NonFactS trained on our synthetic dataset based on XSUM and did not see a significant improvement. ## 4.2 Fine-Grained Analysis In order to have high quality nonfactual samples for training a binary classifier, nonfactual samples must not be identified by surface features. Table 4 compares NonFactS and FALSESUM regarding the similarity of factual and generated nonfactual samples. NonFactS's nonfactual samples are much ![6_image_0.png](6_image_0.png) m ![6_image_1.png](6_image_1.png) more similar to factual samples in terms of ROUGE scores and BERTScore (Zhang et al., 2020b). In addition, inspired by Gururangan et al. (2018) and Utama et al. (2022), we perform a hypothesis-only experiment. The classifier is trained and evaluated on only summaries without any access to the context documents. The goal is understanding to what extent the factuality of generated summaries can be determined using semantic plausibility and spurious surface features (e.g., grammatical mistakes or fluency errors). Table 5 indicates that NonFactS generated summaries are marginally better than FALSESUM generated summaries in hypothesisonly factuality evaluation. We also manually investigated 100 randomly sampled generated nonfactual summaries and found that 85% of the labels are truly labelled as nonfactual. This is almost the same as FALSESUM reported manual verification (Utama et al., 2022). We study the ability of the same classifier (ALBERT xlarge) fine-tuned on the NonFactS/FALSESUM datasets to evaluate factuality on FALSESUM/NonFactS. The rest of the variables, such as the number of training samples, are the same as our default. Table 6 indicates that NonFactS yields better performance on FALSESUM. We investigate the performance of the NonFactS factuality evaluation model based on the level of abstractiveness of summaries. 
We use different metrics to partition the lexical overlap between the summaries and their context documents. Overlap Score is defined by the multiplication of the density, i.e., the percentage of words in a summary that are present in the context document, and normalized coverage, i.e., the percentage of a summary that is a continuous fragment of the context document (Utama et al., 2022; Grusky et al., 2018). We also use the percentage of novel n-grams in summaries, i.e., the percentage of a summary n-grams that are not present in the context document. Higher values for the overlap score and lower values for percentage of novel n-grams correspond to higher overlap and more extractive summaries. Figure 4 plots NonFactS and FALSESUM re- ![7_image_0.png](7_image_0.png) garding the overlap score and percentage of novel n-grams. Both generated datasets cover more abstractive than extractive summaries. However, NonFactS contains more abstractive samples. This is evident from the higher frequency of lower overlap scores. NonFactS also has more samples with a higher percentage of novel 4-grams and trigrams, while FALSESUM covers more novel Bigrams and unigrams. To study the effect of summary extractiveness, we evaluate our model on the FALSESUM and SUMMAC benchmarks. Figure 5 indicates the higher performance of NonFactS over FALSESUM on more abstractive summaries (lower overlap scores) on both benchmarks, which is in line with more abstractive samples in NonFactS. ## 4.3 Zero Reference Analysis In this section, we consider the case in which there is no access to human-annotated reference summaries (factual summaries) for training a model to generate nonfactual summaries. This is a realistic case, for example, in a real scenario where one has no access to reference summaries in a new domain. We use randomly selected sentences from context documents as factual reference summaries corresponding to the documents. Next, we train the NonFactS generator with the same procedure explained in Section 3 to generate nonfactual summaries. Note, during the training and inference phase, we remove the randomly selected sentences from the documents to eliminate trivial performance and maintain the abstractive summarization approach. The exact number of documents (230k/50k) are used for training and inference. Documents during inference are sampled more than once to provide more samples (200k,400k,1000k) for training the classifier. To single out the model and dataset effects, we experiment with both ROBERTa and ALBERT and CNN and XSUM as training and inference datasets. The default case (presence of reference summary) is limited regarding the number of training samples for the classifier (max 400k samples). Figure 6 compares the performance of the factuality evaluation models in the presence and absence of reference summaries on the FALSESUM and SUMMAC benchmarks (see Appendix for detailed results). In both benchmarks, zero reference models reach or outperform reference models after training on 400k random factual samples and their corresponding nonfactual summaries. This superiority is much more evident in the ALBERT models. In addition, the figure shows that CNN based models performs better on both benchmarks which is to be expected as both benchmarks are consisting of more CNN based datasets. However, we see that the ALBERT models trained on CNN or XSUM random samples relatively converge together. Therefore, the effect of in-domain datasets vanishes as the model trained on more samples. 
## 5 Conclusion We introduced NonFactS, a data generation model to generate large-scale nonfactual summaries. Non- FactS only requires context documents and reference summaries as factual summaries. To evaluate factuality in document summarization, we used a binary classifier trained on a balanced dataset of factual and generated nonfactual summaries. Our model outperforms prior works on two standard benchmarks, FALSESUM and SUMMAC. Compared to previous methods, NonFactS generates nonfactual samples without requiring extensive language-dependent pre-processing steps. Also, our generated samples are more abstractive and more similar to their factual references, and therefore, it is harder to identify the samples based on spurious surface features and semantic plausibility. Additionally, we demonstrated that NonFactS is capable of generating nonfactual summaries without the need for human-annotated reference summaries by utilizing randomly selected sentences from context documents. Our experiments indicated that a classifier trained on these generated samples achieves comparable performance to a classifier trained on human-annotated samples and their generated nonfactual pairs. ## Limitations NonFactS generates grammatically correct nonfactual summaries. However, in practice, summaries can be non-grammatical, noisy, and nonsensical. This can limit the generalization of our performance in such cases. Additionally, hypothesis-only results show that a considerable number of samples are identified correctly without their context document. The reason can be the memorized knowledge in pre-trained classifiers or surface features and semantic plausibility. ## Broader Impact Our model has no direct environmental impacts, fairness or privacy considerations. However, it is important to note that it must not be used as a factchecking tool as there is a potential risk that false statements may be labelled as true. Our classifier evaluates the factuality of a summary based on a context document, and if the document is misleading, the summary can be factual based on misleading information. Additionally, NonFactS generates nonfactual summaries, which might have potential risks if misused for generating massive nonfactual summaries (claims). Addressing such risks is an open issue in the field and is not specific to our work. ## Acknowledgments This research was partly supported by Athora Netherlands and the Netherlands Organization for Scientific Research (NWO) under project number VI.C.192.080. ## References Griffin Adams, Han-Chin Shing, Qing Sun, Christopher Winestock, Kathleen McKeown, and Noémie Elhadad. 2022. Learning to revise references for faithful summarization. In *Findings of the Association for Computational Linguistics: EMNLP 2022*, pages 4009–4027, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018. Faithful to the original: Fact aware neural abstractive summarization. In *AAAI*, pages 4784–4791. Esin Durmus, He He, and Mona Diab. 2020. FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5055– 5070, Online. Association for Computational Linguistics. Alexander R. Fabbri, Wojciech Krysci ´ nski, Bryan Mc- ´ Cann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating summarization evaluation. *Transactions of the Association for* Computational Linguistics, 9:391–409. Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 2214–2220, Florence, Italy. Association for Computational Linguistics. Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum corpus: A humanannotated dialogue dataset for abstractive summarization. In *Proceedings of the 2nd Workshop on* New Frontiers in Summarization, pages 70–79, Hong Kong, China. Association for Computational Linguistics. Tanya Goyal and Greg Durrett. 2020. Evaluating factuality in generation with dependency-level entailment. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 3592–3603, Online. Association for Computational Linguistics. Tanya Goyal and Greg Durrett. 2021. Annotating and modeling fine-grained factuality in summarization. In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1449–1462, Online. Association for Computational Linguistics. Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In *Proceedings of the* 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708–719, New Orleans, Louisiana. Association for Computational Linguistics. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112, New Orleans, Louisiana. Association for Computational Linguistics. Karl Moritz Hermann, Tomás Kocisky, Edward Grefen- ` stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In *NIPS*. Dandan Huang, Leyang Cui, Sen Yang, Guangsheng Bao, Kun Wang, Jun Xie, and Yue Zhang. 2020. What have we achieved on text summarization? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 446–469, Online. Association for Computational Linguistics. Wojciech Krysci ´ nski, Nitish Shirish Keskar, Bryan Mc- ´ Cann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. arXiv preprint arXiv:1908.08960. Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332–9346, Online. Association for Computational Linguistics. Philippe Laban, Tobias Schnabel, Paul N. Bennett, and Marti A. Hearst. 2021. 
Summac: Re-visiting nlibased models for inconsistency detection in summarization. *CoRR*, abs/2111.09525. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In *International Conference on Learning Representations*. Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, and Kyomin Jung. 2022. Masked summarization to generate factually inconsistent summaries for improved factual consistency checking. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 1019–1030, Seattle, United States. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Ro{bert}a: A robustly optimized {bert} pretraining approach. Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906–1919, Online. Association for Computational Linguistics. Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In *Proceedings of the 2018* Conference on Empirical Methods in Natural Language Processing, pages 1797–1807. Artidoro Pagnoni, Vidhisha Balachandran, and Yulia Tsvetkov. 2021. Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 4812–4829, Online. Association for Computational Linguistics. Tal Schuster, Adam Fisch, and Regina Barzilay. 2021. Get your vitamin C! robust fact verification with contrastive evidence. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 624–643, Online. Association for Computational Linguistics. Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Alex Wang, and Patrick Gallinari. 2021. QuestEval: Summarization asks for fact-based evaluation. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6594–6604, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2019. Answers unite! unsupervised metrics for reinforced summarization models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3246–3256, Hong Kong, China. Association for Computational Linguistics. Prasetya Utama, Joshua Bambrick, Nafise Moosavi, and Iryna Gurevych. 2022. 
Falsesum: Generating document-level NLI examples for recognizing factual inconsistency in summarization. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2763–2776, Seattle, United States. Association for Computational Linguistics. Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the factual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5008–5020, Online. Association for Computational Linguistics. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2020a. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations. ## A Appendix Table 7 and 8 show the detailed results for comparing the performance of our model with and without access to human-annotated factual summaries (see Figure 6). | FALSESUM Benchmark Datasets | | | | | | |-------------------------------------------------------|--------|---------|------|----------|---------| | Dataset | FactCC | Ranksum | QAGS | SummEval | Overall | | ROBERTa, CNN reference summaries 100k 84.2 77.6 70.7 | 71.2 | 75.9 | | | | | 200k | 84.0 | 77.1 | 73.9 | 70.7 | 76.4 | | 400k | 81.1 | 77.9 | 73.7 | 69.7 | 75.6 | | ROBERTa, CNN random sentences 100k 63.8 54.2 50.7 | 62.7 | 57.8 | | | | | 200k | 79.1 | 60.8 | 74.2 | 70.5 | 71.2 | | 400k | 79.9 | 68.3 | 74.5 | 70.4 | 73.3 | | 1000k | 80.3 | 70.9 | 73.8 | 70.9 | 74.0 | | ALBERT, CNN reference summaries 100k 87.2 79.2 75.0 | 75.5 | 79.2 | | | | | 200k | 87.8 | 79.8 | 76.7 | 77.5 | 80.5 | | 400k | 86.2 | 79.5 | 79.8 | 78.0 | 80.9 | | ALBERT, CNN random sentences 100k 82.1 78.3 74.7 | 73.6 | 77.2 | | | | | 200k | 87.2 | 79.1 | 79.7 | 74.3 | 80.0 | | 400k | 88.0 | 79.2 | 79.8 | 76.0 | 80.7 | | 1000k | 87.0 | 78.9 | 79.4 | 76.6 | 80.5 | | ROBERTa, XSUM reference summaries 100k 60.8 51.6 56.6 | 63.7 | 58.2 | | | | | 200k | 66.0 | 55.2 | 60.5 | 63.6 | 61.4 | | 400k | 63.3 | 55.5 | 60.8 | 62.9 | 60.6 | | ROBERTa, XSUM random sentences 100k 54.3 50.7 48.1 | 52.1 | 51.3 | | | | | 200k | 61.1 | 52.4 | 53.4 | 56.3 | 55.8 | | 400k | 75.0 | 56.8 | 71.0 | 68.1 | 67.8 | | 1000k | 88.0 | 71.2 | 79.2 | 77.0 | 78.8 | | ALBERT, XSUM reference summaries 100k 76.3 60.2 71.6 | 72.2 | 70.1 | | | | | 200k | 77.0 | 61.5 | 72.8 | 72.3 | 70.9 | | 400k | 79.7 | 65.8 | 72.0 | 72.6 | 72.5 | | ALBERT, XSUM random sentences 100k 82.5 68.6 74.1 | 74.1 | 74.8 | | | | | 200k | 83.6 | 69.5 | 74.7 | 75.1 | 75.7 | | 400k | 88.1 | 71.4 | 78.6 | 78.2 | 79.1 | | 1000k | 88.0 | 71.2 | 79.2 | 77.0 | 78.8 | | SUMMAC Benchmark Datasets | | | | | | | | |-------------------------------------------------------|------|------|----------|--------|----------|-------|---------| | Dataset | CGS | XSF | Polytope | FactCC | SummEval | FRANK | Overall | | ROBERTa, CNN reference 
## ACL 2023 Responsible NLP Checklist

## A For Every Submission

✓ A1. Did you describe the limitations of your work?
Limitations section, just after the conclusion.

✓ A2. Did you discuss any potential risks of your work?
Broader Impact section, just after the Limitations section.

✓ A3. Do the abstract and introduction summarize the paper's main claims?
Abstract; Introduction (Section 1).

✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.

## B ✓ **Did you use or create scientific artifacts?**
Section 3.

✓ B1. Did you cite the creators of artifacts you used?
Section 3.

✗ B2. Did you discuss the license or terms for use and/or distribution of any artifacts?
The two datasets we used are well known in the field, publicly available, and free to use for academic and research purposes. When we publish our dataset, we will provide the licence and respect the previous licences.

✗ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
The CNN and XSUM summarization datasets are public and free to use for academic and research purposes, and we fully respect their intended use. When we publish our dataset, we will provide the licence and respect the previous licences. We consider our contribution to be compatible with the original datasets.
B4. Did you discuss the steps taken to check whether the data that was collected/used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect/anonymize it?
Not applicable. The data we use have already been checked by their authors.

B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Our data is the same as the datasets we use, and therefore we only cite the corresponding works.

✓ B6. Did you report relevant statistics like the number of examples, details of train/test/dev splits, etc. for the data that you used/created? Even for commonly-used benchmark datasets, include the number of examples in train/validation/test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3.

## C ✓ **Did you run computational experiments?**
Section 3.

✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?
Section 3.

✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 3. We stick to the default model parameters; our specific parameters are discussed there.

✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4.

C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?
Not applicable. We do not need specific packages, but when we publish our model we will specify the frameworks and other requirement versions.

## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.

D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.

D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)?
Not applicable. Left blank.

D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.

D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.

D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank.

The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.